My name is Danyal Z. and I have over 11 years of experience in the tech industry. I specialize in the following technologies: Python, Data Mining, Web Crawling, Robotic Process Automation Software, and API Testing, among others. I hold a Bachelor of Computer Science (BCompSc) degree. Some of the notable projects I’ve worked on include Korean Market Crawlers (100+ Markets), Nike Korea Scraper (Python + Playwright), Beseller Market Scraper (Python + Playwright), LinkedIn Jobs Scraper (Python + Playwright), and the Distributed Application Manager. I am based in Karachi, Pakistan, and I've successfully completed 28 projects while developing at Softaims.
My passion is building solutions that are not only technically sound but also deliver an exceptional user experience (UX). I constantly advocate for user-centered design principles, ensuring that the final product is intuitive, accessible, and solves real user problems effectively. I bridge the gap between technical possibilities and the overall product vision.
Working within the Softaims team, I contribute by bringing a perspective that integrates business goals with technical constraints, resulting in solutions that are both practical and innovative. I have a strong track record of rapidly prototyping and iterating based on feedback to drive optimal solution fit.
I’m committed to contributing to a positive and collaborative team environment, sharing knowledge, and helping colleagues grow their skills, all while pushing the boundaries of what's possible in solution development.
Main technologies: Python, Data Mining, Web Crawling, Robotic Process Automation, API Testing (11 years in tech overall).
Previous company: Systems Limited.
I have built a web scraping framework based on Python libraries such as Playwright and aiohttp. It supports Unicode and non-Unicode text and can scrape both authentication-based websites (via a browser controlled through Playwright's Chromium) and non-authentication-based websites. It scrapes multiple pages concurrently and saves the information in spreadsheets. It can also bypass CAPTCHAs or work around websites' anti-scraping mechanisms (e.g., IP blocking, deliberately slow responses), and it produces detailed log output.
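The concurrency model described above can be sketched with asyncio: a semaphore caps how many pages are fetched in parallel, and a retry loop with exponential backoff handles slow or blocking responses. This is a minimal illustration, not the framework itself; `fetch_page` is a stand-in for the real aiohttp/Playwright fetch logic.

```python
import asyncio

MAX_CONCURRENCY = 5  # cap on pages scraped at once

async def fetch_page(url: str) -> str:
    # Stand-in for an aiohttp GET or a Playwright page.goto();
    # here we only simulate network latency.
    await asyncio.sleep(0.01)
    return f"<html>content of {url}</html>"

async def fetch_with_retry(url: str, sem: asyncio.Semaphore,
                           retries: int = 3, backoff: float = 0.05) -> str:
    async with sem:
        for attempt in range(retries):
            try:
                return await fetch_page(url)
            except OSError:
                # Back off before retrying, e.g. when a site throttles the IP
                await asyncio.sleep(backoff * (2 ** attempt))
        raise RuntimeError(f"giving up on {url}")

async def scrape_all(urls: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENCY)
    return await asyncio.gather(*(fetch_with_retry(u, sem) for u in urls))

pages = asyncio.run(scrape_all([f"https://example.com/p{i}" for i in range(10)]))
print(len(pages))  # 10
```

`asyncio.gather` preserves input order, so results line up with the URL list even though fetches complete out of order.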
I developed a scraper for Nike Korea that collects product data from the Men, Women, and Kids sections, including all categories and subcategories. This scraper is based on Playwright. The program also takes arguments for skipping specific categories or subcategories, giving flexibility in what to scrape. It supports batching, where the batch size defines how many product pages are scraped asynchronously at once. This prevents disconnection or lag from server overload and reduces the risk of a ban. Results are saved in clean, structured CSV files for each category and subcategory.
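The batching idea is simple to sketch: product-page URLs are split into fixed-size groups, and each group is scraped asynchronously before moving to the next. The function below is an illustrative sketch of that chunking step (the URLs are placeholders, not real Nike Korea endpoints).

```python
from typing import Iterator

def batches(items: list[str], batch_size: int) -> Iterator[list[str]]:
    """Yield successive fixed-size batches of product-page URLs."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

urls = [f"https://www.nike.com/kr/product/{n}" for n in range(7)]
for group in batches(urls, batch_size=3):
    # In the real scraper, each group is scraped asynchronously at once.
    print(len(group))  # 3, 3, 1
```

Keeping the batch size small bounds the number of simultaneous connections, which is what prevents server overload and lowers the ban risk mentioned above.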
I developed a scraper for Beseller (https://beseller.net/) that collects product data across all available categories and subcategories. This scraper is built with Python and Playwright. The program allows passing arguments to skip specific categories or subcategories, giving full control over what to scrape. It supports batching, where the batch size defines how many product pages are scraped asynchronously at once. This prevents server overload, avoids disconnection, and lowers the risk of a ban. The scraped data is saved in clean, structured CSV files for each category and subcategory.
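The skip-category arguments described above could be exposed with `argparse`; the flag names below (`--skip-categories`, `--batch-size`) are illustrative assumptions, not the scraper's actual CLI.

```python
import argparse

parser = argparse.ArgumentParser(description="Category scraper (sketch)")
parser.add_argument("--skip-categories", nargs="*", default=[],
                    help="category slugs to exclude from the crawl")
parser.add_argument("--batch-size", type=int, default=5,
                    help="product pages scraped asynchronously at once")

# Simulate a command line instead of reading sys.argv
args = parser.parse_args(["--skip-categories", "electronics", "toys",
                          "--batch-size", "10"])
print(args.skip_categories)  # ['electronics', 'toys']
print(args.batch_size)       # 10
```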
I developed a LinkedIn jobs scraper to automate and speed up job data collection. The project includes two scripts. The first script takes a keyword or company name, location, and output filename as arguments, then scrapes job listings from LinkedIn’s search page and saves them in a CSV file with the columns: title, company, location, and link. The second script handles bulk scraping by reading company names and URLs from a file and spawning multiple pages asynchronously. It accepts a filename, starting row, and ending row as arguments and saves each company’s jobs into its own CSV file.
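Writing the scraped listings out with the four columns named above is straightforward with the standard `csv` module; this sketch uses an in-memory buffer and a made-up job row for demonstration.

```python
import csv
import io

FIELDS = ["title", "company", "location", "link"]

def write_jobs_csv(jobs: list[dict], fh) -> None:
    """Write job rows with the fixed column order used by the scraper."""
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(jobs)

buf = io.StringIO()
write_jobs_csv([{"title": "Data Engineer", "company": "Acme",
                 "location": "Karachi, Pakistan",
                 "link": "https://www.linkedin.com/jobs/view/123"}], buf)
print(buf.getvalue().splitlines()[0])  # title,company,location,link
```

`csv.DictWriter` also quotes fields containing commas (common in locations like "Karachi, Pakistan"), which keeps the output parseable.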
The Distributed Application Manager is a full-stack task orchestration and automation platform I built using FastAPI, WebSockets, and React. It centralizes the execution of Python-based automation tools (web crawlers, registration automation, image processors, Excel utilities, etc.), allowing users to configure, run, and monitor tasks through a clean web interface. Designed for internal company deployment, it supports real-time log streaming, task scheduling, and parallel execution to manage multiple workflows seamlessly. It transforms scattered scripts into a unified, scalable automation hub.
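At its core, a platform like this maps tool names to async entry points and runs them in parallel. The registry pattern below is a hedged, dependency-free sketch of that idea; the actual platform dispatches such tasks behind FastAPI endpoints and streams logs over WebSockets, and the task names here are invented for illustration.

```python
import asyncio
from typing import Awaitable, Callable

# Registry mapping tool names to their async entry points
TASKS: dict[str, Callable[[], Awaitable[str]]] = {}

def register(name: str):
    """Decorator that adds an automation tool to the registry."""
    def wrap(fn):
        TASKS[name] = fn
        return fn
    return wrap

@register("crawler")
async def run_crawler() -> str:
    await asyncio.sleep(0.01)  # placeholder for real crawling work
    return "crawler: done"

@register("excel_export")
async def run_excel_export() -> str:
    await asyncio.sleep(0.01)  # placeholder for real Excel processing
    return "excel_export: done"

async def run_all() -> list[str]:
    # Parallel execution of every registered tool
    return await asyncio.gather(*(fn() for fn in TASKS.values()))

results = asyncio.run(run_all())
print(results)
```

Centralizing execution behind one registry is what lets a web UI enumerate available tools, launch them, and attach log streams without knowing each script's internals.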
High school degree in Pre-Engineering, 2011–2013
Bachelor of Computer Science (BCompSc) in Software Engineering, 2014–2018
Computer Science (other), 2008–2010
Computer Science (other), 2000–2008