My name is Vadym M. and I have over 11 years of experience in the tech industry. I specialize in the following technologies: Appium, Software QA, Automated Testing, Selenium, QA Engineering, and more. I hold a Master of Computer Applications (MCA) and a Foundation degree. Some of the notable projects I've worked on include Quality Assurance for an AI-powered media marketplace, Automation testing, Manual and automation testing, Automation testing of software, and Testing and software development for a SaaS project, among others. I am based in Kyiv, Ukraine. I've successfully completed 10 projects as a developer at Softaims.
I'm committed to continuous learning, always striving to stay current with the latest industry trends and technical methodologies. My work is driven by a genuine passion for solving complex, real-world challenges through creative and highly effective solutions. Through close collaboration with cross-functional teams, I've consistently helped businesses optimize critical processes, significantly improve user experiences, and build robust, scalable systems designed to last.
My professional philosophy is truly holistic: the goal isn't just to execute a task, but to deeply understand the project's broader business context. I place a high priority on user-centered design, maintaining rigorous quality standards, and directly achieving business goals—ensuring the solutions I build are technically sound and perfectly aligned with the client's vision. This rigorous approach is a hallmark of the development standards at Softaims.
Ultimately, my focus is on delivering measurable impact. I aim to contribute to impactful projects that directly help organizations grow and thrive in today’s highly competitive landscape. I look forward to continuing to drive success for clients as a key professional at Softaims.
Main technologies
Appium, Software QA, Automated Testing, Selenium, QA Engineering (6 to 11 years of experience)
📝 BEFORE
Upon beginning the project, I found close to 100 E2E tests that ran for an hour and around 200 Postman + Newman smoke tests for API testing. The project lacked documentation, and the extent of test coverage was unclear. My assignment was to update the backend and frontend test frameworks for the client's project.

⁉️ CHALLENGES AND SOLUTIONS
— Less than 10% test coverage ➡️ Developed 2000 E2E automation tests with Allure reports, achieving almost 90% functional test coverage (the reporter wiring is sketched below).
— 1 hour execution time for nearly 100 E2E tests ➡️ Optimized tests and established standard methods, reducing execution time to approximately 30 minutes despite a significant increase in test cases.
— Diverse testing requirements across the frontend and backend ➡️ Utilized tools like Postman and Runscope (BlazeMeter) for API and contract testing. Developed test plans and checklists for frontend testing. Combined manual and automated testing for comprehensive coverage.
— Lack of documentation ➡️ Created specific test plans, checklists, and over 1000 bug reports. Implemented post-test reporting.
— Unclear test results without a reporting system ➡️ Implemented a reporting system for clear and concise test reports, facilitating informed decision-making.
— Ensuring continuous improvement and adherence to standards ➡️ Introduced testing standards and best practices, such as a code freeze for release stability and standardized bug reporting, plus regular assessments and a focus on continuous learning and development within the QA team.

⚙️ TECHNOLOGIES, TOOLS, AND APPROACHES
The project leveraged a range of technologies such as WebdriverIO, Postman, and Burp Suite, alongside practices such as a code freeze for release stability, bug reporting standards, and continuous improvement. These initiatives boosted application quality and user experience by providing a stable release platform and minimizing the risk of new issues.

✅ RESULTS
— Migration to the latest technologies proceeded smoothly, with minimal production issues reported by users; any encountered problems were mostly isolated incidents. Customer requirements were effectively met, and the project continues to receive ongoing support.
— During our collaboration, I developed 2200 TMS test cases covering all application roles, along with 2000 E2E tests featuring an Allure report deployed on the server. Test execution now takes about 30 minutes. Additionally, there are 1700 E2E API tests with contract testing.
— Code freezes have established a stable foundation for releases, reducing the risk of introducing new issues. Standardized bug reports have improved communication between myself and developers, resulting in quicker bug resolution.
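The Allure reporting mentioned in this case study hangs off the WebdriverIO test-runner configuration. The following is a minimal sketch of that wiring plus one spec, assuming a Mocha-based setup; the base URL, spec paths, selectors, and credentials are hypothetical and not taken from the client's project.

```typescript
// wdio.conf.ts — assumed reporter wiring; baseUrl and spec paths are placeholders.
export const config: WebdriverIO.Config = {
  specs: ['./test/specs/**/*.ts'],
  capabilities: [{ browserName: 'chrome' }],
  baseUrl: 'https://staging.example.com', // hypothetical test environment
  framework: 'mocha',
  reporters: [
    ['allure', {
      outputDir: 'allure-results',           // raw results, rendered later with `allure generate`
      disableWebdriverStepsReporting: true,  // hide low-level commands to keep reports readable
    }],
  ],
};

// test/specs/login.e2e.ts — illustrative smoke check; selectors are hypothetical.
import { browser, $, expect } from '@wdio/globals';

describe('login', () => {
  it('authenticates a valid user', async () => {
    await browser.url('/login');
    await $('#email').setValue('qa@example.com');
    await $('#password').setValue('not-a-real-password');
    await $('button[type="submit"]').click();
    await expect($('nav.dashboard')).toBeDisplayed();
  });
});
```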
📝 BEFORE
The project was in the development phase, and prior to my involvement, testing was conducted by members of the development team, including the project manager, through manual regression testing. There was no test documentation for the project.

⁉️ CHALLENGES AND SOLUTIONS
My primary responsibility was automating regression testing for cluster deployment and application connections, focusing on default and custom apps. Additionally, I prioritized identifying and addressing critical errors during deployment.
— The source code contained numerous bugs and code smells ➡️ Utilized SonarScanner to update repositories, eliminate code smells, and fix major bugs.
— Simultaneous deployment of identical applications on multiple clusters was problematic ➡️ Identified and reported the simultaneous-deployment issue to developers for resolution.
— Cluster deployment times varied based on cloud providers (AWS, Azure, GCP, IBM) ➡️ Implemented two pipelines and pytest tags to facilitate weekly testing of approximately 50 deployment combinations. Adjusted launch scripts to take the day of the week and GMT time into account, ensuring minimal test coverage for each parameter set as per the client's request (the scheduling idea is sketched below).

⚙️ TECHNOLOGIES, TOOLS, AND APPROACHES
Testing included manual regression, smoke testing, and automatic regression analysis via SonarScanner. Python and the Selenium framework were chosen for test automation to ensure efficient and reliable tests across diverse environments. PyCharm served as the Python IDE for comfortable coding, debugging, and testing. DBeaver facilitated interaction with various database management systems, aiding testing and debugging. Postman was used to test and validate API interactions, along with AWS and GCP services for deployment, testing, and project development.

🎯 FEATURES OF THE PROJECT
The project relied on exploratory testing due to the absence of access to key documents such as high-level design documents and project requirements documents, as well as use cases or user stories. Although automated scenarios weren't formally documented, they were effectively integrated into the project. Test scenarios were designed to be self-contained to maintain independence from database state and potential UI changes, enabling context-agnostic testing and ensuring test stability throughout development stages.

✅ RESULTS
— Testing now covers 90% of the system's functionality, enabling quicker and more reliable assessment at various development stages. This has bolstered the project's testing infrastructure and broadened its capabilities.
— The introduction of automated testing for regression scenarios, integrated into a CI/CD pipeline within GitLab, has fortified the project's testing framework and enhanced its overall functionality.
— Deployment durations across different clusters have been optimized based on the cloud provider, resulting in faster and more effective deployment processes for the client.
— Despite the absence of comprehensive high-level and project requirement documents, the project adeptly executed exploratory testing, demonstrating adaptability and efficient testing methodology within a dynamic development environment.
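In the actual project the weekly rotation of deployment combinations was driven by pytest tags and pipeline schedules in Python; the TypeScript sketch below only illustrates the scheduling idea (map the GMT weekday to a subset of providers and emit the tags for that run). The provider-per-day plan and tag names are assumptions, not the client's real configuration.

```typescript
// Illustrative only: spread ~50 provider/app deployment combinations across the week
// so each scheduled run stays small enough to finish overnight.

type Provider = 'aws' | 'azure' | 'gcp' | 'ibm';

// Hypothetical weekly rotation: which providers get exercised on which GMT weekday.
const weeklyPlan: Record<number, Provider[]> = {
  1: ['aws'],           // Monday
  2: ['azure'],         // Tuesday
  3: ['gcp'],           // Wednesday
  4: ['ibm'],           // Thursday
  5: ['aws', 'azure'],  // Friday: cross-provider combinations
};

// Returns the test tags a CI pipeline would pass to the runner for "now" (GMT).
export function tagsForToday(now: Date = new Date()): string[] {
  const weekday = now.getUTCDay();              // 0 = Sunday ... 6 = Saturday
  const providers = weeklyPlan[weekday] ?? [];  // weekend runs are skipped in this sketch
  return providers.map((p) => `deploy_${p}`);   // e.g. equivalent to `pytest -m "deploy_aws"` in the real suite
}
```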
📝 BEFORE
The product was launched without undergoing testing. The project manager performed smoke testing, while developers handled unit and integration tests.

⁉️ CHALLENGES AND SOLUTIONS
I was responsible for developing regression test cases and automating them for the platform.
— Video and audio stream testing automation was complex and resource-intensive ➡️ Opted for manual testing to gain better control over video and audio content testing.
— Automation complexity, especially for quizzes ➡️ Developed a stack of methods in a single file, each tailored to a specific quiz question type, automatically determining the interaction (see the sketch below).
— User interaction variability during quizzes ➡️ Generated and randomized data using specialized methods to exercise the software across diverse conditions.

⚙️ TECHNOLOGIES, TOOLS, AND APPROACHES
Testing was automated in TypeScript with the WebdriverIO framework, using the Page Object Model for efficient code organization. Regression testing was documented in Google Sheets, while smoke testing was done manually. GitHub Actions improved collaboration between testers and developers by automating testing and integrating with other tools, enhancing problem identification and resolution for high product quality and efficient development.

🎯 FEATURES OF THE PROJECT
The project involved testing diverse user tests and quizzes, posing challenges for automation. To address this, I devised a specialized file enabling testing of complex interactions, such as drag-and-drop actions, drawing lines, multiple choice, and essay responses. Tests were carefully chosen to cover positive and negative scenarios, and results were rigorously verified to ensure product quality and reliability.

✅ RESULTS
— 200 bug reports were logged in Jira, demonstrating meticulous attention to detail and ensuring the stability and dependability of the project.
— Prior to release, 10-15 non-critical bugs were discovered and fixed, including issues related to video playback, security, and course editing. This proactive approach significantly improved the product's quality and customer satisfaction.
— Out of 200 test cases, 126 were automated, improving efficiency and consistency in testing. This automation enabled faster issue detection and contributed to overall product refinement.
— Usability testing was conducted to gauge user experience, pinpoint interface issues, and improve the product's user-friendliness and efficacy.
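A minimal sketch of the "one method per question type" idea with randomized answers, written against WebdriverIO-style element calls; the question types, selectors, and data-type attribute are hypothetical and do not reflect the project's actual markup or page objects.

```typescript
// quiz-actions.ts — illustrative dispatch table: one handler per quiz question type.
import { $, $$ } from '@wdio/globals';

type QuestionType = 'multipleChoice' | 'essay' | 'matchLine';

const handlers: Record<QuestionType, () => Promise<void>> = {
  // Pick a random option so repeated runs exercise different answers.
  multipleChoice: async () => {
    const options = await $$('.question .option');
    const pick = options[Math.floor(Math.random() * options.length)];
    await pick.click();
  },

  // Type a randomized essay answer to vary input length and content.
  essay: async () => {
    const text = `Automated answer ${Math.random().toString(36).slice(2, 10)}`;
    await $('.question textarea').setValue(text);
  },

  // Drag from one anchor to another to simulate drawing a matching line.
  matchLine: async () => {
    const source = await $('.question .anchor-left');
    const target = await $('.question .anchor-right');
    await source.dragAndDrop(target);
  },
};

// Read the rendered question's type and let the matching handler interact with it.
export async function answerCurrentQuestion(): Promise<void> {
  const type = (await $('.question').getAttribute('data-type')) as QuestionType;
  await handlers[type]();
}
```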
📝 BEFORE
No tester was assigned to the project, so there was no test documentation. Test coverage needed to be established and recorded in a structured way. My objective was to automate and thoroughly test all functionality from the ground up.

⁉️ CHALLENGES AND SOLUTIONS
— Lack of a tester and test documentation ➡️ Provided skilled automation QA, ensuring comprehensive test coverage from scratch.
— Ensuring the application meets requirements across different browsers ➡️ Implemented autotests for multiple browsers, ensuring compatibility and consistency (see the configuration sketch below).
— Testing the application in a clean environment ➡️ Utilized Docker with Docker Compose for isolated test environments.

⚙️ TECHNOLOGIES, TOOLS, AND APPROACHES
I used JavaScript with WebdriverIO for automated web interface tests. GitHub stored and managed the test code, streamlining collaboration and version control. Allure Report generated informative test progress reports. Docker provided isolated test environments, reducing conflicts and simplifying testing across different environments.

✅ RESULTS
— 80% of the functionality is covered by tests.
— A testing framework was created for the project, including a set of testing tools, test data, and test scripts.
— Automated tests are written according to the client's precise instructions and requirements.

💡 IMPLEMENTATION STEPS
1. Onboarding: Studied the application's features and functionality with test automation in mind.
2. Test plan: Developed a checklist for systematic testing, especially for new features.
3. Communication: Open collaboration with the client refined the checklist through continuous feedback.
4. Testing approaches: Conducted exploratory, checklist-based, cross-browser, and functional testing.
5. Reporting: Established result-reporting processes and implemented CI to validate code functionality before integration.
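A sketch of how the cross-browser runs could be declared in the WebdriverIO configuration, assuming the Dockerized browsers are reachable through a Selenium-style hub started by Docker Compose; the hostname, port, browser set, and spec paths are assumptions, and the Compose file itself is not shown.

```typescript
// wdio.conf.ts (excerpt) — assumed cross-browser setup: the same specs run against
// Chrome and Firefox containers exposed by the Docker Compose stack.
export const config: WebdriverIO.Config = {
  specs: ['./test/specs/**/*.js'],
  // Hypothetical Selenium-style endpoint published by Docker Compose.
  hostname: 'localhost',
  port: 4444,
  path: '/wd/hub',
  maxInstances: 2, // run both browsers in parallel
  capabilities: [
    { browserName: 'chrome' },
    { browserName: 'firefox' },
  ],
  framework: 'mocha',
  reporters: [['allure', { outputDir: 'allure-results' }]],
};
```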
📝 BEFORE
There were no testing procedures, and the project lacked accompanying test documentation; additional software development was also needed.

⁉️ CHALLENGES AND SOLUTIONS
Tasked with backend software development, I was also responsible for enhancing the existing unit tests so the scrapers would operate efficiently. Along the way, I encountered challenges that required additional effort.
— The Cypress framework's architecture hindered storing dynamic data effectively ➡️ Adopted a Singleton-instance approach for efficient dynamic data management, ensuring global accessibility (see the sketch below).
— Extensive app integration led to prolonged testing durations ➡️ Proposed API testing, which reduced overall test execution time and minimized instability.
— Certain web scrapers suffered from slow execution due to data volume and website/API issues ➡️ Integrated new synchronous and asynchronous (aiohttp) libraries to optimize web application performance.
— Parallelizing and speeding up scraper operations was imperative ➡️ Introduced and began implementing a proxy service for improved performance.

⚙️ TECHNOLOGIES, TOOLS, AND APPROACHES
New functionality is thoroughly tested using automated and manual methods, employing tools like JavaScript, Cypress, and GitHub Actions for seamless integration. Jenkins handles automated builds and deployments to the test and production environments. The backend and crawler architecture rely on Python and MongoDB. Front-end development focuses on JavaScript and the Vue.js framework, alongside rigorous unit testing to uphold reliability and product quality.

🎯 FEATURES OF THE PROJECT
Testing with Cypress posed challenges that demanded careful navigation and creative solutions due to certain limitations of the framework. Each Cypress test required creating and then deleting a user on the external Keycloak web resource. On the development side, I focused on enhancing the app's features, including the successful implementation of an innovative function suggested by our specialist: it allows users to import a BOM file and discover new items with characteristics similar to their selections, significantly enhancing the user experience and product coverage.

✅ RESULTS
— The tests now cover 95% of the functionality, greatly improving project reliability and quality assurance.
— Approximately 50 defects were discovered, aiding error identification and elimination and demonstrating the effectiveness of the testing and the resulting improvement in software quality.
— UI testing gave the customer confidence that their software's interface functions correctly, ensuring smooth interaction and a strong user experience.
— Back-end development targets enhancing existing features and introducing new ones to fulfill user needs.
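A minimal sketch of the Singleton approach for holding dynamic data (for example, the Keycloak user created before a run) so it stays globally accessible across Cypress specs; the class, field names, and the cy.login command in the usage comment are hypothetical, not the real suite's code.

```typescript
// runtime-data.ts — a minimal Singleton sketch for sharing dynamic values between tests.
class RuntimeData {
  private static instance: RuntimeData;
  private values = new Map<string, unknown>();

  private constructor() {} // prevent direct construction; always go through get()

  static get(): RuntimeData {
    if (!RuntimeData.instance) {
      RuntimeData.instance = new RuntimeData();
    }
    return RuntimeData.instance;
  }

  set(key: string, value: unknown): void {
    this.values.set(key, value);
  }

  read<T>(key: string): T | undefined {
    return this.values.get(key) as T | undefined;
  }
}

export const runtimeData = RuntimeData.get();

// Usage sketch inside a spec (hypothetical custom command cy.login):
// before(() => { runtimeData.set('keycloakUser', { name: `qa-${Date.now()}` }); });
// it('logs in as the generated user', () => {
//   const user = runtimeData.read<{ name: string }>('keycloakUser');
//   cy.login(user!.name);
// });
```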
Master of Computer Applications (MCA), in English
2006 – 2012
Foundation degree in Computer Science
2004 – 2006