My name is Rene Juliano M., and I have over 2 years of experience in the tech industry. I specialize in web development with JavaScript, Python, React, and Node.js, among other technologies. I hold a Bachelor of Science in Information Technology, a Master of Information Technology (MIT), and a Master of Business Administration (MBA). Notable projects I have worked on include Web Scraping & Image Downloading for Beverage E-Commerce, Automated Web Scraper for E-commerce Catalog with 40,000+ Products, Public Job Data Extraction from 700 Firms (Python + Selenium), FoodTruck Flow, and AWS Quota Manager. I am based in Balneario Barra do Sul, Brazil, and have successfully completed 11 projects while developing at Softaims.
I am a business-driven professional; my technical decisions are consistently guided by the principle of maximizing business value and achieving measurable ROI for the client. I view technical expertise as a tool for creating competitive advantages and solving commercial problems, not just as a technical exercise.
I actively participate in defining key performance indicators (KPIs) and ensuring that the features I build directly contribute to improving those metrics. My commitment to Softaims is to deliver solutions that are not only technically excellent but also strategically impactful.
I maintain a strong focus on the end-goal: delivering a product that solves a genuine market need. I am committed to a development cycle that is fast, focused, and aligned with the ultimate success of the client's business.
Main technologies
- 2 years
- 1 year
- 1 year
- 1 year
Mercado Livre
Developed a custom Python web scraper using Selenium to extract product data from a beverage e-commerce site. The scraper handled JavaScript-rendered pages, collecting over 11,800 products with details like name, code, type, size, origin, price, rating, and image URL, storing them in Supabase. It also downloaded more than 12,600 product images, as some items had multiple photos. Portfolio images show the complete dataset in Excel, the folder with all downloaded images, and the scraper's live execution in the terminal.
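Because some products carry multiple photos, the image count (12,600+) exceeds the product count (11,800+). A minimal sketch of that bookkeeping step, expanding scraped records into one download task per image, might look like the following; the field names and file-naming scheme are illustrative assumptions, not the project's actual code:

```python
from pathlib import Path
from urllib.parse import urlparse

def image_download_tasks(products, out_dir="images"):
    """Expand scraped product records into (url, local_path) download tasks.

    Each product may list several image URLs, so one product can yield
    several tasks -- which is why the image total exceeds the product total.
    """
    tasks = []
    for product in products:
        for i, url in enumerate(product.get("image_urls", [])):
            # Keep the original extension when the URL has one; fall back
            # to .jpg otherwise (an assumed default).
            ext = Path(urlparse(url).path).suffix or ".jpg"
            # Key filenames by product code so multi-photo items stay distinct.
            tasks.append((url, f"{out_dir}/{product['code']}_{i}{ext}"))
    return tasks
```

The actual downloading and Supabase storage would sit on top of a helper like this; separating "what to fetch" from "fetching" also makes the logic easy to test without network access.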
This project involved building a Python web scraper to extract product data from the e-commerce site shoppingchina.com.py. The script navigates all categories and paginated product listings, accessing each product’s page to collect structured data including title, description, price, category, image URL, and product URL. The extracted data is stored in a Supabase database, with update logic to refresh records only when changes are detected. Delays and retries ensure a stable and site-friendly process. The solution handles over 40,000 products and completes a full scrape in about 12 hours.
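The "refresh only when changes are detected" behavior described above can be sketched as a small pure function that decides, per record, whether to insert, update, or skip. The record shape and watched fields here are assumptions for illustration:

```python
def classify_record(scraped, existing):
    """Decide what to do with a freshly scraped product given the stored copy.

    Returns "insert" for new products, "update" when any watched field
    differs, and "skip" when nothing changed -- so unchanged rows never
    touch the database.
    """
    if existing is None:
        return "insert"
    watched = ("title", "description", "price", "category", "image_url")
    if any(scraped.get(f) != existing.get(f) for f in watched):
        return "update"
    return "skip"
```

Keeping this decision separate from the Supabase calls makes the update logic trivially unit-testable and keeps writes to the minimum, which matters on a 40,000-product catalog scraped over ~12 hours.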
Built a Python scraper using Selenium to extract public job listings for executive sales roles across 700 companies. Data fields included company name, job title, location, work type, and job URLs, sourced from publicly accessible job boards on a professional network. The data was stored in Supabase and exported to Excel, resulting in a structured dataset of 5,000+ listings. Screenshots show the script in action and a sample of the final Excel summary. This project sharpened my skills in automation, data structuring, and real-world data handling.
FoodTruck Flow is a real-time analytics dashboard developed for a U.S.-based client to monitor food truck operations across San Francisco. I built interactive visualizations using Qlik Sense mashups, React, and Node.js (NestJS), fully integrated with Snowflake, RESTful APIs, and Docker for streamlined development and deployment. The system features heatmaps, geospatial data overlays, and Sankey diagrams to display truck activity by day, time, location, and cuisine type. The project enabled the client to gain instant insights into vendor behavior, customer trends, and operational hotspots across the city.
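The day-by-hour heatmap described above rests on a simple aggregation: bucketing activity events by weekday and hour. In the real project this would run in Snowflake SQL feeding the Qlik mashup; the Python sketch below is only an illustration of the shape of that aggregation, with assumed field names:

```python
from collections import defaultdict
from datetime import datetime

def activity_heatmap(events):
    """Count events per (weekday, hour) cell from ISO-8601 timestamps.

    weekday follows datetime.weekday(): Monday == 0 ... Sunday == 6.
    The resulting dict maps (weekday, hour) -> event count, i.e. one
    cell of the heatmap grid.
    """
    grid = defaultdict(int)
    for e in events:
        ts = datetime.fromisoformat(e["timestamp"])
        grid[(ts.weekday(), ts.hour)] += 1
    return dict(grid)
```

A front end then only has to color cells by count, which keeps the heavy lifting in the data layer rather than in the browser.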
AWS Quota Manager is a serverless solution I designed to monitor and control AWS cloud usage for a research lab. Built with AWS Lambda and Python, it enables administrators to assign monthly quotas to users, sends real-time alerts as thresholds are reached, and automatically shuts down resources when quotas are exceeded. The system integrates AWS CloudWatch, S3, and Amazon RDS with MariaDB to track consumption and generate detailed reports. This solution significantly improved operational efficiency and prevented budget overruns, showcasing my skills in cloud automation and cost optimization.
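The core decision the Lambda makes, alert as thresholds are crossed, shut down past 100%, can be sketched as a pure function. The specific alert levels here are assumed values, not the project's actual configuration:

```python
def quota_action(used, quota, alert_levels=(0.5, 0.8, 0.9)):
    """Return the action for a user's monthly spend against their quota.

    "shutdown" once the quota is exceeded, "alert_<pct>" for the highest
    threshold crossed so far, "ok" otherwise.
    """
    ratio = used / quota
    if ratio >= 1.0:
        return "shutdown"  # quota exceeded -> stop the user's resources
    crossed = [lvl for lvl in alert_levels if ratio >= lvl]
    if crossed:
        return f"alert_{round(max(crossed) * 100)}"  # e.g. "alert_80"
    return "ok"
```

In the deployed system this decision would be driven by CloudWatch metrics and the actions carried out via the AWS SDK; isolating it as a function keeps the enforcement policy testable without touching cloud resources.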
Bachelor of Science in Information Technology, Systems Analysis, with a focus on software development.
1997 – 2002
Master of Information Technology (MIT), Postgraduate in Application Development for Mobile Devices.
2020 – 2022
Master of Business Administration (MBA) in Big Data.
2020 – 2022