At Softaims, I’ve found a workplace that thrives on collaboration and purposeful creation. The work we do here is about more than technology—it’s about transforming ideas into results that matter. Every project brings a mix of challenges and opportunities, and I approach them with a mindset of continuous learning and improvement.

My philosophy centers on three principles: clarity, sustainability, and impact. Clarity means designing systems that are understandable, adaptable, and easy to maintain. Sustainability is about building with the future in mind, ensuring that the work we do today can evolve gracefully over time. And impact means creating something that genuinely improves how people work, connect, or experience the world.

One of the most rewarding aspects of working at Softaims is the diversity of thought that every team member brings. We share insights, question assumptions, and push each other to think differently. It’s this culture of curiosity and openness that drives the quality of what we produce. Every solution we deliver is a reflection of that shared dedication. I’m proud to contribute to projects that not only meet client expectations but also exceed them through thoughtful execution and attention to detail. As I continue to grow in this journey, I remain focused on delivering meaningful outcomes that align technology with purpose.
Main technologies
14 years
9 years
9 years
3 years
Possible
I built RescueVision, an end-to-end, multi-modal AI system to accelerate search and rescue (SAR) operations. I trained a YOLOv8n object detection model on over 2,200 aerial images, achieving a Precision of 85.3%, Recall of 76.2%, and mAP50 of 0.833. The system also includes a private RAG pipeline built with LangChain and ChromaDB. The entire project was developed with a robust microservices architecture, integrating Flask-based AI services with a React/TypeScript frontend.
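For illustration, here is a minimal sketch of the kind of YOLOv8n training and validation run described above, assuming the ultralytics package; the dataset config name "sar_aerial.yaml" and the hyperparameters are placeholders, not the actual RescueVision setup.

```python
# Illustrative only: train and validate a YOLOv8n detector with ultralytics.
# "sar_aerial.yaml" is a hypothetical dataset config, not project data.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")      # start from the pretrained nano checkpoint
model.train(
    data="sar_aerial.yaml",     # YAML listing image paths and class names
    epochs=100,
    imgsz=640,
)
metrics = model.val()           # computes precision, recall, mAP50 on the val split
print(metrics.box.map50)
```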
I built an end-to-end, full-stack platform to transform unstructured customer feedback into actionable business intelligence. I developed an advanced Retrieval-Augmented Generation (RAG) pipeline using Sentence-Transformers for embeddings and a persistent ChromaDB knowledge base (10,000+ items). The platform features Intelligent Document Processing with LLM-based chunking and Tesseract OCR for comprehensive data ingestion (.pdf, .docx). The Groq LPU-powered LLaMA 3 model provides high-speed conversational analysis (average time-to-first-token <150ms).
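A hedged sketch of the core embed-and-retrieve step, assuming the sentence-transformers and chromadb packages; the embedding model, collection name, and sample feedback are illustrative, not taken from the platform.

```python
# Sketch: embed feedback into a persistent ChromaDB store, then query it.
# Model, paths, and sample text are placeholders.
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")          # assumed embedding model
client = chromadb.PersistentClient(path="./feedback_db")    # persistent on-disk store
collection = client.get_or_create_collection("customer_feedback")

feedback = ["Checkout keeps timing out on mobile", "Love the new dashboard layout"]
collection.add(
    ids=[f"fb-{i}" for i in range(len(feedback))],
    documents=feedback,
    embeddings=embedder.encode(feedback).tolist(),
)

hits = collection.query(
    query_embeddings=embedder.encode(["payment problems"]).tolist(),
    n_results=2,
)
print(hits["documents"])
```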
I built a Prescriptive Maintenance system that goes beyond prediction by using a Retrieval-Augmented Generation (RAG) pipeline to prescribe solutions via a conversational AI assistant. It includes an end-to-end MLOps lifecycle, a Human-in-the-Loop (HITL) framework, and Docker containerization. I achieved an RMSE of 15.82 and 95% Recall. The system's private RAG pipeline uses Ollama, ChromaDB, and LangChain.
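A minimal sketch of what such a private, local RAG chain can look like with LangChain, ChromaDB, and Ollama; the model name, store path, retriever settings, and example question are assumptions, not project code.

```python
# Minimal local RAG chain: ChromaDB retriever + Ollama LLM via LangChain.
# Assumes langchain, langchain-community, and a running Ollama server with
# a llama3 model pulled; names and paths are placeholders.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA

embeddings = OllamaEmbeddings(model="llama3")
store = Chroma(persist_directory="./maintenance_db", embedding_function=embeddings)

qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama3"),
    retriever=store.as_retriever(search_kwargs={"k": 4}),   # top-4 maintenance passages
)
print(qa.invoke({"query": "Unit 42 shows rising vibration. What action is recommended?"}))
```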
I built AuraScanAI, an end-to-end computer vision system demonstrating a full MLOps lifecycle for vehicle damage assessment. I custom-trained a Vision Transformer (ViT) on over 15,500 images, fine-tuning a vit_base_patch16_224 model to achieve a best validation loss of 248.27. The system features a professional MLOps workflow using Docker and Git LFS for model management, and a full-stack architecture with a Flask/PyTorch API on Hugging Face Spaces and a React/TypeScript frontend on Vercel. The backend also includes a Business Rule Engine for severity classification and repair costs.
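For context, a compact fine-tuning loop for vit_base_patch16_224 with timm and PyTorch is sketched below; the class count, learning rate, and data loader are assumed placeholders rather than the AuraScanAI configuration.

```python
# Sketch of fine-tuning a pretrained ViT for damage classification with timm.
# Hyperparameters and the DataLoader are placeholders.
import timm
import torch
from torch import nn, optim

device = "cuda" if torch.cuda.is_available() else "cpu"
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=6).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.AdamW(model.parameters(), lr=1e-4)

def train_one_epoch(loader):
    """Run one pass over a DataLoader yielding (image_batch, label_batch)."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```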
I developed a full-stack, end-to-end AI agent capable of answering complex questions with up-to-date, sourced information. The project uses a modern AI agent architecture, achieving a 100% task success rate and 67% faster responses by leveraging a robust, orchestrated Retrieval-Augmented Generation (RAG) workflow with a local LLM (Phi-3).
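As a rough sketch of the retrieve-then-generate step such an agent performs, the snippet below uses the ollama Python client with a locally pulled phi3 model; fetch_sources is a hypothetical stand-in for the real retrieval and orchestration layer.

```python
# Illustrative single agent step: gather sources, then ask a local Phi-3 model
# to answer strictly from them. `fetch_sources` is a hypothetical placeholder
# for the project's real retrieval/orchestration layer.
import ollama

def fetch_sources(question: str) -> list[str]:
    # Placeholder: the real agent would call its search/RAG backend here.
    return ["(retrieved passage 1)", "(retrieved passage 2)"]

def answer(question: str) -> str:
    context = "\n".join(fetch_sources(question))
    response = ollama.chat(
        model="phi3",
        messages=[
            {"role": "system", "content": "Answer only from the provided sources and cite them."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["message"]["content"]

print(answer("What are the latest guidelines on this topic?"))
```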
Bachelor of Science (BS) in Electrical and Electronics Engineering
2010 – 2014