My name is Mahendra O. and I have over 3 years of experience in the tech industry. I specialize in Data Science, Machine Learning, Deep Learning, Generative AI, and Python. I hold a Bachelor of Technology (BTech) degree. Notable projects I've worked on include: a production-grade RAG system with LlamaIndex and Azure AI Search, a LangGraph multi-agent router (RAG + SQL + general Q&A), an advanced agentic RAG system, LLM fine-tuning, and a cost-effective earnings call transcript LLM summarizer. I am based in Tadikonda, India, and have successfully completed 14 projects while developing at Softaims.
I am a dedicated innovator who constantly explores and integrates emerging technologies to give projects a competitive edge. I possess a forward-thinking mindset, always evaluating new tools and methodologies to optimize development workflows and enhance application capabilities. Staying ahead of the curve is my default setting.
At Softaims, I apply this innovative spirit to solve legacy system challenges and build greenfield solutions that define new industry standards. My commitment is to deliver cutting-edge solutions that are both reliable and groundbreaking.
My professional drive is fueled by a desire to automate, optimize, and create highly efficient processes. I thrive in dynamic environments where my ability to quickly master and deploy new skills directly impacts project delivery and client satisfaction.
Main technologies
[Experience-by-technology table: entries of 3 years, 2 years, 1 year, and 1 year, plus one marked "potentially possible"; the technology names did not survive extraction.]
Fractal Analytics
This advanced production RAG system uses Docling for accurate extraction and chunking of unstructured content, together with Azure OpenAI chat and text-embedding models. It retrieves documents from Azure AI Search using hybrid HNSW vector search, filtering by folder (relative_path) and metadata (file_name). A ChatEngine answers queries using only the retrieved context, supported by Memory for long sessions. The system auto-detects relevant folders, dynamically constructs retrievers with hybrid search and reranking, logs activity, and filters sources by a score threshold to produce accurate, clickable sources that respect page numbers.
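The source-filtering step above can be sketched in plain Python. This is an illustrative stand-in, not the production code: the threshold value, field names (`file_name`, `page`, `score`), and sample hits are assumptions.

```python
# Minimal sketch of score-threshold source filtering: keep only hits at or
# above a cutoff, deduplicated by (file_name, page) so each clickable
# source appears once. Field names and the threshold are illustrative.

SCORE_THRESHOLD = 0.5  # assumed cutoff; the real system's value may differ

def filter_sources(hits, threshold=SCORE_THRESHOLD):
    """Return deduplicated sources ordered by descending score."""
    seen = set()
    sources = []
    for hit in sorted(hits, key=lambda h: h["score"], reverse=True):
        if hit["score"] < threshold:
            continue  # below threshold: drop as a low-confidence source
        key = (hit["file_name"], hit["page"])
        if key in seen:
            continue  # same file and page already cited
        seen.add(key)
        sources.append({"file_name": hit["file_name"],
                        "page": hit["page"],
                        "score": hit["score"]})
    return sources

hits = [
    {"file_name": "report.pdf", "page": 3, "score": 0.82},
    {"file_name": "report.pdf", "page": 3, "score": 0.79},  # duplicate source
    {"file_name": "notes.pdf", "page": 1, "score": 0.31},   # below threshold
]
print(filter_sources(hits))
```

Keeping the page number in each source entry is what makes the citations clickable down to the right page of the PDF.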
I built an intelligent agent system using LangGraph's rag_sql_normal_Q&A_router to route natural-language queries dynamically between a Retrieval-Augmented Generation (RAG) agent, a SQL agent, and a normal Q&A handler. This hybrid system interprets questions, identifies whether they require database queries or PDF document retrieval, and returns rich, context-aware answers. I integrated LangChain, SQL databases, and vector stores to provide accurate, explainable results, significantly improving data access and semantic understanding while reducing hallucination in answers.
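The routing decision can be sketched as follows. Note this is a deliberately simplified keyword-based stand-in: the actual system routes with a LangGraph graph and an LLM classifier, and the hint sets and handler names here are assumptions for illustration only.

```python
# Hedged sketch of the routing idea: decide whether a question should go
# to the SQL agent, the RAG agent, or a plain Q&A handler. A keyword
# heuristic stands in for the LLM-based classification used in practice.
import re

SQL_HINTS = {"table", "database", "average", "count", "sum", "rows"}
RAG_HINTS = {"document", "pdf", "report", "section", "page"}

def route_query(question: str) -> str:
    """Return the name of the handler that should answer the question."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    if words & SQL_HINTS:
        return "sql_agent"      # question looks like a database query
    if words & RAG_HINTS:
        return "rag_agent"      # question targets document content
    return "qa_handler"         # fall back to general Q&A

print(route_query("What is the average revenue in the sales table?"))  # sql_agent
print(route_query("Summarize section 2 of the PDF report"))            # rag_agent
print(route_query("What is RAG?"))                                     # qa_handler
```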
Designed and deployed an intelligent document-based chatbot powered by Azure AI Search and GPT-4o. The system enables users to ask context-specific questions over selected PDF files. I integrated hybrid search (dense vector + keyword) and a semantic reranker for high-precision retrieval. The app leverages vectorized queries using OpenAI embeddings, filters content via parent_id, and delivers grounded answers using a Retrieval-Augmented Generation (RAG) approach. The UI was built with Streamlit for fast prototyping and interaction.
Project Goals: Fine-tune the LLaMA model for instruction-based conversational AI using the Guanaco dataset, incorporating advanced techniques like QLoRA for memory efficiency and supervised fine-tuning to optimize performance. My Solution: Implemented LoRA for parameter-efficient fine-tuning, quantized the model to 4-bit precision for reduced GPU memory usage, and customized training configurations using `BitsAndBytesConfig` and cosine learning rate schedules. Successfully handled a structured dataset of instruction-response pairs to align the model for enhanced conversational tasks.
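The QLoRA setup described above can be sketched with the public transformers/peft APIs. The parameter names are real; the base model checkpoint and the hyperparameter values are assumptions, not the original project's settings, and this fragment needs a GPU and model download to actually run.

```python
# Configuration sketch only (assumed hyperparameters): 4-bit quantization
# via BitsAndBytesConfig plus LoRA adapters for parameter-efficient tuning.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, standard for QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # assumed base checkpoint
    quantization_config=bnb_config,
)

lora_config = LoraConfig(
    r=16,                                   # assumed LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only adapter weights train
```

With this setup, the frozen base model stays in 4-bit precision while only the small LoRA matrices receive gradients, which is what keeps GPU memory usage low.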
I developed an LLM-based app that summarizes earnings call transcripts into concise summaries, highlighting financial metrics and strategic insights. 1) Data Preprocessing: scraped transcripts from sources like Motley Fool using BeautifulSoup. 2) Summarization Pipeline: used Azure OpenAI's GPT-3.5-turbo with token-based chunking, applying "stuffing" for chunk summaries and "refining" for cohesion. 3) Interface & Deployment: built a Streamlit app with cost tracking and URL-based inputs. 4) Cost Optimization: monitored token usage for budget-friendly operation.
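The chunking step in the pipeline above can be sketched in plain Python. This is a simplified stand-in: whitespace-separated words substitute for model tokens, and the chunk size and overlap values are illustrative, not the project's actual settings.

```python
# Simplified sketch of token-based chunking with overlap: split a long
# transcript into pieces that fit the model's context window, overlapping
# chunk boundaries so the later "refine" pass keeps continuity.

def chunk_transcript(text, chunk_size=200, overlap=20):
    """Split text into overlapping word chunks of at most chunk_size words."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap  # advance less than a full chunk each time
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last chunk reached the end of the transcript
    return chunks

# Toy transcript of 450 "words" -> 3 overlapping chunks
transcript = " ".join(f"w{i}" for i in range(450))
chunks = chunk_transcript(transcript, chunk_size=200, overlap=20)
print(len(chunks))  # → 3
```

In the real pipeline a tokenizer-based count would replace `text.split()`, since billing and context limits are measured in tokens, not words.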
Bachelor of Technology (BTech) in Computer Science, specialized in Data Science
2020-01-01 to 2024-01-01