My name is Aqeel S. and I have over 11 years of experience in the tech industry. I specialize in Google Analytics 4, Machine Learning, Keras, Deep Learning, and TensorFlow, among other technologies. I hold a Master's degree. Notable projects I've worked on include Item Demand Forecasting, Extensive EDA in Python, a DBT Marketing Data Engineering Project, LSTM Stock Prediction, and a Bank Loan Approval System. I am based in Lahore, Pakistan, and have successfully completed 5 projects while developing at Softaims.
Information integrity and application security are my highest priorities in development. I implement robust validation, encryption, and authorization mechanisms to protect sensitive data and ensure compliance. I am experienced in identifying and mitigating common security vulnerabilities in both new and existing applications.
My work methodology involves rigorous testing—at the unit, integration, and security levels—to guarantee the stability and trustworthiness of the solutions I build. At Softaims, this dedication to security forms the basis for client trust and platform reliability.
I consistently monitor and improve system performance, utilizing metrics to drive optimization efforts. I’m motivated by the challenge of creating ultra-reliable systems that safeguard client assets and user data.
Main technologies: 11 years, 9 years, 1 year, 10 years
Relocation: potentially possible
Previous employer: Systems Limited
This is an item demand forecasting project built in Python with the NeuralProphet model: a predictive model that forecasts future demand for a particular item based on the historical data of a national bakery. NeuralProphet is a deep-learning-based forecasting model that uses neural networks to capture complex patterns and dependencies in the data. The project required skills in Python programming, data analysis, and machine learning. Key deliverables included data cleaning and preprocessing, model training and validation, generating forecasts for future demand, and data insights and analytics. The model's accuracy and performance were evaluated using metrics such as mean absolute error (MAE) and root mean squared error (RMSE). The project's ultimate goal was to provide actionable insights to decision-makers and help optimize inventory management and supply chain operations.
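As a rough sketch of the evaluation step described above (assuming plain NumPy arrays of actual and predicted demand; the figures below are illustrative, not project data):

```python
import numpy as np

def mae(actual, predicted):
    """Mean absolute error: average magnitude of forecast errors."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))

def rmse(actual, predicted):
    """Root mean squared error: penalizes large misses more heavily."""
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

# Illustrative daily demand figures (not from the bakery dataset)
actual = [120, 135, 128, 150]
predicted = [118, 140, 125, 145]
print(mae(actual, predicted))   # → 3.75
print(rmse(actual, predicted))  # slightly larger than MAE, as expected
```

Because RMSE squares the errors before averaging, it is always at least as large as MAE; comparing the two gives a quick sense of whether a few large misses dominate the forecast error.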
This project contains an exploratory data analysis (EDA) of housing price data, conducted using Python and its data analysis libraries, including Pandas, NumPy, and Matplotlib. The dataset contains 79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa, such as the number of bedrooms, square footage, and location. The analysis includes data cleaning, feature engineering, and data visualization; the aim is to explore the dataset, understand the relationships between variables, identify patterns, and uncover insights. The EDA is structured into the following sections:
1. Loading and understanding the data
2. Data cleaning and preprocessing
3. Exploratory data analysis
4. Feature engineering
5. Summary and conclusions
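A minimal sketch of the cleaning, exploration, and feature-engineering steps might look like this (the tiny DataFrame below is a stand-in, not the Ames data, although GrLivArea and SalePrice are typical column names for it):

```python
import numpy as np
import pandas as pd

# Stand-in housing frame (values are made up for illustration)
df = pd.DataFrame({
    "SalePrice": [208500, 181500, 223500, 140000],
    "GrLivArea": [1710, 1262, 1786, 1717],
    "Bedrooms":  [3, 3, 3, np.nan],
})

# Data cleaning: fill the missing bedroom count with the median
df["Bedrooms"] = df["Bedrooms"].fillna(df["Bedrooms"].median())

# Exploration: correlation between living area and sale price
corr = df["GrLivArea"].corr(df["SalePrice"])

# Feature engineering: derive price per square foot
df["PricePerSqFt"] = df["SalePrice"] / df["GrLivArea"]
```

The same three moves, imputing missing values, checking pairwise relationships, and deriving ratio features, scale directly from this toy frame to the full 79-variable dataset.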
This project aims to provide a clean and structured database for marketing data that can be easily queried and analyzed. It leverages the data modelling and transformation tool DBT (Data Build Tool) to build a scalable and maintainable data pipeline.

.
├── dbt_project.yml    # DBT project configuration file
├── models             # Directory to store DBT models
│   ├── raw              # Base models
│   ├── transformations  # Transformation models
│   ├── reporting        # Reporting models
│   └── README.md        # Documentation for models
├── README.md          # Project documentation
└── schema.yml         # YAML file defining the database schema

dbt_project.yml: the configuration file for the DBT project. It includes project-level settings such as the project name, version, and required dependencies.

models: this directory contains the DBT models that transform and load the data. It is divided into three subdirectories: raw, transformations, and reporting. The raw directory contains the base models that define the database schema, such as tables and views. The transformations directory contains the models that transform and load the marketing data into Google BigQuery. The reporting directory contains the models built on top of the transformation models.

README.md: documentation for the entire project.

schema.yml: defines the database schema; it specifies the tables, columns, and data types.

Data visualization: after transforming Facebook data using DBT Cloud, data visualization is an effective way to gain insights and communicate the results to stakeholders. With a clean and structured database, it becomes easier to create visualizations that highlight trends, patterns, and anomalies in the data. Tools like Tableau, Power BI, or Python libraries such as Matplotlib or Seaborn can produce interactive and dynamic visualizations that enable users to explore the data in different ways.
Data visualization can also help to identify areas for improvement, such as targeting underperforming campaigns or identifying new opportunities for growth. Overall, data visualization is a crucial component of any data engineering project, as it helps to bridge the gap between technical and non-technical stakeholders, and enables better decision-making based on data-driven insights.
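To illustrate the schema.yml file described above, a fragment for a single reporting model might look like this (the model and column names here are hypothetical, not taken from the actual project; `not_null` and `unique` are standard DBT schema tests):

```yaml
version: 2

models:
  - name: fb_campaign_spend        # hypothetical reporting model
    description: "Daily ad spend aggregated per Facebook campaign"
    columns:
      - name: campaign_id
        description: "Facebook campaign identifier"
        tests:
          - not_null
          - unique
      - name: spend_usd
        description: "Total daily spend in USD"
        tests:
          - not_null
```

Declaring tests alongside column definitions lets `dbt test` validate the pipeline on every run, which is what makes the reporting layer trustworthy for the visualizations discussed above.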
I have used LSTM models to predict stock prices. This is an example of a model built on a publicly available dataset; in the real-world version, I used my organization's dataset along with short-selling and insider transaction data.
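The core data-preparation step for such a model, slicing a price series into fixed-length input windows with next-step targets, can be sketched as follows (a minimal NumPy illustration with made-up prices, not the project code):

```python
import numpy as np

def make_windows(prices, lookback):
    """Slice a 1-D price series into (samples, lookback) inputs and
    next-step targets: the supervised form an LSTM trains on once a
    feature axis is added, e.g. X[..., np.newaxis]."""
    prices = np.asarray(prices, dtype=float)
    X, y = [], []
    for i in range(len(prices) - lookback):
        X.append(prices[i:i + lookback])   # the last `lookback` prices
        y.append(prices[i + lookback])     # the price to predict next
    return np.array(X), np.array(y)

prices = [101.0, 102.5, 101.8, 103.2, 104.0, 103.5]
X, y = make_windows(prices, lookback=3)
# X.shape == (3, 3): three overlapping windows of three prices each;
# y holds the price immediately following each window.
```

Each training sample therefore sees only the recent past, which is what lets the LSTM's recurrent state learn temporal dependencies instead of memorizing absolute positions in the series.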
The dataset I was provided is based on LendingClub, a US-based lending company that offers various types of loans to its customers. When someone applies for a loan, the company has to decide whether to approve it based on the applicant's profile and financial history. Two types of risk are associated with this decision:
1. If the applicant is likely to repay the loan, then not approving it results in a loss of business for the company.
2. If the applicant is not likely to repay the loan, i.e. is likely to default, then approving it may lead to a financial loss for the company.
Our target variable is loan_status, which takes four values:
1. Fully Paid (the applicant has fully paid the loan)
2. Charged Off (the applicant has not paid the installments on time for a long period, i.e. has defaulted on the loan)
3. Does not meet the credit policy. Status: Fully Paid
4. Does not meet the credit policy. Status: Charged Off
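A common first modelling step with this target is collapsing the four loan_status values into a binary repaid/defaulted label. A sketch with toy rows (not the actual LendingClub data; the credit-policy variants map to the same outcome as their plain counterparts):

```python
import pandas as pd

# Toy rows standing in for the LendingClub loan_status column
df = pd.DataFrame({"loan_status": [
    "Fully Paid",
    "Charged Off",
    "Does not meet the credit policy. Status: Fully Paid",
    "Does not meet the credit policy. Status: Charged Off",
]})

# 1 = repaid, 0 = defaulted: any status ending in "Fully Paid"
# counts as repaid, regardless of the credit-policy prefix.
df["repaid"] = df["loan_status"].str.contains("Fully Paid").astype(int)
```

Whether to keep the credit-policy rows at all is a modelling choice; folding them in as above maximizes training data, while dropping them keeps the sample consistent with current policy.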
Master's degree in Data Science
2016-01-01 to 2020-01-01