
Zakaria J. – AI, Machine Learning and General Development

My name is Zakaria J. and I have over 2 years of experience in the tech industry. I specialize in Computer Vision, Natural Language Processing, Machine Learning, Deep Learning, and Time Series Forecasting, among other technologies. I hold a Master's degree in Machine Learning. Notable projects I've worked on include prompt engineering for web development using LangChain and templates, face recognition with deep learning, time series analysis, fine-tuning LLaMA-2 with QLoRA, and a RAG implementation using Mistral 7B. I am based in Casablanca, Morocco, and have successfully completed 18 projects as a developer at Softaims.

My passion is building solutions that are not only technically sound but also deliver an exceptional user experience (UX). I constantly advocate for user-centered design principles, ensuring that the final product is intuitive, accessible, and solves real user problems effectively. I bridge the gap between technical possibilities and the overall product vision.

Working within the Softaims team, I contribute by bringing a perspective that integrates business goals with technical constraints, resulting in solutions that are both practical and innovative. I have a strong track record of rapidly prototyping and iterating based on feedback to drive optimal solution fit.

I’m committed to contributing to a positive and collaborative team environment, sharing knowledge, and helping colleagues grow their skills, all while pushing the boundaries of what's possible in solution development.

Main technologies

  • AI, Machine Learning and General Development

    2 years

  • Computer Vision

    1 year

  • Natural Language Processing

    1 year

  • Machine Learning

    1 year

Additional skills

  • Computer Vision
  • Natural Language Processing
  • Machine Learning
  • Deep Learning
  • Time Series Forecasting
  • LLM Prompt
  • GPT API
  • GPT Chatbot
  • Agent GPT
  • Predictive Analytics
  • Trading Automation
  • Artificial Intelligence

Direct hire

Potentially possible

Previous Company

IBM Morocco


Experience Highlights

Prompt Engineering for Web Development Using LangChain and Templates

An open-source framework that simplifies and manages the use of AI prompts in web application development. With the prompt templates provided by LangChain, developers can streamline prompt creation using dynamic inputs: LangChain's prompt template classes make it easy to build prompts with placeholders for input variables. These templates enable dynamic prompts, letting developers create more flexible and interactive web applications. By combining prompt engineering techniques with LangChain's template system, developers can enhance the user experience and functionality of their applications by harnessing AI-generated responses. LangChain is designed to work with large language models (LLMs), such as GPT-3, and makes it easy to incorporate prompt engineering practices into a web development workflow.
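The template pattern described above can be sketched in plain Python. This is a minimal illustration of the idea behind prompt templates, not LangChain's actual API; the class and its fields are hypothetical stand-ins:

```python
# Minimal sketch of the prompt-template pattern: a template string with
# named placeholders is filled with dynamic inputs at request time.
# (Hypothetical stand-in illustrating the idea behind LangChain's
# PromptTemplate, not LangChain's real API.)

class SimplePromptTemplate:
    def __init__(self, template, input_variables):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs):
        # Fail early if a declared variable is missing.
        missing = [v for v in self.input_variables if v not in kwargs]
        if missing:
            raise ValueError(f"missing input variables: {missing}")
        return self.template.format(**kwargs)

template = SimplePromptTemplate(
    template="Generate a responsive {component} in {framework} for {purpose}.",
    input_variables=["component", "framework", "purpose"],
)

prompt = template.format(
    component="navbar", framework="React", purpose="an e-commerce site"
)
print(prompt)
```

The resulting string would then be sent to an LLM; the same template can be reused with different inputs across the application.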

Face Recognition Deep Learning

A facial recognition system is a technology capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image.
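Matching a face against a database typically reduces to nearest-neighbour search over face embeddings. A minimal sketch of that matching step, with made-up 4-dimensional embeddings standing in for the output of a real deep network:

```python
import math

# Toy face-matching sketch: each face is represented by an embedding vector
# (in practice produced by a deep network); a probe face is matched to the
# closest enrolled identity if the distance falls below a threshold.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, database, threshold=0.6):
    # database maps identity -> enrolled embedding
    best_id, best_dist = None, float("inf")
    for identity, emb in database.items():
        d = euclidean(probe, emb)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

db = {
    "alice": [0.1, 0.9, 0.2, 0.4],
    "bob":   [0.8, 0.1, 0.7, 0.3],
}
print(match([0.12, 0.88, 0.22, 0.41], db))  # closest to alice's embedding
```

Returning `None` when no enrolled face is close enough is what lets the system reject unknown faces rather than force a match.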

Time Series analysis

Exploratory time series analysis developed and run in a Kaggle notebook environment.

Fine-Tune LLAMA-2 with QLoRA

QLoRA (Quantized Low-Rank Adaptation) makes it practical to fine-tune LLaMA-2 on modest hardware. The base model's weights are quantized to 4-bit precision and frozen, and small low-rank adapter matrices are trained on top, so only a tiny fraction of the parameters is updated. The workflow follows the usual fine-tuning process: preparing the dataset, setting up the environment, training the adapters, evaluating performance, and deploying the result for the target use case, while complying with the licensing terms attached to both the model and the data. Fine-tuning remains a demanding task that requires machine learning and NLP expertise as well as adequate computational resources, so the documentation published by the model's creators is the authoritative reference for the details.
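The low-rank idea at the heart of QLoRA can be shown with plain matrix arithmetic: instead of updating the full weight matrix W, two small matrices B (d×r) and A (r×k) are trained, and the effective weight becomes W + (alpha/r)·BA. A toy sketch with nested lists standing in for tensors (the numbers are made up):

```python
# Toy LoRA update: W_eff = W + (alpha / r) * B @ A, where r << min(d, k).
# Only B and A are trained; the frozen (and, in QLoRA, quantized) base
# weight W is never touched.

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_merge(W, A, B, alpha):
    r = len(A)                      # adapter rank (number of rows of A)
    delta = matmul(B, A)            # (d x r) @ (r x k) -> (d x k)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 base weight, rank-1 adapter (made-up numbers).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]                  # d x r = 2 x 1
A = [[0.5, 0.5]]                    # r x k = 1 x 2
print(lora_merge(W, A, B, alpha=1.0))
```

Because only B and A are stored and trained, the memory footprint is a fraction of full fine-tuning, which is what makes single-GPU fine-tuning of a 7B-parameter model feasible.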

RAG Implementation using Mistral 7B

This project implements Retrieval-Augmented Generation (RAG) using the Mistral 7B model. RAG is a framework that combines retrieval-based and generative approaches for natural language processing tasks such as question answering and text generation. The model consists of two main components:

  • Retriever: retrieves relevant documents or passages from a large corpus based on a given query, using information-retrieval techniques to identify the most relevant information.
  • Generator: takes the retrieved information and generates a response to the query; here this is the Mistral 7B language model, which generates text conditioned on the retrieved context.

"7B" refers to the model's size: this variant has 7 billion parameters, and parameter count is broadly correlated with a model's capacity to understand and generate text. The implementation involves configuring the retriever to fetch relevant passages, optionally fine-tuning the generator, and integrating the pipeline into an application for question answering or text generation tasks.
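The retrieve-then-generate flow can be sketched end to end. The scoring below is simple word overlap purely for illustration; a real pipeline would use dense embeddings for retrieval and pass the augmented prompt to Mistral 7B:

```python
# Minimal RAG sketch: retrieve the passage most relevant to the query,
# then build an augmented prompt for the generator (returned here as a
# string; a real pipeline would send it to an LLM such as Mistral 7B).

def retrieve(query, corpus, k=1):
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    context = "\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Mistral 7B is a 7-billion-parameter open-weight language model.",
    "RAG combines a retriever with a generator.",
    "Casablanca is a city in Morocco.",
]
query = "How many parameters does Mistral 7B have?"
print(build_prompt(query, retrieve(query, corpus)))
```

Grounding the generator in retrieved passages is what lets a RAG system answer from a private corpus and reduce hallucination compared to generation alone.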

Education

  • Kaggle

    Master's degree in Machine Learning

    2020 – 2023

Languages

  • Arabic
  • English