Applied Generative AI
Coming Soon

Unlock the power of Generative AI. Learn to build intelligent, retrieval-augmented, and tool-using systems with LLMs, transformers, and agents that power the next generation of AI-driven applications.
Duration
40+ Hours
Level
Beginner to Intermediate
Location
Onsite Training
Course Content
Foundations of LLMs
Understand the core principles behind Large Language Models and how they are built
- What are LLMs?
- Pretraining, Fine-tuning, Instruction Tuning
- Popular Model Families: GPT, Claude, LLaMA, Mistral
Transformer Architecture
Dive into the architecture powering modern generative AI
- Encoder, Decoder, and Encoder-Decoder Models
- Self-Attention vs Cross-Attention
- Positional Embeddings and Tokenization (BPE, SentencePiece)
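To make the self-attention mechanism concrete, here is a minimal NumPy sketch of scaled dot-product attention with toy dimensions (real models add multiple heads, masking, and learned layer stacks):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence.

    x: (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_head) learned projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                   # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(x, w_q, w_k, w_v)
print(out.shape, attn.shape)                  # (4, 8) (4, 4)
```

Each row of `attn` shows how much one token attends to every other token in the sequence.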
Embeddings & Representations
Learn how embeddings power similarity, search, and language understanding
- What Are Embeddings?
- Text-to-Vector Representations
- Similarity, Clustering, Semantic Search
- Using OpenAI / Hugging Face Embedding APIs
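The core operation behind semantic search is cosine similarity between embedding vectors. A library-free sketch using toy 3-dimensional vectors (real embedding APIs return hundreds to thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors chosen so that related words point in similar directions
king  = [0.80, 0.65, 0.10]
queen = [0.75, 0.70, 0.15]
apple = [0.10, 0.20, 0.90]

print(cosine_similarity(king, queen))  # high -> semantically similar
print(cosine_similarity(king, apple))  # lower -> less similar
```

Semantic search is then just "embed the query, rank documents by cosine similarity."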
Fine-Tuning & Prompt Engineering
Customize LLMs and craft effective prompts for specific tasks
- Fine-tuning vs Prompt-tuning
- LoRA, PEFT, OpenAI Fine-tuning Endpoint
- Prompt Principles: Specificity, Examples, Roles
- Prompt Patterns: Zero-shot, Few-shot, Chain of Thought
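The few-shot pattern is simply structured string assembly: a task description, a handful of worked examples, then the new input. A minimal sketch (the task and examples here are illustrative):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, new input."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")          # the model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great product, works perfectly!", "positive"),
     ("Broke after two days.", "negative")],
    "Exceeded my expectations.",
)
print(prompt)
```

Zero-shot drops the examples; chain-of-thought adds reasoning steps to each example's output.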
Optimization & Evaluation
Optimize model size and evaluate performance with real-world benchmarks
- Quantization: GGUF, bitsandbytes, llama.cpp, vLLM
- Model Evaluation: Perplexity, ROUGE, MMLU, HELM, TruthfulQA
- Benchmarking RAG and Agent Systems
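Perplexity, the most basic of these metrics, is the exponential of the average negative log-probability the model assigns to the evaluated tokens. A sketch with made-up log-probabilities:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability) over the predicted tokens.
    Lower is better: the model is less 'surprised' by the text."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

confident = [-0.10, -0.20, -0.05, -0.15]   # model assigns high probability
uncertain = [-2.30, -1.90, -2.80, -2.10]   # model assigns low probability
print(perplexity(confident))
print(perplexity(uncertain))
```

A perfect model (log-probability 0 for every token) has perplexity 1.0; random guessing over a large vocabulary pushes it into the thousands.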
Retrieval-Augmented Generation (RAG)
Build intelligent systems that combine retrieval with generation
- What is RAG?
- Vector Databases: FAISS, Chroma, Pinecone, Qdrant
- Chunking Techniques: Fixed, Semantic, Recursive
- Advanced RAG Techniques: Query Transformation, Hybrid Search, Metadata Filtering, Reranking
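The core RAG loop is: chunk documents, retrieve the chunks most relevant to the query, and feed them to the model. A dependency-free sketch using fixed-size character chunking and word overlap as a stand-in for vector similarity:

```python
def chunk_fixed(text, size=50, overlap=10):
    """Fixed-size chunking with overlap (characters here; tokens in practice)."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def retrieve(query, chunks, top_k=1):
    """Rank chunks by word overlap with the query (a real system would
    embed both and rank by cosine similarity in a vector database)."""
    q_words = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

doc = ("The capital of France is Paris. "
       "Python is a popular language for machine learning. "
       "RAG systems combine retrieval with generation.")
chunks = chunk_fixed(doc)
best = retrieve("What is the capital of France?", chunks)
print(best[0])   # the chunk mentioning the capital
```

The retrieved chunk would then be inserted into the prompt as context for the generation step.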
Caching & Latency Optimization
Reduce cost and latency by caching prompts, responses, and retrievals
- Why Cache? Cost, Rate Limits, Latency
- Prompt & Embedding Caching Strategies
- Tools: Redis, SQLite, LangChain Cache, LlamaIndex Cache, Semantic Cache
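The simplest caching strategy is exact-match: hash the prompt and reuse the stored response, paying for the LLM call only on a miss. A minimal in-memory sketch (production systems back this with Redis or SQLite, and semantic caches match on embedding similarity instead):

```python
import hashlib

class PromptCache:
    """Exact-match prompt cache keyed by a SHA-256 hash of the prompt."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, prompt):
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get_or_call(self, prompt, llm_fn):
        key = self._key(prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        response = llm_fn(prompt)    # only pay for the call on a miss
        self._store[key] = response
        return response

calls = []
def fake_llm(prompt):                # stand-in for a real (slow, paid) API call
    calls.append(prompt)
    return f"answer to: {prompt}"

cache = PromptCache()
cache.get_or_call("What is RAG?", fake_llm)
cache.get_or_call("What is RAG?", fake_llm)   # served from cache
print(cache.hits, cache.misses, len(calls))    # 1 1 1
```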
Agents & Tool-Use
Create autonomous agents that can plan, reason, and interact with tools
- What Are Agents? Autonomy and Tool Use
- Function Calling with OpenAI and LangChain
- Multi-Tool Agents and Routing Logic
- Memory: Short-term (Chat Buffer) and Long-term (Vector Stores)
- Planning and Execution with LangGraph and CrewAI
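Under the hood, function calling follows one pattern: the model emits a structured tool call (name plus JSON arguments), and application code routes it to a registered function. A framework-free sketch with stub tools (real agents get the tool-call JSON from the LLM's function-calling API):

```python
import json

TOOLS = {}   # tool registry: name -> callable

def tool(fn):
    """Decorator that registers a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    return f"Sunny in {city}"        # stub; a real tool would hit a weather API

@tool
def add(a: float, b: float) -> float:
    return a + b

def dispatch(tool_call_json: str):
    """Parse a model-emitted tool call and route it to the registered tool."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simulated model output; in practice the LLM generates this JSON
model_output = '{"name": "add", "arguments": {"a": 2, "b": 3}}'
print(dispatch(model_output))        # 5
```

Frameworks like LangChain, LangGraph, and CrewAI wrap this loop with planning, memory, and multi-step execution.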
Prerequisites
- Basic programming knowledge
- Understanding of SQL basics
- Problem-solving aptitude
Meet Your Instructors
Asser Mazin is a versatile Data Scientist with 3 years of hands-on experience in Machine Learning, Deep Learning, and Generative AI. With a strong focus on building end-to-end AI solutions and advanced analytics systems, Asser specializes in solving complex business challenges through data-driven innovation. Known for delivering scalable and impactful technologies, Asser leverages cutting-edge tools to generate actionable insights and drive meaningful outcomes across industries.
Mina is an experienced senior Data Scientist with a proven track record of leveraging Gen AI, machine learning, advanced analytics, and storytelling to drive business growth, and is passionate about solving complex problems and delivering actionable insights that empower decision-making.