This 6-week hands-on Generative AI and Agentic AI Masterclass by AiCouncil, offered in partnership with the NVIDIA Inception Program, Microsoft (Solution Partner), and AWS (Partner Network), is designed to equip you with cutting-edge skills in LLMs, prompt engineering, Retrieval-Augmented Generation (RAG), autonomous agents, and AI application development.
You’ll gain practical expertise in tools like LangChain, OpenAI, Hugging Face, Pinecone, Streamlit, and Gradio, while building real-world projects ranging from AI-powered chatbots and smart resume builders to agentic assistants and LLM-driven search engines.
On completion, you’ll receive a globally recognized certification from AiCouncil, issued in partnership with Microsoft, AWS, and NVIDIA, along with lifetime job assistance, 3 years of technical support, and career-ready skills to enter the AI workforce with confidence.
Master Generative AI Models – Understand transformers, BERT, and GPT for real-world NLP use cases like sentiment analysis and content generation.
Build Smart Chatbots – Create document-based, database-driven, and form-to-output bots with real-time query resolution.
Explore Multimodal AI – Use DALL·E and Stable Diffusion to generate stunning images from text prompts.
Develop Custom AI Agents – Learn RAG, embeddings, LangChain/Flowise, and function calling to build LLM-based assistants.
Web-Ready AI Solutions – Deploy AI agents to the cloud, embed them into web apps, and create your own real-time chat assistant for end users.
Capstone & Certification – Complete an industry-style project and earn credentials from AiCouncil, in partnership with the NVIDIA Inception Program, Microsoft (Solution Partner), and the AWS Partner Network.
Exercise 1: Text Classifier using Embeddings
Objective:
Build a basic neural network to classify movie reviews (positive/negative) using word embeddings.
Concepts Used:
Tokenization, padding, Keras Tokenizer, Embedding layer, binary classification.
Steps:
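The embedding classifier starts from tokenized, padded integer sequences. Below is a minimal stdlib sketch of that preprocessing; the helper names `build_vocab` and `texts_to_padded` are illustrative stand-ins for what Keras's `Tokenizer` and `pad_sequences` do before the sequences reach an `Embedding` layer (index 0 is reserved for padding, as in Keras).

```python
from collections import Counter

def build_vocab(texts, num_words=None):
    # Rank words by frequency; the most frequent word gets index 1
    # (index 0 is reserved for padding, mirroring Keras Tokenizer)
    counts = Counter(w for t in texts for w in t.lower().split())
    ranked = [w for w, _ in counts.most_common(num_words)]
    return {w: i + 1 for i, w in enumerate(ranked)}

def texts_to_padded(texts, vocab, maxlen):
    # Convert each review to word indices, then pre-pad with zeros
    # and truncate to maxlen (the Keras pad_sequences default behavior)
    seqs = [[vocab[w] for w in t.lower().split() if w in vocab] for t in texts]
    return [([0] * (maxlen - len(s)) + s)[-maxlen:] for s in seqs]
```

The padded integer matrix is what an `Embedding(input_dim=len(vocab) + 1, output_dim=...)` layer would consume, followed by a dense layer with a sigmoid output for the binary positive/negative label.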
Exercise 2: Character-Level Text Generator with LSTM
Objective:
Generate new names, words, or short poems using a character-level LSTM network.
Concepts Used:
Sequence modeling, one-hot encoding, LSTM architecture, text generation.
Steps:
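Before the LSTM sees anything, the text must be turned into one-hot encoded character windows with a next-character target. A small sketch of that dataset step, assuming a hypothetical helper `make_char_dataset` (the LSTM training itself would sit on top of this in Keras or PyTorch):

```python
def make_char_dataset(text, seq_len):
    # Build (input window, next character) pairs from a corpus of names/poems.
    # Each character is one-hot encoded over the sorted character vocabulary.
    chars = sorted(set(text))
    idx = {c: i for i, c in enumerate(chars)}
    X, y = [], []
    for i in range(len(text) - seq_len):
        window = text[i:i + seq_len]
        X.append([[1 if j == idx[c] else 0 for j in range(len(chars))]
                  for c in window])
        y.append(idx[text[i + seq_len]])  # target: the character after the window
    return X, y, idx
```

Generation then works by repeatedly feeding the model the last `seq_len` characters, sampling from the predicted next-character distribution, and appending the sampled character.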
Exercise 3: Fine-Tune BERT & Generate Text with GPT
Objective:
Explore transfer learning by fine-tuning a BERT model for classification and using GPT for creative text generation.
Concepts Used:
Pretraining vs. fine-tuning, transformers, prompt engineering, Hugging Face Transformers API.
Part A – BERT:
Part B – GPT:
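Both parts can be sketched with the Hugging Face Transformers `pipeline` API. The checkpoints below (`bert-base-uncased`, `gpt2`) are common defaults, not prescribed by the exercise; calling `classify` or `generate` downloads model weights, so they are defined here but not run.

```python
LABELS = {0: "negative", 1: "positive"}

def decode_label(index):
    # Map the argmax over the two classification logits back to a class name
    return LABELS[index]

def classify(text):
    # Part A: score a review with a BERT checkpoint (fine-tune it first on
    # labeled data via transformers' Trainer for real accuracy)
    from transformers import pipeline  # requires `pip install transformers`
    clf = pipeline("text-classification", model="bert-base-uncased")
    return clf(text)

def generate(prompt):
    # Part B: continue a prompt with GPT-2, a small open GPT-style model
    from transformers import pipeline
    gen = pipeline("text-generation", model="gpt2")
    return gen(prompt, max_new_tokens=40)[0]["generated_text"]
```

The contrast the exercise targets: BERT is fine-tuned (weights updated on your labels), while GPT is steered at inference time through prompt engineering alone.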
Exercise 1: FAQ-Based Document Chatbot
Objective:
Create a chatbot that can answer questions from an uploaded FAQ document or manual using semantic similarity.
Concepts Used:
Document parsing, sentence embeddings (e.g., Sentence Transformers), cosine similarity, Streamlit interface.
Steps:
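The retrieval core of this bot is just "embed the question, embed each FAQ entry, return the answer with the highest cosine similarity". A self-contained sketch follows; the bag-of-words `embed` is a toy stand-in for Sentence Transformers' `model.encode()`, and `best_answer` is a hypothetical helper name, with the Streamlit UI omitted.

```python
import math

def embed(text, vocab):
    # Stand-in embedding: a bag-of-words count vector over a fixed vocabulary.
    # The exercise replaces this with SentenceTransformer.encode().
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_answer(question, faq, vocab):
    # faq is a list of (question, answer) pairs parsed from the document;
    # return the answer whose question is most similar to the user's query
    q_vec = embed(question, vocab)
    return max(faq, key=lambda pair: cosine(q_vec, embed(pair[0], vocab)))[1]
```

With real sentence embeddings the same `max`-over-`cosine` logic works unchanged; only `embed` is swapped out.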
Exercise 2: Database-Driven Attendance Bot
Objective:
Build a chatbot that parses user queries like "Who was present on May 5?" and answers using a connected SQL database.
Concepts Used:
Natural language to SQL mapping, DB integration (MySQL/PostgreSQL), Flask API, query handling logic.
Steps:
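The natural-language-to-SQL mapping can start as a simple pattern extractor that produces a parameterized query. The sketch below is a toy mapper (it assumes questions of the form "... on <Month> <day>") and uses stdlib sqlite3 in place of MySQL/PostgreSQL; the table schema is illustrative, not from the course.

```python
import re
import sqlite3

MONTHS = {m: i + 1 for i, m in enumerate(
    ["january", "february", "march", "april", "may", "june",
     "july", "august", "september", "october", "november", "december"])}

def query_to_sql(question):
    # Toy NL->SQL mapping: find "<month-word> <day-number>" and build a
    # parameterized query (never interpolate user text into SQL directly)
    m = re.search(r"(\w+)\s+(\d{1,2})", question.lower())
    if not m or m.group(1) not in MONTHS:
        raise ValueError("could not find a date in the question")
    return ("SELECT name FROM attendance "
            "WHERE month = ? AND day = ? AND status = 'present'",
            (MONTHS[m.group(1)], int(m.group(2))))

# Demo with an in-memory database standing in for the real attendance DB
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE attendance (name TEXT, month INT, day INT, status TEXT)")
con.executemany("INSERT INTO attendance VALUES (?, ?, ?, ?)",
                [("Asha", 5, 5, "present"), ("Ravi", 5, 5, "absent")])
sql, params = query_to_sql("Who was present on May 5?")
present = [row[0] for row in con.execute(sql, params)]
```

In the full exercise, an LLM or a richer parser replaces the regex, and the query runs behind a Flask endpoint instead of inline.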
Exercise 3: Quotation Generator Bot with PDF Output
Objective:
Create a chatbot interface that takes user input through a form and generates a formatted quotation as a downloadable PDF.
Concepts Used:
Form handling, templating with Jinja2, PDF generation with ReportLab or python-docx, Streamlit/Gradio frontend.
Steps:
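The form-to-document step is template filling plus line-item arithmetic. In this sketch, stdlib `string.Template` stands in for Jinja2, and the rendered text is what the PDF step (ReportLab or python-docx) would then lay out; `build_quotation` and the template wording are illustrative.

```python
from string import Template

# A plain-text stand-in for a Jinja2 quotation template
QUOTE_TMPL = Template("Quotation for $client\n$lines\nTotal: $total")

def build_quotation(client, items):
    # items: (description, qty, unit_price) tuples collected from the form
    lines = [f"{desc}  x{qty}  @ {price:.2f}  = {qty * price:.2f}"
             for desc, qty, price in items]
    total = sum(qty * price for _, qty, price in items)
    return QUOTE_TMPL.substitute(client=client,
                                 lines="\n".join(lines),
                                 total=f"{total:.2f}")
```

A Streamlit/Gradio frontend would collect `client` and `items`, call `build_quotation`, and hand the result to the PDF writer for download.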
Exercise 1: Technical Q&A Bot from Uploaded PDFs
Objective:
Build a chatbot that can answer technical or academic questions from uploaded multi-page PDFs (e.g., research papers, manuals).
Concepts Used:
PDF parsing (PyMuPDF or pdfminer), text chunking, sentence embeddings (e.g., Sentence Transformers), vector similarity search, Streamlit interface.
Steps:
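Multi-page PDFs are too long to embed whole, so the extracted text is split into overlapping chunks before embedding; the overlap keeps answers that straddle a boundary retrievable. A stdlib sketch of that chunking step (the helper name `chunk_text` and its defaults are illustrative; PyMuPDF/pdfminer supply the raw text upstream):

```python
def chunk_text(text, chunk_size=200, overlap=50):
    # Slide a chunk_size-word window over the extracted PDF text,
    # stepping by (chunk_size - overlap) so consecutive chunks share words
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, max(len(words) - overlap, 1), step)]
```

Each chunk is then embedded (e.g., with Sentence Transformers) and indexed for vector similarity search, exactly as in the previous exercises.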
Exercise 2: Image Generator with DALL·E / Stable Diffusion
Objective:
Build an application that generates creative images from text prompts using DALL·E or open-source diffusion models.
Concepts Used:
Prompt engineering, text-to-image API (OpenAI or HuggingFace), Gradio interface, prompt variation logic.
Steps:
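The two halves of this app can be sketched separately: pure prompt-variation logic, and a thin call to an image API. The OpenAI call below assumes the official `openai` Python SDK with an `OPENAI_API_KEY` set, and `dall-e-3` as the model; the function names are illustrative.

```python
def prompt_variations(subject, styles):
    # Prompt-variation logic: fan one subject out across several art styles
    return [f"{subject}, in the style of {s}" for s in styles]

def generate_image(prompt):
    # Requires `pip install openai` and OPENAI_API_KEY in the environment;
    # returns a URL to the generated image
    from openai import OpenAI
    client = OpenAI()
    result = client.images.generate(model="dall-e-3", prompt=prompt,
                                    n=1, size="1024x1024")
    return result.data[0].url
```

An open-source alternative swaps `generate_image` for a Hugging Face `diffusers` Stable Diffusion pipeline; the Gradio frontend and `prompt_variations` stay the same.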
Exercise 3: RAG-Based Knowledge Assistant
Objective:
Build a Retrieval-Augmented Generation (RAG) chatbot that answers questions using a document knowledge base + LLMs.
Concepts Used:
Embeddings, vector databases (FAISS, Pinecone, or Chroma), document retrieval, LLM prompting with retrieved context.
Steps:
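The RAG loop is: score every chunk against the query, keep the top-k, and stuff them into the LLM prompt as context. The sketch below uses word overlap as a stand-in relevance score; a real pipeline replaces `score` with embedding similarity served by FAISS, Pinecone, or Chroma, and the helper names are illustrative.

```python
def score(query, chunk):
    # Toy relevance score: shared-word count. The exercise swaps this
    # for cosine similarity between embedding vectors in a vector DB.
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c)

def rag_prompt(query, chunks, k=2):
    # Retrieve the top-k chunks, then build the augmented prompt the LLM sees
    top = sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]
    context = "\n".join(top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The assembled prompt is what gets sent to the LLM, which keeps answers grounded in the knowledge base rather than in the model's parametric memory.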
Exercise 1: Build a Multi-Tool AI Agent with Flowise
Objective:
Create a visual AI agent using Flowise that performs multiple tasks such as content generation, question answering, and creative writing.
Concepts Used:
LangChain components via Flowise, tool integration, agent chaining, memory components, API orchestration.
Steps:
Exercise 2: Function-Calling Agent with Calculator & Python Code Interpreter
Objective:
Build a custom agent that dynamically calls specific functions (e.g., calculator, Python executor) based on user query.
Concepts Used:
Function calling in OpenAI GPT, JSON schema, code execution from Python environment, memory-based routing.
Steps:
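Function calling has two halves: a JSON schema that tells the model which tools exist, and local dispatch code that runs whatever the model asks for. A minimal sketch with a calculator tool; the schema follows OpenAI's `tools` format (it would be passed as `tools=TOOLS` to a chat completion call), and `dispatch` is an illustrative local router.

```python
import json

# Tool schema advertised to the model in OpenAI's function-calling format
TOOLS = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

def dispatch(tool_call):
    # Run the function the model requested; arguments arrive as a JSON string
    args = json.loads(tool_call["arguments"])
    if tool_call["name"] == "calculator":
        # eval with empty builtins is fine for a demo; a production agent
        # should use a proper expression parser or sandboxed executor
        return str(eval(args["expression"], {"__builtins__": {}}))
    raise ValueError(f"unknown tool {tool_call['name']}")
```

The Python code interpreter tool follows the same pattern: another schema entry plus a branch in `dispatch` that executes code in a sandbox and returns stdout to the model.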
Exercise 3: Lead Generation AI Agent with Internet Search (Flowise + LangChain)
Objective:
Create a lead generation agent that performs basic research on a person or business and drafts a custom outreach email.
Concepts Used:
LLMs, memory, function calling, search tools, prompt chaining, text summarization.
Steps:
Exercise 1: Set Up and Run a Local LLaMA 3.1 Agent
Objective:
Install and configure a local instance of the LLaMA 3.1 model and build a simple chatbot using it.
Concepts Used:
Open-source LLMs, local model deployment, inference APIs, resource management.
Steps:
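Once LLaMA 3.1 is served locally (for example via Ollama, after `ollama pull llama3.1`), the chatbot is a loop around one HTTP call. The sketch below targets Ollama's default local endpoint with only the stdlib; `ask` and `build_payload` are illustrative names, and `ask` requires the Ollama server to be running.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt, model="llama3.1"):
    # stream=False asks for one JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt):
    # Sends the prompt to the local model; no data leaves the machine
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Wrapping `ask` in a `while True: print(ask(input("> ")))` loop gives the simple chatbot; resource management mostly means picking a quantization that fits your RAM/VRAM.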
Exercise 2: Build a Privacy-Focused RAG Chatbot with Flowise + Ollama
Objective:
Combine a local LLM with a private knowledge base for document Q&A without sending data to the cloud.
Concepts Used:
Retrieval-Augmented Generation, vector databases, local embeddings, privacy in AI.
Steps:
Exercise 3: Ethical AI Scenario Analysis & Mitigation
Objective:
Analyze a given scenario involving AI bias or security threats, then propose ethical mitigation strategies.
Concepts Used:
AI ethics, bias in datasets, prompt injection, data poisoning, IP concerns.
Steps: