Generative and Agentic AI Masterclass - Internship Assured

  • Course Duration: 6 weeks (45 hours)
  • Course Mode: Instructor-Led Online Training
  • Date & Time: 16-June-2023

About The Course

This 6-week, hands-on Generative AI and Agentic AI Masterclass by AiCouncil — delivered in partnership with the NVIDIA Inception Program, Microsoft (Solution Partner), and AWS (Partner Network) — is designed to equip you with cutting-edge skills in LLMs, prompt engineering, Retrieval-Augmented Generation (RAG), autonomous agents, and AI application development.

You’ll gain practical expertise in tools like LangChain, OpenAI, Hugging Face, Pinecone, Streamlit, and Gradio, while building real-world projects ranging from AI-powered chatbots and smart resume builders to agentic assistants and LLM-driven search engines. 

On completion, you’ll receive a globally recognized certification from AiCouncil, issued in partnership with Microsoft, AWS, and NVIDIA, along with lifetime job assistance, 3 years of technical support, and the career-ready skills to enter the AI workforce with confidence.

Key Features

Instructor-Led, Interactive Training

Live, expert-led sessions for hands-on learning

Lifetime Access to Recordings

Revisit recorded classes anytime, at your convenience

Assignments & Real-Time Projects

Apply skills through practical projects after every module

Lifetime Job Assistance

Ongoing support for AI and Data Science job opportunities

3 Years of Technical Support

24/7 query resolution and technical help for 3 years

Globally Recognized Certification

Certified by AiCouncil, backed by Microsoft & AWS

Highlights

Master Generative AI Models – Understand transformers, BERT, and GPT for real-world NLP use cases like sentiment analysis and content generation.

Build Smart Chatbots – Create document-based, database-driven, and form-to-output bots with real-time query resolution.

Explore Multimodal AI – Use DALL·E and Stable Diffusion to generate stunning images from text prompts.

Develop Custom AI Agents – Learn RAG, embeddings, LangChain/Flowise, and function calling to build LLM-based assistants.

Web-Ready AI Solutions – Deploy AI agents to the cloud, embed them into web apps, and create your own real-time chat assistant for end users.

Capstone & Certification – Complete an industry-style project and earn credentials from AiCouncil partnered with NVIDIA Inception Program, Microsoft Solutions, and AWS Partner Network.


Course Agenda

  • 1 – Introduction to Generative AI & NLP
    • What is Generative AI?
    • Evolution and timeline of generative models (RNN → GAN → Transformer → Foundation Models)
    • Discriminative vs Generative models
    • What is Natural Language Processing (NLP)?
    • Key NLP applications in real life (translation, Q&A, sentiment, chatbots)
    • Ethical considerations and biases in NLP
  • 2 – Text Preprocessing & Embeddings
    • Tokenisation and integer encoding (Keras Tokenizer)
    • Handling out-of-vocabulary tokens
    • Padding and truncating sequences
    • Why embeddings are important for NLP and Generative AI
    • Hands-on: Text pre-processing and building a simple embedding-based text classifier
  • 3 – Sequence Models: RNN & LSTM
    • Basics of neural networks for sequences
    • RNN architecture and limitations (vanishing gradients, short memory)
    • LSTM internals: input, forget, output gates
    • When and why LSTMs are still relevant (e.g., music, speech, sequential prediction)
    • Hands-on: Character-level LSTM for text generation (e.g., generate names or poems)
  • 4 – Transformers: The Backbone of Modern Generative AI
    • Transformer architecture fundamentals
    • Encoder–decoder models
    • Attention and self-attention mechanisms
    • Multi-head attention and positional encodings
    • Comparison: RNN/LSTM vs Transformer models
    • Hands-on: Visualising attention weights with a pre-trained Transformer
  • 5 – BERT, GPT & Prompt Engineering
    • Introduction to BERT: Bidirectional Encoder Representations from Transformers
    • Pretraining vs. Fine-tuning in BERT
    • Parameter-efficient fine-tuning (LoRA, Adapters, Prefix-tuning)
    • Introduction to GPT (e.g., GPT-4): Autoregressive Transformers
    • Applications: summarisation, chatbots, storytelling
    • Prompt engineering: zero-shot, few-shot, and chain-of-thought (CoT) prompting
    • Hands-on: Fine-tune a BERT model for sentiment analysis or text classification
    • Generate creative text using GPT API (story generation, summarisation)

Exercise 1: Text Classifier using Embeddings

Objective:

Build a basic neural network to classify movie reviews (positive/negative) using word embeddings.

Concepts Used:

Tokenization, padding, Keras Tokenizer, Embedding layer, binary classification.

Steps:

  • Preprocessing text (lowercasing, tokenization, padding)
  • Creating word embeddings with Keras
  • Building a Sequential model in Keras
  • Training and evaluating a basic classifier on a small dataset like IMDB or a CSV-based custom dataset
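
A minimal end-to-end sketch of these steps, assuming TensorFlow/Keras is installed; the tiny inline dataset, vocabulary size, and sequence length below are placeholders for the IMDB or CSV data used in class.

```python
# Toy sentiment classifier: tokenize -> pad -> embed -> classify
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense

texts = ["a wonderful film", "boring and far too long", "great acting", "awful plot"]
labels = np.array([1, 0, 1, 0])  # 1 = positive, 0 = negative

tokenizer = Tokenizer(num_words=10000, oov_token="<OOV>")  # handles out-of-vocabulary words
tokenizer.fit_on_texts(texts)
padded = pad_sequences(tokenizer.texts_to_sequences(texts),
                       maxlen=20, padding="post", truncating="post")

model = Sequential([
    Embedding(input_dim=10000, output_dim=16),  # learn 16-dim word embeddings
    GlobalAveragePooling1D(),                   # average the word vectors per review
    Dense(16, activation="relu"),
    Dense(1, activation="sigmoid"),             # binary positive/negative output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(padded, labels, epochs=10, verbose=0)

test = pad_sequences(tokenizer.texts_to_sequences(["great film"]), maxlen=20, padding="post")
print(model.predict(test))  # probability that the review is positive
```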

Exercise 2: Character-Level Text Generator with LSTM

Objective:

Generate new names, words, or short poems using a character-level LSTM network.

Concepts Used:

Sequence modeling, one-hot encoding, LSTM architecture, text generation.

Steps:

  • Preparing character sequences from a sample corpus (e.g., names or poems)
  • Building and training a simple LSTM-based model
  • Using sampling techniques to generate new text from learned character patterns
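
A minimal character-level sketch of these steps, again assuming TensorFlow/Keras; the toy name corpus, sequence length, and sampling loop are illustrative.

```python
# Character-level LSTM: learn next-character probabilities, then sample new text
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

corpus = "anna maria lena mira nora sara tara vera "  # toy name corpus
chars = sorted(set(corpus))
char2idx = {c: i for i, c in enumerate(chars)}
idx2char = {i: c for c, i in char2idx.items()}

seq_len = 5
X, y = [], []
for i in range(len(corpus) - seq_len):
    X.append([char2idx[c] for c in corpus[i:i + seq_len]])
    y.append(char2idx[corpus[i + seq_len]])
X = np.eye(len(chars))[np.array(X)]  # one-hot encode the input sequences
y = np.array(y)

model = Sequential([
    LSTM(64, input_shape=(seq_len, len(chars))),
    Dense(len(chars), activation="softmax"),  # distribution over the next character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=200, verbose=0)

# Sample new text one character at a time from the learned distribution
seed = "anna "
for _ in range(20):
    x = np.eye(len(chars))[[char2idx[c] for c in seed[-seq_len:]]][np.newaxis]
    probs = model.predict(x, verbose=0)[0].astype("float64")
    probs /= probs.sum()  # renormalise so np.random.choice accepts the vector
    seed += idx2char[int(np.random.choice(len(chars), p=probs))]
print(seed)
```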

Exercise 3: Fine-Tune BERT & Generate Text with GPT

Objective:

Explore transfer learning by fine-tuning a BERT model for classification and using GPT for creative text generation.

Concepts Used:

Pretraining vs fine-tuning, Transformers, Prompt engineering, Hugging Face Transformers API.

Part A – BERT:

  • Load a pretrained BERT model using Hugging Face
  • Fine-tune it on a sentiment dataset or text classification dataset
  • Evaluate predictions

Part B – GPT:

  • Use OpenAI's GPT or a Hugging Face GPT model to:
    • Generate a blog intro
    • Summarize an article
    • Create a story based on a few-shot prompt
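
A condensed sketch of both parts, assuming the transformers, datasets, and openai packages plus an OpenAI API key; the dataset slice, hyperparameters, and GPT model name are illustrative choices.

```python
# Part A: fine-tune BERT on a small slice of a sentiment dataset
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb", split="train[:2000]").train_test_split(test_size=0.2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-sentiment", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
print(trainer.evaluate())  # evaluation loss on the held-out split

# Part B: generate creative text with the OpenAI API
from openai import OpenAI
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Write a two-sentence blog intro about transformers in NLP."}],
)
print(response.choices[0].message.content)
```
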
  • 6 – Basics of Chatbots & Architecture
    • What are chatbots? Rule-based vs AI-based
    • Intent recognition and dialogue flow
    • Example workflows for different business use cases
    • Introduction to chatbot development tools and libraries
  • 7 – Manual-Based Chatbots (FAQ/Document)
    • Building chatbots from FAQs and document data
    • Techniques: keyword search, cosine similarity, semantic search
    • Hands-on: Upload documents and create basic Q&A chatbot
    • Use Sentence Transformers for semantic retrieval
    • Basic interactive chatbot using Streamlit
  • 8 – Query-Driven Chatbots with Database Integration
    • Parsing user queries into SQL
    • Integration with relational (MySQL) and NoSQL (MongoDB) databases
    • Hands-on: Build an attendance or report bot that queries a live database
  • 9 – Document/Quotation Generator Bots
    • Collecting structured inputs via forms
    • Generating dynamic documents using Jinja2 (HTML templates → PDF), python-docx, ReportLab
    • Hands-on: Build a quotation/document generator chatbot
    • Create REST API endpoints using Flask
  • 10 – Practical RNN Applications
    • RNN for text classification (sentiment analysis)
    • Word-level RNN for next-word prediction
    • Hands-on: Compare outputs from RNN vs LSTM vs Transformer
    • Explore model behaviour on small vs large datasets

Exercise 1: FAQ-Based Document Chatbot

Objective:

Create a chatbot that can answer questions from an uploaded FAQ document or manual using semantic similarity.

Concepts Used:

Document parsing, sentence embeddings (e.g., Sentence Transformers), cosine similarity, Streamlit interface.

Steps:

  • Uploading a .txt or .pdf file
  • Breaking the document into chunks
  • Converting chunks into embeddings
  • Matching user queries to the most relevant chunk using cosine similarity
  • Displaying the response using a basic Streamlit chatbot interface
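
A minimal Streamlit sketch of this flow, assuming streamlit, sentence-transformers, and scikit-learn are installed; the blank-line chunking rule and model name are simple defaults you can swap out.

```python
# app.py (run with: streamlit run app.py)
import streamlit as st
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")

st.title("FAQ Document Chatbot")
uploaded = st.file_uploader("Upload a .txt FAQ or manual", type=["txt"])
if uploaded:
    text = uploaded.read().decode("utf-8")
    chunks = [c.strip() for c in text.split("\n\n") if c.strip()]  # chunk on blank lines
    chunk_embeddings = model.encode(chunks)

    query = st.text_input("Ask a question")
    if query:
        query_embedding = model.encode([query])
        scores = cosine_similarity(query_embedding, chunk_embeddings)[0]
        best = int(scores.argmax())  # most relevant chunk by cosine similarity
        st.write(f"Answer (score {scores[best]:.2f}): {chunks[best]}")
```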

Exercise 2: Database-Driven Attendance Bot

Objective:

Build a chatbot that parses user queries like "Who was present on May 5?" and answers using a connected SQL database.

Concepts Used:

Natural language to SQL mapping, DB integration (MySQL/PostgreSQL), Flask API, query handling logic.

Steps:

  • Create a sample attendance table in MySQL/PostgreSQL
  • Map user-friendly queries to SQL queries (hardcoded or via pattern matching)
  • Fetch answers from the DB
  • Build a simple Flask API to return the results
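
A self-contained sketch of this pipeline, using SQLite in place of MySQL/PostgreSQL so it runs anywhere; the regex-based query mapping, sample rows, and route name are illustrative.

```python
import re
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "attendance.db"

def init_db():
    conn = sqlite3.connect(DB)
    conn.execute("CREATE TABLE IF NOT EXISTS attendance (name TEXT, date TEXT, status TEXT)")
    if conn.execute("SELECT COUNT(*) FROM attendance").fetchone()[0] == 0:
        conn.executemany("INSERT INTO attendance VALUES (?, ?, ?)",
                         [("Asha", "2024-05-05", "present"),
                          ("Ravi", "2024-05-05", "absent")])
        conn.commit()
    conn.close()

@app.route("/ask")
def ask():
    question = request.args.get("q", "")
    # Map a user-friendly question like "Who was present on 2024-05-05?" to SQL
    match = re.search(r"present on (\d{4}-\d{2}-\d{2})", question)
    if not match:
        return jsonify({"answer": "Try: 'Who was present on YYYY-MM-DD?'"})
    date = match.group(1)
    conn = sqlite3.connect(DB)
    rows = conn.execute("SELECT name FROM attendance WHERE date = ? AND status = 'present'",
                        (date,)).fetchall()
    conn.close()
    return jsonify({"date": date, "present": [r[0] for r in rows]})

if __name__ == "__main__":
    init_db()
    app.run(debug=True)  # try GET /ask?q=Who was present on 2024-05-05?
```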

Exercise 3: Quotation Generator Bot with PDF Output

Objective:

Create a chatbot interface that takes user input through a form and generates a formatted quotation as a downloadable PDF.

Concepts Used:

Form handling, templating with Jinja2, PDF generation with ReportLab or python-docx, Streamlit/Gradio frontend.

Steps:

  • Input form for customer name, items, prices, taxes
  • Template rendering with Jinja2
  • Generate a styled PDF quotation
  • Build API with Flask and frontend with Streamlit or Gradio for user interaction
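
A minimal sketch of the templating-to-PDF step, assuming jinja2 and reportlab are installed; the template text, items, tax rate, and output file name are placeholders.

```python
from jinja2 import Template
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

quote_template = Template(
    "Quotation for {{ customer }}\n"
    "{% for item in items %}{{ item.name }}: {{ item.qty }} x {{ item.price }}\n{% endfor %}"
    "Total (incl. {{ tax_pct }}% tax): {{ total }}"
)

items = [{"name": "AI Chatbot Setup", "qty": 1, "price": 500},
         {"name": "Support (hours)", "qty": 10, "price": 40}]
subtotal = sum(i["qty"] * i["price"] for i in items)
total = round(subtotal * 1.18, 2)  # assumed 18% tax for this example

text = quote_template.render(customer="Acme Ltd", items=items, tax_pct=18, total=total)

# Write the rendered text line by line onto a PDF page
pdf = canvas.Canvas("quotation.pdf", pagesize=A4)
y = 800
for line in text.splitlines():
    pdf.drawString(50, y, line)
    y -= 20
pdf.save()
print("Wrote quotation.pdf")
```
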
  • 11 – Answering from Uploaded Documents
    • OCR and PDF processing using Tesseract & PyMuPDF
    • Extracting text and using semantic search
    • Hands-on: Build a Q&A bot that answers technical queries from uploaded PDFs
  • 12 – Image Generation with DALL·E or Stable Diffusion
    • How text-to-image models work
    • Prompt engineering for vision generation
    • Variations: image edits, inpainting, style transfer
    • Hands-on: Generate creative images from user prompts
    • Build a mini Gradio/Streamlit app for showcasing generated images
    • Deploy your app and link backend APIs
  • 13 – Chatbot Structuring & Best Practices
    • Planning chatbot scope, goals, and limitations
    • Designing conversation flows for real-world deployment
    • Conducting user testing and feedback loops
    • Branding, UI/UX for public-facing chatbot platforms
  • 14 – Introduction to Vector Databases, Embeddings & RAG
    • What are embeddings? Generation using Sentence Transformers
    • Vector stores: FAISS, Pinecone, Chroma
    • Introduction to Retrieval-Augmented Generation (RAG)
    • RAG Architecture and working
  • 15 – Embeddings in Practice
    • Generative vs Discriminative use of embeddings
    • Similarity search, recommendation systems using embeddings
    • Building hybrid pipelines combining search + generation
    • Hands-on: FAISS-based semantic search demo
    • Mini app to return semantically closest text from a knowledge base

Exercise 1: Technical Q&A Bot from Uploaded PDFs

Objective:

Build a chatbot that can answer technical or academic questions from uploaded multi-page PDFs (e.g., research papers, manuals).

Concepts Used:

PDF parsing (PyMuPDF or pdfminer), text chunking, sentence embeddings (e.g., Sentence Transformers), vector similarity search, Streamlit interface.

Steps:

  • Upload and parse PDF documents
  • Break text into chunks with overlap
  • Create embeddings and store in-memory using FAISS or Chroma
  • Accept user queries and return relevant matched content
  • Deploy the bot using Streamlit
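
A condensed sketch of the retrieval part, assuming pymupdf (imported as fitz), sentence-transformers, and faiss-cpu; the PDF file name, chunk size, and number of results are illustrative.

```python
import faiss
import fitz  # PyMuPDF
from sentence_transformers import SentenceTransformer

def chunk_text(text, size=500, overlap=100):
    # Overlapping character chunks so answers spanning a boundary are not lost
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

doc = fitz.open("manual.pdf")  # assumed sample PDF
chunks = chunk_text(" ".join(page.get_text() for page in doc))

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(chunks).astype("float32")

index = faiss.IndexFlatL2(embeddings.shape[1])  # in-memory vector store
index.add(embeddings)

query = "What is the warranty period?"
query_vec = model.encode([query]).astype("float32")
_, ids = index.search(query_vec, 3)             # top-3 most similar chunks
for i in ids[0]:
    print(chunks[i][:200], "...")
```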

Exercise 2: Image Generator with DALL·E / Stable Diffusion

Objective:

Build an application that generates creative images from text prompts using DALL·E or open-source diffusion models.

Concepts Used:

Prompt engineering, text-to-image API (OpenAI or HuggingFace), Gradio interface, prompt variation logic.

Steps:

  • Input: Text prompt from user
  • Generate image using DALL·E API or local Stable Diffusion model
  • Support for prompt templates like "A futuristic city in the style of..."
  • Interface built with Gradio for interactive experience
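
A minimal Gradio sketch calling the OpenAI Images API; the model name, image size, and style-template logic are assumptions to adapt to your own account and plan, and a local Stable Diffusion pipeline could be swapped in instead.

```python
import io
import requests
import gradio as gr
from PIL import Image
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate(prompt, style):
    # Optional prompt template: "... in the style of ..."
    full_prompt = f"{prompt}, in the style of {style}" if style else prompt
    result = client.images.generate(model="dall-e-3", prompt=full_prompt,
                                    size="1024x1024", n=1)
    image_url = result.data[0].url
    return Image.open(io.BytesIO(requests.get(image_url).content))

demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(label="Prompt"), gr.Textbox(label="Style (optional)")],
    outputs=gr.Image(label="Generated image"),
    title="Text-to-Image Demo",
)

if __name__ == "__main__":
    demo.launch()
```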

Exercise 3: RAG-Based Knowledge Assistant

Objective:

Build a Retrieval-Augmented Generation (RAG) chatbot that answers questions using a document knowledge base + LLMs.

Concepts Used:

Embeddings, vector databases (FAISS, Pinecone, or Chroma), document retrieval, LLM prompting with retrieved context.

Steps:

  • Upload multiple knowledge documents
  • Chunk, embed, and store in vector store
  • User query triggers retrieval of top-k chunks
  • Retrieved text is sent as context to LLM (e.g., GPT-4)
  • Return and display final generated answer
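
A minimal end-to-end RAG loop matching these steps, assuming sentence-transformers, faiss-cpu, and the openai package; the documents, chunk count k, and GPT model name are illustrative.

```python
import faiss
from sentence_transformers import SentenceTransformer
from openai import OpenAI

documents = ["Refunds are processed within 14 days of a return request.",
             "Premium support is available 24/7 for enterprise customers.",
             "Annual plans can be cancelled up to 30 days before renewal."]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(documents).astype("float32")
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question, k=2):
    # 1. Retrieve the top-k most relevant chunks
    q_vec = embedder.encode([question]).astype("float32")
    _, ids = index.search(q_vec, k)
    context = "\n".join(documents[i] for i in ids[0])
    # 2. Send the retrieved context plus the question to the LLM
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How long do refunds take?"))
```
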
  • 16 – Introduction to AI Agents & LLM Ecosystem
    • What are AI Agents? Core concepts
    • Understanding LLM ecosystem: GPT, Claude, Gemini, LLaMA, Mistral
    • Function calling in LLMs: How models "act" using external tools
    • Introduction to LangChain, LangGraph, Autogen, CrewAI
  • 17 – Vector Databases, Embeddings & RAG
    • Embedding generation techniques
    • Connecting to and querying Vector DBs
    • RAG for AI Agents – memory and knowledge retrieval
    • Using LangChain with FAISS, Pinecone or Chroma
  • 18 – Setting Up & Creating Your First AI Agent
    • Installing Node.js and Flowise
    • Flowise Interface for building LangChain graphs
    • Hands-on: Build a "Boss AI", Creative Writer, and Title Generator agent
    • Visualise LLM call chains in Flowise
    • Troubleshoot Flowise setup
  • 19 – AI Agent Use Cases with Function Calling
    • Agent 2: Social Media Strategy Generator
    • Agent 3: Lead Research + Personalised Email Generator
    • Agent 4: Function Calling with Calculator, Python Code Interpreter
    • Project: Build a custom AI chat assistant (multi-agent)
  • 20 – Hosting & Deploying Your AI Agents
    • Hosting on Render, Replit or cloud VMs
    • Embed into websites or run as standalone apps
    • Hands-on: UI/UX polishing using Gradio, Streamlit
    • Branding your Agent for clients or portfolio

Exercise 1: Build a Multi-Tool AI Agent with Flowise

Objective:

Create a visual AI agent using Flowise that performs multiple tasks such as content generation, question answering, and creative writing.

Concepts Used:

LangChain components via Flowise, tool integration, agent chaining, memory components, API orchestration.

Steps:

  • Set up Flowise and create a new agent
  • Add tools: LLM (OpenAI), Prompt Template, Document Loader, Output Parser
  • Build 2–3 use cases, such as a content-writing assistant (blog title + body) and a product Q&A bot over a knowledge base
  • Deploy the agent using Flowise frontend

Exercise 2: Function-Calling Agent with Calculator & Python Code Interpreter

Objective:

Build a custom agent that dynamically calls specific functions (e.g., calculator, Python executor) based on user query.

Concepts Used:

Function calling in OpenAI GPT, JSON schema, code execution from Python environment, memory-based routing.

Steps:

  • Define function schema for: calculator, Python code execution
  • Configure agent to parse user input and choose the right function
  • Examples: "What is 27% of 3800?", "Write Python code to find prime numbers from 1 to 100"
  • Show result from appropriate function and return final output
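
A sketch of the function-calling flow with the OpenAI chat API; the two tool schemas are assumptions, and the eval/exec helpers are demo-only stand-ins for a real calculator and a sandboxed interpreter.

```python
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {"type": "function", "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression",
        "parameters": {"type": "object",
                       "properties": {"expression": {"type": "string"}},
                       "required": ["expression"]}}},
    {"type": "function", "function": {
        "name": "run_python",
        "description": "Execute a short Python snippet and return the value named 'result'",
        "parameters": {"type": "object",
                       "properties": {"code": {"type": "string"}},
                       "required": ["code"]}}},
]

def calculator(expression):
    return str(eval(expression, {"__builtins__": {}}))  # demo only, not production-safe

def run_python(code):
    scope = {}
    exec(code, scope)                                    # demo only, not sandboxed
    return str(scope.get("result", "done"))

def ask(question):
    messages = [{"role": "user", "content": question}]
    response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
    message = response.choices[0].message
    if not message.tool_calls:  # model answered directly without a tool
        return message.content
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = {"calculator": calculator, "run_python": run_python}[call.function.name](**args)
    # A full agent would append this result to `messages` and call the model again
    return f"{call.function.name} -> {result}"

print(ask("What is 27% of 3800?"))
print(ask("Write Python code that stores the primes from 1 to 100 in a variable named result"))
```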

Exercise 3: Lead Generation AI Agent with Internet Search (Flowise + LangChain)

Objective:

Create a lead generation agent that performs basic research on a person or business and drafts a custom outreach email.

Concepts Used:

LLMs, memory, function calling, search tools, prompt chaining, text summarisation.

Steps:

  • Use a research tool plugin (DuckDuckGo, SerpAPI, or static dummy search for demo)
  • Prompt agent to search for a person/business
  • Extract useful data (name, role, company, recent work)
  • Generate a personalized email using GPT-4
  • Wrap in Flowise interface for input/output

  • 21 – Open-Source AI Agents with LLaMA, Mistral & Ollama
    • Pros and cons of open-source LLMs
    • Installing and running Ollama locally
    • Building LLaMA 3.1 agents on your PC
    • Private RAG chatbot using Flowise + Ollama
  • 22 – Advanced AI Agents & Tools
    • Fast inference with Groq API
    • DeepSeek R1 model: Overview and hands-on
    • Use DeepSeek to create a long-form response agent
  • 23 – Build Your Own AI Copilot (Python Project)
    • Install Git, VS Code, and dependencies
    • Clone GitHub repo and understand directory structure
    • Walk through Python code for Vision + Language Copilot
    • Hands-on: Build a desktop recording Copilot app
  • 24 – Security, Privacy & Legal Concerns
    • Jailbreaking and prompt injection
    • Data poisoning and backdoor risks
    • Copyright/IP issues in AI-generated content
    • Responsible deployment practices and safety nets
  • 25 – Final Recap + Capstone Support
    • Final wrap-up and best practices
    • Q&A: Capstone troubleshooting
    • Demo polishing and deployment readiness
    • Showcase preparation and certification support

Exercise 1: Set Up and Run a Local LLaMA 3.1 Agent

Objective:

Install and configure a local instance of the LLaMA 3.1 model and build a simple chatbot using it.

Concepts Used:

Open-source LLMs, local model deployment, inference APIs, resource management.

Steps:

  • Download and install Ollama or relevant LLaMA runtime
  • Load LLaMA 3.1 model locally
  • Build a simple chat interface to interact with the model
  • Test with example queries for knowledge retrieval or casual chat
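
A minimal local chat loop against Ollama's REST API on its default port, assuming Ollama is installed and `ollama pull llama3.1` has been run; verify the endpoint against your installed version.

```python
import requests

def chat(history):
    # Ollama's chat endpoint runs locally; no data leaves your machine
    response = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3.1", "messages": history, "stream": False},
        timeout=120,
    )
    return response.json()["message"]["content"]

history = []
print("Local LLaMA 3.1 chat (type 'quit' to exit)")
while True:
    user = input("You: ")
    if user.strip().lower() == "quit":
        break
    history.append({"role": "user", "content": user})
    reply = chat(history)
    history.append({"role": "assistant", "content": reply})
    print("LLaMA:", reply)
```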

Exercise 2: Build a Privacy-Focused RAG Chatbot with Flowise + Ollama

Objective:

Combine local LLM with a private knowledge base for document Q&A without sending data to the cloud.

Concepts Used:

Retrieval-Augmented Generation, vector databases, local embeddings, privacy in AI.

Steps:

  • Prepare a private document dataset (PDFs, docs)
  • Create embeddings locally and set up a vector store
  • Connect local LLaMA model with the vector store via Flowise
  • Build chatbot interface that answers queries from local docs only
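
A pure-Python equivalent of this Flowise flow, kept entirely local: sentence-transformers for embeddings, FAISS for the vector store, and the local LLaMA model via Ollama's generate endpoint. The file name, chunking rule, and k are illustrative.

```python
import faiss
import requests
from sentence_transformers import SentenceTransformer

# 1. Embed the private documents locally (no data leaves the machine)
raw = open("private_notes.txt", encoding="utf-8").read()
docs = [c.strip() for c in raw.split("\n\n") if c.strip()]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(docs).astype("float32")
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)

def ask(question, k=3):
    # 2. Retrieve the most relevant local chunks
    q_vec = embedder.encode([question]).astype("float32")
    _, ids = index.search(q_vec, k)
    context = "\n".join(docs[i] for i in ids[0])
    # 3. Generate an answer with the local LLaMA model via Ollama
    prompt = f"Answer only from this context:\n{context}\n\nQuestion: {question}"
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "llama3.1", "prompt": prompt, "stream": False},
                      timeout=120)
    return r.json()["response"]

print(ask("What did we agree in the March meeting?"))
```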

Exercise 3: Ethical AI Scenario Analysis & Mitigation

Objective:

Analyze a given scenario involving AI bias or security threats, then propose ethical mitigation strategies.

Concepts Used:

AI ethics, bias in datasets, prompt injection, data poisoning, IP concerns.

Steps:

  • Review 2–3 real-world scenarios (e.g., a biased hiring AI, a prompt injection attack)
  • Identify the ethical challenges and risks
  • Suggest practical solutions or safeguards (e.g., data auditing, adversarial testing)
  • Present findings and recommendations in a report or presentation format

  • Capstone Project & Mentorship
    • Optional Week for Capstone Mentorship
    • Peer code review & feedback
    • Final polishing of capstone submission
    • Project showcase and evaluation
  • Career Preparation
    • Internship readiness checklist
    • Job assistance: resume, LinkedIn, GitHub prep

Projects

Project 1: AI-Powered Resume Builder

  • Objective: Create a Generative AI-powered resume builder that helps users craft job-specific resumes by extracting skills from job descriptions and suggesting bullet points based on user inputs.
  • Key Skills: Prompt engineering, GPT-based text generation, information extraction using NER, LangChain tools, Streamlit UI design.
  • Use Case: Enables job seekers to create professional, tailored resumes that align with specific job roles and industry standards, increasing their chances of landing interviews.
  • Extract required skills from job descriptions using NER or keyword extraction
  • Generate bullet points for experience sections with GPT prompts
  • Implement prompt templates for resume formatting
  • Build a Streamlit app to input profile data and output downloadable resumes
  • Integrate export-to-PDF functionality

Project 2: Brand-Aligned Content Generator

  • Objective: Design a Generative AI tool that generates brand-aligned social media posts, hashtags, captions, and email newsletters based on brief inputs and the target audience.
  • Key Skills: LLM prompt engineering, tone adaptation, image generation with DALL·E or SDXL, content summarization, persona design.
  • Use Case: Supports startups and marketers by generating consistent, high-quality content that matches the brand voice and saves time on daily content creation.
  • Create prompt chains to generate posts in different tones (professional, witty, informative)
  • Use OpenAI function calling to generate images for posts (e.g., DALL·E)
  • Fine-tune output styles for different platforms (LinkedIn, Instagram, Twitter)
  • Create a Streamlit/Gradio UI for content generation
  • Build a post-scheduler stub using date/time tools

Project 3: Legal Contract Summarizer & Q&A Agent

  • Objective: Develop an AI agent that summarizes lengthy legal contracts, highlights key clauses, and answers user queries on terms and implications.
  • Key Skills: Retrieval-Augmented Generation (RAG), LangChain document loaders, vector store (ChromaDB), OpenAI/GPT API integration, clause segmentation with custom prompts.
  • Use Case: Helps individuals, startups, and legal teams understand complex legal documents without the need for full legal training or manual review.
  • Load and chunk legal PDFs using LangChain loaders
  • Extract clauses such as payment terms, liabilities, and confidentiality using tailored prompts
  • Summarize agreements in plain English
  • Build a Q&A interface using vector search
  • Deploy as a chatbot using Gradio or Streamlit

Project 4: AI Healthcare Assistant

  • Objective: Build an LLM-based healthcare assistant that takes patient symptoms as input and provides possible conditions, an urgency level, and basic advice using publicly available medical knowledge.
  • Key Skills: LLM function calling, knowledge retrieval (RAG), prompt optimization, safety guardrails, tool integration (web search, calendar), multilingual response generation.
  • Use Case: Provides first-level support in remote areas or for patients seeking preliminary advice before visiting a doctor, while clearly disclaiming that it is not a substitute for medical advice.
  • Implement symptom-to-condition mapping using prompt templates
  • Add structured symptom intake forms and optional RAG-based medical document search
  • Use OpenAI tools or LangChain agents to suggest nearby clinics (stub/mock)
  • Integrate safety and disclaimer functions
  • Build a Streamlit/Gradio interface for interaction

Project 5: Personalized AI Tutor

  • Objective: Build a Generative AI-powered chatbot that acts as a personalized tutor. It answers questions, summarizes lessons, generates quizzes, and assists with homework across multiple subjects using LLMs and retrieval-augmented generation (RAG).
  • Key Skills: Prompt engineering, LLM fine-tuning, embeddings, RAG, LangChain/Flowise, vector databases, chatbot development, function calling, and deployment.
  • Use Case: Empowers students with 24/7 academic support and helps educators create interactive learning experiences. The system mimics human-like tutoring and study planning, enhancing self-paced learning.
  • Build a document Q&A bot using LangChain or Flowise with FAISS/ChromaDB
  • Automatically summarize chapters and generate topic-based quizzes
  • Incorporate tools like a calculator and formula explainer using function calling
  • Create an interactive web-based chatbot using Streamlit or Gradio
  • Containerize and deploy the application to cloud platforms
  • Voice-enabled tutoring (using Whisper or TTS APIs)
  • Multi-subject agent chaining
  • Homework planner with calendar integration

Certification

Career Support

We have a dedicated team that looks after our learners' learning objectives.


FAQ

There are no prerequisites for enrolling in the Master's Course, as everything starts from scratch. Whether you are a working IT professional or a fresher, you will find the course well planned and designed to accommodate trainees from a variety of professional backgrounds.
AI Council offers 24/7 query resolution: you can raise a ticket with the dedicated support team and expect a response within 24 hours. Most queries are resolved over email, but if an issue remains open we can schedule a one-on-one session with the instructor or the dedicated team. You can also contact support after completing the training, and there is no limit on the number of tickets you can raise.
AI Council provides two modes of training: instructor-led training and self-paced learning with pre-recorded videos on demand. We also offer faculty development programs for colleges and schools, as well as corporate training to enhance and update the technical skills of employees in organizations. Our trainers are highly qualified professionals with long experience in the training industry who have delivered sessions for top colleges, schools, and companies.
We provide 24/7 assistance for the convenience of our students. Queries can be raised through the learning interface or by email, and if those channels do not resolve the issue we can arrange a one-on-one session with the trainer. You can raise queries throughout the training period as well as after completion.
AI Council provides the latest, most relevant, real-world projects throughout your training period, giving you industry-level experience in turning what you learn into working solutions. Each training module includes tasks or projects so you can evaluate your progress, and you will work on projects spanning industries such as marketing, e-commerce, automation, and sales.
Yes, we provide job assistance so that learners can apply for jobs directly after completing the training. We have tie-ups with companies and refer our students to them for interviews when openings arise. Our team will also help you build a strong resume and prepare for job interviews.
After successfully completing the training program, submitting the assignments, quizzes, and projects, and securing at least a B grade in the qualifying exam, you will be awarded an AI Council certified certificate. Every certificate carries a unique number that can be verified on our website.
To be completely professional and transparent: no, we do not guarantee a job. Job assistance gives you opportunities to pursue your dream role, but selection depends entirely on your performance in the interview and the requirements of the recruiter.
Most of our programs are offered in both training modes, instructor-led and self-paced, so you can choose whichever suits your work schedule. You will be asked to state your preferred mode when registering. If a course is not offered in both modes, you can check which mode is currently running and register for that one; if you later need a different mode, contact our team.
Yes, you can certainly enrol in multiple courses at the same time. We offer flexible timings, so if you want to learn different topics while managing a busy daily schedule, our course timings and modes make it easy to continue learning.
Whenever you enrol in a course, we send a notification to your registered contact details. You receive a unique registration ID, and after successful enrolment all of your courses are added to your account profile on our website. AI Council provides lifetime access to course content whenever you need it.
A capstone project is the culmination of your learning throughout the program: a final project that demonstrates your knowledge and effort in the field. The topic can be chosen by the mentor or by the student, who then develops a solution.
Yes, to obtain the diploma program certificate you must submit the capstone project.