This course, offered by AiCouncil in association with the NVIDIA Inception Program, Microsoft Solution Partner, and AWS Partner Network, provides a comprehensive journey into the world of Agentic AI. You'll explore the foundational concepts of AI agents, large language models, and essential technologies like function calling and vector databases. Through hands-on projects, you'll learn to build and deploy intelligent agents using modern frameworks. The curriculum also delves into advanced topics such as monetizing AI agents, creating custom AI assistants akin to Microsoft Copilot, and leveraging open-source solutions. Furthermore, it addresses critical considerations around security, privacy, and ethical practices in AI, preparing you to develop responsible and impactful agentic AI solutions.
Master Generative AI Models – Understand transformers, BERT, and GPT for real-world NLP use cases like sentiment analysis and content generation.
Build Powerful AI Agents – Learn to design, develop, and deploy intelligent AI agents using leading frameworks like Flowise, LangChain, LangGraph, AutoGen, and CrewAI.
Master LLMs & Function Calling – Go deep into Large Language Models (LLMs) like GPT, Gemini, and Llama, and discover how function calling enables agents to interact with external tools and APIs.
Implement Advanced RAG Systems – Understand Vector Databases, Embeddings, and Retrieval-Augmented Generation (RAG) to create AI agents with enhanced knowledge and reduced hallucinations.
Develop Custom AI Solutions – Get hands-on with projects ranging from content generation and lead research to building your own Python-based Vision Copilot and privacy-focused local RAG chatbots.
Deploy & Monetize Your Agents – Learn practical strategies for hosting AI agents on cloud platforms like Render, integrating them into websites, and turning your creations into sellable solutions.
Explore Open-Source AI – Dive into the world of private AI by running open-source LLMs like Llama 3.1 and Mistral locally using Ollama, and integrate them into your agent workflows.
Address AI Security & Ethics – Gain crucial knowledge on jailbreaks, prompt injections, data poisoning, copyrights, and data privacy to build responsible and secure AI agents.
Capstone & Certification – Complete an industry-style project and earn credentials from AiCouncil, partnered with the NVIDIA Inception Program, Microsoft Solution Partner, and AWS Partner Network.
Exercise 1: Exploring LLM Capabilities
Objective:
Interact with various LLMs to understand their strengths and weaknesses in different tasks (e.g., creative writing, factual recall, code generation).
Concepts Used:
LLM prompting, comparative analysis.
Steps:
Exercise 2: Simple Function Calling Simulation
Objective:
Understand the concept of function calling by simulating a basic interaction where an LLM "calls" an external tool.
Concepts Used:
Function calling logic, mock API responses.
Steps:
Exercise 1: Build a Multi-Tool AI Agent with Flowise
Objective:
Create a visual AI agent using Flowise that performs multiple tasks such as content generation, question answering, and creative writing.
Concepts Used:
LangChain components via Flowise, tool integration, agent chaining, memory components, API orchestration.
Steps:
Exercise 2: Function-Calling Agent with Calculator & Python Code Interpreter
Objective:
Build a custom agent that dynamically calls specific functions (e.g., calculator, Python executor) based on user query.
Concepts Used:
Function calling in OpenAI GPT, JSON schema, code execution from Python environment, memory-based routing.
Steps:
Exercise 3: Lead Generation AI Agent with Internet Search (Flowise + LangChain)
Objective:
Create a lead generation agent that performs basic research on a person or business and drafts a custom outreach email.
Concepts Used:
LLMs, memory, function calling, search tools, prompt chaining, text summarisation.
Steps:
Exercise 1: Deploying a Flowise Agent to a Cloud Platform
Objective:
Host a Flowise-built AI agent on a public cloud service (e.g., Render, Heroku) to make it accessible online.
Concepts Used:
Cloud deployment, port forwarding, environmental variables, CI/CD basics.
Steps:
Exercise 2: Integrating an Agent into a Simple Webpage
Objective:
Embed your deployed AI agent's chat interface into a basic HTML webpage.
Concepts Used:
HTML, JavaScript, iFrames or API calls for embedding.
Steps:
Exercise 1: Set Up and Run a Local LLaMA 3.1 Agent
Objective:
Install and configure a local instance of the LLaMA 3.1 model and build a simple chatbot using it.
Concepts Used:
Open-source LLMs, local model deployment, inference APIs, resource management.
Steps:
Exercise 2: Build a Privacy-Focused RAG Chatbot with Flowise + Ollama
Objective:
Combine local LLM with a private knowledge base for document Q&A without sending data to the cloud.
Concepts Used:
Retrieval-Augmented Generation, vector databases, local embeddings, privacy in AI.
Steps:
Exercise 3: Ethical AI Scenario Analysis & Mitigation
Objective:
Analyze a given scenario involving AI bias or security threats, then propose ethical mitigation strategies.
Concepts Used:
AI ethics, bias in datasets, prompt injection, data poisoning, IP concerns.
Steps:
These projects are excellent starting points for understanding core Agentic AI concepts using visual tools like **Flowise**. They focus on direct interaction with **Large Language Models (LLMs)**, chaining simple steps, and generating text-based output within a user-friendly visual interface.
These projects build on beginner skills by introducing more complex logic, external tool integration (even if simulated), and more nuanced outputs. They provide a deeper understanding of **function calling** and multi-step agent reasoning.
These projects require a deeper technical understanding, involve more complex setups, or move into code-centric development and advanced AI architectures. They offer a comprehensive understanding of building, deploying, and optimizing powerful **AI agents**.