Module 1: RAG Framework
In this module, you will learn how RAG is used to generate responses for different applications such as chatbots. You’ll then learn about the RAG process, the Dense Passage Retrieval (DPR) context encoder and question encoder with their tokenizers, and the Faiss library developed by Facebook AI Research for searching high-dimensional vectors.
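The search that Faiss accelerates — finding the stored vectors closest to a query vector — can be sketched in plain numpy. The toy embeddings below are illustrative stand-ins for what the DPR context and question encoders would produce; Faiss's IndexFlatL2 performs the same brute-force L2 search, just far faster at scale:

```python
import numpy as np

# Toy passage embeddings (hypothetical values, dimension 4).
# In a real RAG system these come from the DPR context encoder.
passage_vecs = np.array([
    [0.1, 0.9, 0.0, 0.0],
    [0.8, 0.1, 0.1, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.6, 0.3, 0.1, 0.0],
    [0.0, 0.1, 0.0, 0.9],
], dtype="float32")

# Query embedding, as the DPR question encoder would produce it.
query_vec = np.array([0.75, 0.15, 0.1, 0.0], dtype="float32")

# Brute-force L2 nearest-neighbor search -- the operation Faiss's
# IndexFlatL2 implements exactly (and its other indexes approximate).
dists = np.linalg.norm(passage_vecs - query_vec, axis=1)
top_k = np.argsort(dists)[:2]  # indices of the 2 closest passages
print(top_k.tolist())
```

With real data the stored set would hold millions of high-dimensional vectors, which is where Faiss's optimized and approximate indexes matter.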
In hands-on labs, you will use RAG with PyTorch to evaluate content appropriateness and with Hugging Face to retrieve information from a dataset.
Module 2: Prompt Engineering and LangChain
In this module, you will learn about in-context learning and advanced prompt engineering methods for designing and refining prompts that elicit relevant and accurate responses from AI. You’ll then be introduced to the LangChain framework, an open-source interface that simplifies application development with LLMs. You’ll learn about its tools, components, and chat models. The module also covers concepts such as prompt templates, example selectors, and output parsers. You’ll then explore the LangChain document loader and retriever, as well as LangChain chains and agents for building applications.
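A prompt template is essentially a parameterized string. The pure-Python sketch below illustrates the idea; LangChain's PromptTemplate adds variable validation, composition, and integration with chains, and the template text and variable names here are illustrative:

```python
# Minimal sketch of the prompt-template idea: a string with named
# placeholders plus a formatting step. Template text is illustrative.
template = (
    "You are a helpful assistant.\n"
    "Answer the question using only the context below.\n\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer:"
)

def format_prompt(template: str, **variables: str) -> str:
    """Fill the template's placeholders with the given variables."""
    return template.format(**variables)

prompt = format_prompt(
    template,
    context="LangChain is an open-source framework for building LLM apps.",
    question="What is LangChain?",
)
print(prompt)
```

Keeping the template separate from the variables is what lets example selectors swap few-shot examples in and out, and lets output parsers pair a format instruction with the template.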
In hands-on labs, you will enhance LLM applications and develop an agent that integrates LLM, LangChain, and RAG technologies for interactive and efficient document retrieval.
The lecture introduces Retrieval-Augmented Generation (RAG), an AI framework designed to enhance the output of large language models (LLMs) by integrating external knowledge sources. This process allows chatbots to provide accurate and contextually relevant answers, especially for domain-specific inquiries.
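That retrieve-then-generate loop can be sketched end to end. Everything below is illustrative: the toy documents, the keyword-overlap "retriever" standing in for DPR embeddings plus Faiss search, and the stub generator where a real LLM call would go:

```python
# Toy RAG loop: retrieve relevant text, then build an augmented prompt.
docs = [
    "The return policy allows refunds within 30 days of purchase.",
    "Shipping is free on orders over 50 dollars.",
    "Support is available by chat from 9am to 5pm.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question
    (a crude stand-in for DPR embeddings + Faiss vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Stub for the generation step; a real system sends the prompt to an LLM."""
    return f"[LLM response grounded in a prompt of {len(prompt)} characters]"

question = "What is the return policy for refunds?"
context = "\n".join(retrieve(question))
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
answer = generate(prompt)
print(context)
```

Because the retrieved context is injected into the prompt at query time, the model can ground its answer in domain-specific documents it was never trained on.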
In the lecture on RAG Encoders and Faiss, you learned about the DPR context encoder and question encoder with their tokenizers, and the Faiss library for searching high-dimensional vectors.