During this course, you’ll explore transformers, model frameworks, and platforms such as Hugging Face and PyTorch. You’ll begin with a general framework for optimizing LLMs and quickly move on to fine-tuning generative AI models. Plus, you’ll learn about parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized low-rank adaptation (QLoRA), and prompting.
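As a taste of the parameter-efficient techniques mentioned above, the sketch below shows how LoRA adapters can be attached to a pre-trained model with the Hugging Face PEFT library. The base checkpoint (gpt2), the target modules, and the LoRA hyperparameters are illustrative assumptions, not values prescribed by the course.

```python
# Minimal sketch: adding LoRA adapters to a pre-trained model with the PEFT library.
# The checkpoint and hyperparameters are assumed examples, not course specifics.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed example checkpoint

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # causal language modeling objective
    r=8,                           # rank of the low-rank update matrices
    lora_alpha=16,                 # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["c_attn"],     # GPT-2 attention projection; varies per architecture
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the small adapter weights are trainable
```

Because only the adapter matrices are trained while the base weights stay frozen, the number of trainable parameters drops to a small fraction of the full model, which is the core idea behind PEFT methods such as LoRA and QLoRA.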


Outline

Module 1: Transformers and Fine-Tuning

In this module, you will be introduced to fine-tuning. You'll get an overview of generative models and compare the Hugging Face and PyTorch frameworks. You'll also gain insight into model quantization and learn to load pre-trained transformers and fine-tune them using Hugging Face and PyTorch.
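To make that workflow concrete, here is a minimal sketch of the kind of fine-tuning loop this module covers: loading a pre-trained checkpoint with Hugging Face Transformers and training it with the Trainer API on top of PyTorch. The checkpoint (bert-base-uncased), the imdb dataset, and all hyperparameters are assumed examples, not the course's actual lab setup.

```python
# Minimal sketch: fine-tuning a pre-trained transformer with Hugging Face + PyTorch.
# Model name, dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-uncased"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # assumed example dataset

def tokenize(batch):
    # Convert raw text into fixed-length token IDs the model can consume
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-bert",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),  # small subset for speed
    eval_dataset=tokenized["test"].select(range(500)),
)

trainer.train()
```

The same model could instead be trained with a hand-written PyTorch loop (optimizer, DataLoader, backward pass); the Trainer API simply packages those steps, which is the kind of trade-off between the two frameworks this module compares.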

Learning Objectives

1. Hugging Face vs. PyTorch

Introduction to PyTorch

Introduction to Hugging Face
