After completing this course, you will be able to:
- Explain the significance of generative AI in various domains.
- Differentiate between various generative AI architectures and models.
- Describe the use of LLMs for NLP tasks.
- Describe the key features and significance of the libraries and tools used in generative AI for language processing.
- Use Hugging Face libraries in a Jupyter environment to explore generative AI techniques and build a simple chatbot using the Transformers library.
- Explain the tokenization process, tokenization methods, and the use of tokenizers.
- Implement tokenization.
- Explain how data loaders are used for training generative AI models.
- Create an NLP data loader.
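The tokenization and data-loader objectives above can be sketched in plain Python. This is a minimal toy illustration, not the Hugging Face API: the names `SimpleTokenizer` and `batch_loader` are invented for this sketch, and a real pipeline would use a library tokenizer (e.g., from the Transformers library) and a framework data loader.

```python
# Toy word-level tokenizer and batching loop -- illustrative names only,
# not part of any library. Real NLP pipelines use subword tokenizers
# (BPE, WordPiece) and framework data loaders instead.

class SimpleTokenizer:
    """Builds a vocabulary from text and maps words to integer IDs."""

    def __init__(self):
        self.vocab = {"<unk>": 0, "<pad>": 1}  # reserved special tokens

    def fit(self, texts):
        # Assign an ID to every unique lowercase word in the corpus.
        for text in texts:
            for word in text.lower().split():
                if word not in self.vocab:
                    self.vocab[word] = len(self.vocab)

    def encode(self, text):
        # Words outside the vocabulary map to the <unk> ID (0).
        return [self.vocab.get(w, 0) for w in text.lower().split()]


def batch_loader(encoded_texts, batch_size):
    """Yield batches, padding shorter sequences with the <pad> ID (1)."""
    for i in range(0, len(encoded_texts), batch_size):
        batch = encoded_texts[i:i + batch_size]
        max_len = max(len(seq) for seq in batch)
        yield [seq + [1] * (max_len - len(seq)) for seq in batch]


corpus = ["Generative AI creates text", "AI creates images"]
tok = SimpleTokenizer()
tok.fit(corpus)
encoded = [tok.encode(t) for t in corpus]
batches = list(batch_loader(encoded, batch_size=2))
```

The padding step is what lets sequences of different lengths share one batch, which is the core job of an NLP data loader.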
M1: Gen AI architecture
1.1. Significance and evolution of generative AI
- Gen AI refers to DL models that can generate various types of content such as text, images, audio, 3D objects, and music.
- These models generate contextually relevant text. An example of a text generation model is GPT.
- These models can generate images from text prompts and seed images. Examples of image generation approaches are DALL-E and GANs.
- You can use these models to generate natural-sounding speech through text-to-speech synthesis. An example of this type of model is WaveNet.
- Generative AI has specific applications in industries such as healthcare, finance, gaming, and IT.
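To make "generating contextually relevant text" concrete, here is a toy bigram (Markov-chain) generator: it predicts each next word from the previous one, a drastic simplification of what transformer models like GPT do. All function names here are invented for this sketch; real LLMs learn next-token probabilities with neural networks, not word-pair counts.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Sample a continuation one word at a time from the bigram counts."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # no known continuation: stop early
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the model generates text the model generates images"
model = train_bigrams(corpus)
result = generate(model, "the")
```

Each generated word depends on the word before it, which is the simplest possible sense of "context"; LLMs extend this idea to condition on the entire preceding sequence.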
1.2. Generative AI architectures and models
