From the course: Fundamentals of AI Engineering: Principles and Practical Applications
Putting the LLM pipeline together
- [Instructor] Welcome, everyone. Today we're going to demystify how large language models actually work under the hood. To get started, open chapter_2 and open the notebook titled 02_04.ipynb. As always, in the upper right-hand corner of your notebook, make sure you've selected the .venv virtual environment. Today we'll break down the entire process of text generation step by step, from the moment you provide an input prompt until you get the final output. This understanding is crucial whether you're building AI applications or simply trying to get better results from your prompts. Let's dive into this fascinating process. We'll also be taking a practical approach: instead of just talking about theory, we'll see each step in action with code examples and visualizations. We'll start with a simple prompt and watch as our model generates a response one token at a time. The process we'll follow mirrors exactly what happens inside tools like ChatGPT, or any other text…
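To make that token-by-token loop concrete before opening the notebook, here is a minimal sketch of the pipeline described above: prompt in, token IDs, model scores, next token, repeat. This is an illustrative example only, assuming the Hugging Face transformers library and the small gpt2 checkpoint; the course notebook (02_04.ipynb) may use a different model or its own helper functions.

# Minimal sketch of token-by-token text generation (illustrative; not the course notebook's code).
# Assumes: pip install torch transformers, and the "gpt2" checkpoint as a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The key steps in an LLM pipeline are"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids  # step 1: text -> token IDs

# Generate one token at a time so each stage of the pipeline is visible.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits                      # step 2: scores for every vocabulary token
    next_id = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)  # step 3: pick the most likely next token (greedy)
    input_ids = torch.cat([input_ids, next_id], dim=-1)       # step 4: append it and feed everything back in
    print(tokenizer.decode(next_id[0]), end="", flush=True)   # watch the response appear token by token

print()
print(tokenizer.decode(input_ids[0]))                         # full prompt plus generated continuation

Tools like ChatGPT follow the same loop at much larger scale, typically sampling from the score distribution rather than always taking the single most likely token as this greedy sketch does.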