
Building LLM Apps & the Challenges that come with it. The What's AI Podcast Episode 16: Jay Alammar
What's AI Podcast by Louis-François Bouchard
How to Explain Text Generation Models to Different Audiences
The question is how I've been explaining transformers to different audiences over the last five years. I start with how the model runs at inference: how does it generate one word at a time? We give it the input, and that is how these text generation models work once you give them input. That doesn't answer everything. How are they able to do this? What happens under the hood that makes them do that? But in the beginning I like to give people a sense of: okay, when you're dealing with it at inference time, this is what it's doing. So I will have you choose your destiny and steer which way you'd like us to go next.
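The one-word-at-a-time inference loop described above can be sketched in a few lines. This is a toy illustration, not Jay's actual explanation or a real model: `toy_next_token` is a hypothetical stand-in for a language model that, at each step, predicts the next token from everything generated so far.

```python
# Toy sketch of autoregressive text generation: the model repeatedly
# predicts the next token given the tokens produced so far, appending
# one token per step until it emits an end marker.
# `toy_next_token` is a hypothetical stand-in for a real language model.

def toy_next_token(tokens):
    # A real model would score the whole vocabulary; here we follow a
    # hard-coded continuation table purely for illustration.
    table = {
        ("the",): "cat",
        ("the", "cat"): "sat",
        ("the", "cat", "sat"): "<eos>",
    }
    return table.get(tuple(tokens), "<eos>")

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)  # one token per step
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat']
```

The key point the sketch captures is that the output so far is fed back in as input at every step; that feedback loop is what "generating one word at a time" means at inference time.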