The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Mental Models for Advanced ChatGPT Prompting with Riley Goodside - #652


NOTE

Understanding text generation and RLHF in AI modeling

In text generation, a language model effectively interpolates over its pre-training dataset, filling the gaps between training examples. Prompting sculpts that interpolation: each constraint shaves off dimensions and narrows the space of possible completions. RLHF can be seen through the Reynolds and McDonald view of a multiverse of fiction, in which the modeled text is a policy rollout. During RLHF tuning, the model generates multiple completions, each is evaluated by the reward model, and the updates are weighted by those evaluations, so the tuned model comes to predict the subset of all text it could generate that the reward model would approve of.
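The reward-weighting idea above can be sketched in miniature. This is a toy illustration, not an actual RLHF implementation: `generate_completions` and `reward` are hypothetical stand-ins for a language model's policy rollouts and a learned reward model, and the softmax weighting only gestures at how higher-reward completions come to dominate the tuned distribution.

```python
import math

# Hypothetical stand-in for sampling n policy rollouts from a language model.
def generate_completions(prompt, n=3):
    return [
        "a helpful, on-topic answer",
        "an off-topic tangent",
        "a terse but correct reply",
    ][:n]

# Hypothetical stand-in for a learned reward model: scores text the way
# human raters (as distilled into the reward model) might approve of it.
def reward(completion):
    score = 0.0
    if "helpful" in completion or "correct" in completion:
        score += 1.0
    if "off-topic" in completion:
        score -= 1.0
    return score

def reward_weighted(completions):
    """Weight each completion by the softmax of its reward score,
    approximating 'predict the subset the reward model approves of'."""
    scores = [reward(c) for c in completions]
    z = sum(math.exp(s) for s in scores)
    return [(c, math.exp(s) / z) for c, s in zip(completions, scores)]

weighted = reward_weighted(generate_completions("Explain RLHF briefly."))
best = max(weighted, key=lambda cw: cw[1])[0]
```

The weights form a distribution over completions; after tuning, probability mass concentrates on the high-reward subset rather than on everything the base model could say.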
