
EP29: Meta's Code Llama, Unnatural Instruction, Phishing Our Mother & OpenAI's GPT3.5 Fine Tuning

This Day in AI Podcast


Training AI Models with Human Alignment Examples (08:50)

This chapter discusses the process of training AI models on human alignment examples. The hosts explore the traditional RLHF (reinforcement learning from human feedback) method and introduce a newer technique in which a model generates its own alignment examples. The chapter also addresses the implications of AI-generated training data and the potential for model quality to degenerate over time.
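
The episode itself contains no code, but a minimal sketch can make the idea concrete. The snippet below is an illustration, not the hosts' method: assuming the OpenAI Python SDK with an `OPENAI_API_KEY` in the environment, it bootstraps new instruction/response pairs from a small human-written seed set (in the spirit of the "unnatural instructions" approach) and submits them as a GPT-3.5 fine-tuning job. The seed examples, function names, and file path are all hypothetical.

```python
# Sketch: bootstrap alignment examples from a seed set, then fine-tune.
# Hypothetical illustration of the technique discussed in this chapter.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Small human-written seed set (hypothetical examples).
SEED_EXAMPLES = [
    {"instruction": "Summarize this email in one sentence.",
     "response": "The sender is rescheduling Friday's meeting to Monday."},
]

def generate_examples(n: int) -> list[dict]:
    """Ask the model to invent new instruction/response pairs in the
    style of the seed set ('unnatural instructions'-style bootstrapping)."""
    prompt = (
        "Here are example instruction/response pairs:\n"
        + json.dumps(SEED_EXAMPLES, indent=2)
        + f"\nWrite {n} new, diverse pairs as a JSON list with the same keys."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    # In practice you would validate the output; json.loads will raise
    # if the model returns anything other than well-formed JSON.
    return json.loads(reply.choices[0].message.content)

def write_training_file(examples: list[dict], path: str = "train.jsonl") -> str:
    """Convert the pairs to the chat-format JSONL that fine-tuning expects."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps({"messages": [
                {"role": "user", "content": ex["instruction"]},
                {"role": "assistant", "content": ex["response"]},
            ]}) + "\n")
    return path

# Upload the file and start a GPT-3.5 fine-tuning job.
path = write_training_file(generate_examples(50))
uploaded = client.files.create(file=open(path, "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=uploaded.id, model="gpt-3.5-turbo"
)
print("fine-tune job started:", job.id)
```

A real pipeline would filter and deduplicate the generated pairs before fine-tuning; training repeatedly on a model's own unfiltered outputs is exactly the degeneration risk the chapter raises.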

