EP29: Meta's Code Llama, Unnatural Instruction, Phishing Our Mother & OpenAI's GPT3.5 Fine Tuning

This Day in AI Podcast

Training GPT Models and Alignment

This chapter explains how GPT models such as GPT-3.5 and GPT-4 are trained, covering the transformer architecture, unsupervised pre-training, and the alignment phase. It discusses how alignment eliminates the need for proprietary data sets. The chapter also explores the potential of AI-generated unit tests for code and the idea of retraining neural nets for exponential improvement.
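
As a rough illustration of the two phases described above, the sketch below trains a tiny causal language model in PyTorch: first unsupervised next-token prediction on raw token streams, then a supervised fine-tuning pass on instruction-style sequences standing in for the alignment phase. The model, sizes, and random toy data are illustrative assumptions, not anything from the episode.

```python
# Minimal sketch of GPT-style training: unsupervised pre-training followed by an
# alignment (supervised fine-tuning) phase. Everything here is toy-scale and illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, DIM, CONTEXT = 256, 64, 32  # toy sizes, nowhere near GPT scale


class TinyLM(nn.Module):
    """A very small causal language model standing in for a GPT-style transformer.
    Positional encodings are omitted for brevity."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB_SIZE)

    def forward(self, tokens):
        seq_len = tokens.size(1)
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        x = self.blocks(self.embed(tokens), mask=mask)
        return self.head(x)


def next_token_loss(model, tokens):
    """Language-modeling objective: predict token t+1 from tokens up to t."""
    logits = model(tokens[:, :-1])
    targets = tokens[:, 1:]
    return F.cross_entropy(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))


model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Phase 1: unsupervised pre-training on raw token streams (random toy data here).
for _ in range(10):
    batch = torch.randint(0, VOCAB_SIZE, (8, CONTEXT))
    loss = next_token_loss(model, batch)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: "alignment" as supervised fine-tuning on instruction/response pairs,
# again represented by toy token sequences; the objective is unchanged, only the data differs.
for _ in range(10):
    instruction_pairs = torch.randint(0, VOCAB_SIZE, (8, CONTEXT))
    loss = next_token_loss(model, instruction_pairs)
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice the alignment phase uses curated instruction/response data (including model-generated instructions of the "unnatural instruction" kind discussed in the episode) and often reinforcement learning from human feedback, but the underlying next-token objective is the same as in pre-training.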
