213 – Are Transformer Models Aligned By Default?

The Bayesian Conspiracy

Introduction

This episode explores whether Transformer models trend toward alignment as they grow in capability, pointing to improved behavior and reliability illustrated by anecdotes such as Claude's helpful assistance. While past misalignments are acknowledged, the discussion recognizes progress in training and fine-tuning amid ongoing skepticism.
