
213 – Are Transformer Models Aligned By Default?
The Bayesian Conspiracy
Introduction
An exploration of whether Transformer models become better aligned as they grow in capability, with improved behavior and reliability illustrated by anecdotes such as Claude's helpful assistance. Despite past misalignments, the discussion acknowledges progress in training and fine-tuning amid ongoing skepticism.