
213 – Are Transformer Models Aligned By Default?
The Bayesian Conspiracy
Exploring a Language Model's Self-Representation
A discussion of how a language model portrays itself, including its use of humor and apparent self-awareness, and how it behaves when generating responses such as naming a capital city. The conversation also explores recency effects, randomness in training data, and unconscious bias toward lesser-known cities, along with the anthropomorphization of AI and the difficulty of interpreting the mathematics inside neural networks.