
213 – Are Transformer Models Aligned By Default?
The Bayesian Conspiracy
Exploring a Language Model's Self-Representation
A discussion of how a language model portrays itself, including its humor and apparent self-awareness, and how it behaves when generating simple responses such as naming a capital city. The episode explores recency effects, randomness in training data, and a seemingly unconscious bias toward lesser-known cities, along with the anthropomorphization of AI and the difficulty of interpreting the math inside neural networks.