213 – Are Transformer Models Aligned By Default?

The Bayesian Conspiracy

Exploring a Language Model's Self-Representation

A discussion of how a language model portrays itself, touching on humor, self-awareness, and its behavior when generating responses such as naming a capital city. The episode explores themes of recency, randomness in training data, and implicit bias toward lesser-known cities, along with the anthropomorphization of AI and the difficulty of understanding the math inside neural networks.
