
213 – Are Transformer Models Aligned By Default?

The Bayesian Conspiracy


The Power of Transformers in Understanding Deception

This chapter explores how transformer models process text and images to grasp complex ideas like deception, highlighting their ability to extract abstract concepts despite lacking physical senses. The conversation also covers early image recognition technology, abstract concept recognition, and personal reactions to recent AI advancements, reflecting both optimism and concern in the field.
