This chapter covers a recent breakthrough in A.I. interpretability for large language models, focusing on Anthropic's work mapping the internal features of its model Claude 3. The speakers discuss the technical details of the interpretability research, its implications for A.I. safety, and the sense in which these models have been black boxes, processing data in ways even their creators don't fully understand.
This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.
Guests:
- Josh Batson, research scientist at Anthropic
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.