AI systems, particularly those in social networks, use complex ranking systems to decide what content to show, but these systems are often not interpretable. That lack of interpretability creates challenges for social networks, such as difficulty explaining to users why certain content appears in their feeds. Achieving interpretability could address both long-term safety risks and immediate practical concerns. Yet despite the potential commercial upside, understanding AI language models remains difficult because of the complexity of neural networks and the scale of the data they are trained on.
Dario Amodei has been anxious about A.I. since before it was cool to be anxious about A.I. After a few years working at OpenAI, he decided to do something about that anxiety. The result was Claude: an A.I.-powered chatbot built by Anthropic, the A.I. start-up Mr. Amodei co-founded and leads.
Today, Mr. Amodei joins Kevin and Casey to talk about A.I. anxiety and why it’s so difficult to build A.I. safely.
Plus, we watched Netflix’s “Deep Fake Love.”
Today’s Guest:
- Dario Amodei is the chief executive of Anthropic, a safety-focused A.I. start-up.
Additional Reading:
- Kevin spent several weeks at Anthropic’s San Francisco headquarters. Read about his experience here.
- Claude is Anthropic’s safety-focused chatbot.