Anthropic has created a body called the Long-Term Benefit Trust. The idea is to create some kind of neutrality, some kind of separation, that allows our decisions to be checked by people who don’t have conflicts of interest. It’s very nerve-racking. I think almost every major decision we’ve made, I’ve second-guessed, both on the grounds that it’s too commercial and that it’s too impractically focused on safety. But people doing work this serious can’t sit around every day and contemplate how weighty it is.
Dario Amodei has been anxious about A.I. since before it was cool to be anxious about A.I. After a few years working at OpenAI, he decided to do something about that anxiety. The result was Claude: an A.I.-powered chatbot built by Anthropic, Mr. Amodei’s A.I. start-up.
Today, Mr. Amodei joins Kevin and Casey to talk about A.I. anxiety and why it’s so difficult to build A.I. safely.
Plus, we watched Netflix’s “Deep Fake Love.”
Today’s Guest:
- Dario Amodei is the chief executive of Anthropic, a safety-focused A.I. start-up.
Additional Reading:
- Kevin spent several weeks at Anthropic’s San Francisco headquarters. Read about his experience here.
- Claude is Anthropic’s safety-focused chatbot.