Alec Radford: I remember being impressed that you could take a model and get it to do two or three of these very simple, very stilted language tasks at once. But we made this realization near the end of 2018 that, hey, you just keep scaling these things. They just kind of can absorb the capacity. And then they can do all kinds of things just as a side effect of you feeding this data into them to get them to predict the next word. It's completely crazy if you look at it, which is why even though the extrapolations were clear as day, it seemed crazy to me.
Dario Amodei has been anxious about A.I. since before it was cool to be anxious about A.I. After a few years working at OpenAI, he decided to do something about that anxiety. The result was Claude: an A.I.-powered chatbot built by Anthropic, the start-up Mr. Amodei leads.
Today, Mr. Amodei joins Kevin and Casey to talk about A.I. anxiety and why it’s so difficult to build A.I. safely.
Plus, we watched Netflix’s “Deep Fake Love.”
Today’s Guest:
- Dario Amodei is the chief executive of Anthropic, a safety-focused A.I. start-up
Additional Reading:
- Kevin spent several weeks at Anthropic’s San Francisco headquarters. Read about his experience here.
- Claude is Anthropic’s safety-focused chatbot.