

Dario Amodei
CEO of Anthropic, an AI safety and research company. He is focused on steering AI toward positive applications and warning about potential downsides.
Top 10 podcasts with Dario Amodei
Ranked by the Snipd community

6,613 snips
Nov 11, 2024 • 5h 22min
#452 – Dario Amodei: Anthropic CEO on Claude, AGI & the Future of AI & Humanity
Dario Amodei, CEO of Anthropic, discusses the groundbreaking AI model Claude, alongside Amanda Askell and Chris Olah, both researchers at Anthropic. They dive into the ethical dimensions of AI, emphasizing responsibility in innovation and safety. The conversation also explores the intricacies of building AI personalities, the challenges of mechanistic interpretability, and the future of integrating AI into society. They discuss the delicate balance between AI capabilities and human values, positioning AI as a partner rather than a competitor.

433 snips
Jul 30, 2025 • 1h 9min
Anthropic CEO Dario Amodei: AI's Potential, OpenAI Rivalry, GenAI Business, Doomerism
Dario Amodei, CEO of Anthropic, shares his insights on the urgent need for responsible AI development. He discusses his rivalry with OpenAI and critiques the hype around concepts like AGI. The conversation touches on the challenges of scaling AI while ensuring safety and balancing business needs. Amodei reflects on his personal journey, emphasizing the importance of impact and responsible innovation in shaping the future of technology. His candid thoughts on the risks of losing control over AI add depth to the dialogue, making it a must-listen for tech enthusiasts.

425 snips
Aug 6, 2025 • 1h 3min
Anthropic CEO Dario Amodei on designing AGI-pilled products, model economics, and 19th-century vitalism
Dario Amodei, CEO of Anthropic, who previously led research at OpenAI, shares insights into his company's rapid climb to $4 billion in annual recurring revenue. He discusses the economics of AI models and their potential as standalone profit centers. The conversation touches on the interplay between AI advancements and safety regulations while exploring the historical concept of vitalism and its implications for how we think about AI's future. Amodei also reveals strategies for building AGI-focused products and navigating the evolving landscape of AI technology.

255 snips
Aug 8, 2023 • 1h 59min
Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress
Dario Amodei, CEO of Anthropic, shares his insights on the mind-bending complexities of AI scaling and the emergence of intelligence. He humorously discusses AI's evolving capabilities, juxtaposed with human intelligence, and delves into the critical relationship between AI and cybersecurity. Dario also highlights the potential risks associated with AI misuse, particularly in bioterrorism, and emphasizes the importance of responsible governance in navigating these challenges. The conversation concludes with thought-provoking questions about AI consciousness.

141 snips
Jun 26, 2024 • 1h 11min
Dario Amodei CEO of Anthropic: Claude, New models, AI safety and Economic impact
Dario Amodei, CEO and co-founder of Anthropic, is a prominent voice in AI safety and ethics, celebrated for his work on the Claude language model. He discusses the future of AI, including exciting advancements and the economic impacts of these technologies. Dario emphasizes the importance of responsible scaling and ethical design in AI models. The conversation also touches on AI's influence on democracy, the challenges of regulation, and the need for collaboration and innovative hiring practices to build a responsible AI landscape.

131 snips
Feb 5, 2025 • 44min
Anthropic's Dario Amodei on AI Competition
Dario Amodei, CEO of Anthropic and an expert in AI safety, dives deep into the AI innovation race between the US and China. He discusses the implications of China's DeepSeek and the need for updated export controls to manage AI risks. Dario highlights potential threats such as AI-enabled espionage and bioweapon development. He raises compelling questions about how AI could influence democracy, balancing innovation with ethical responsibilities. The conversation underscores the complexities of global AI governance and the imperative of distributing AI's benefits equitably.

83 snips
Aug 29, 2024 • 1h 3min
Anthropic CEO Dario Amodei on AI's Moat, Risk, and SB 1047
Dario Amodei, CEO and co-founder of Anthropic, discusses the economics of AI and its implications for global power dynamics. He highlights how AI companies like Anthropic hold a comparative advantage in development. The conversation dives into the geopolitical risks of AI, particularly the competition between the U.S. and China. Amodei also shares insights on California's SB 1047 bill and its potential effects on the industry. The talk emphasizes the double-edged nature of AI, presenting both transformative opportunities and significant challenges for labor and inequality.

46 snips
Jan 14, 2025 • 28min
Tech in 2025: Hi, I’m your AI-powered assistant
In this engaging discussion, Madhumita Murgia, FT's AI editor, and Dario Amodei, CEO of Anthropic, dive into the future of AI by 2025. They explore how AI agents might soon autonomously handle tasks like email replies and grocery shopping. The conversation emphasizes the importance of AI safety and building consumer trust, as well as the rapid advancements in generative AI and the need for innovative solutions. They also touch on how AI could tackle significant challenges in healthcare and its implications for everyday life.

33 snips
Apr 25, 2025 • 31min
The Urgency of Interpretability - By Dario Amodei
Dario Amodei, CEO of Anthropic and an expert in AI safety, delves into the urgency of AI interpretability. He emphasizes the need to understand opaque AI systems to foster positive growth. The conversation tackles the complexity of AI behaviors and the ethical concerns tied to AI sentience. Dario advocates for bridging theory with practical tools to enhance AI reliability. He also discusses resistance within academia and the role of government in promoting interpretability, stressing that transparency is crucial to mitigate emerging AI risks.

29 snips
Mar 4, 2022 • 2h 1min
Daniela and Dario Amodei on Anthropic
Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Topics discussed in this episode include:
-Anthropic's mission and research strategy
-Recent research and papers by Anthropic
-Anthropic's structure as a "public benefit corporation"
-Career opportunities
You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/
Watch the video version of this episode here: https://www.youtube.com/watch?v=uAA6PZkek4A
Careers at Anthropic: https://www.anthropic.com/#careers
Anthropic's Transformer Circuits research: https://transformer-circuits.pub/
Follow Anthropic on Twitter: https://twitter.com/AnthropicAI
microCOVID Project: https://www.microcovid.org/
Follow Lucas on Twitter: https://twitter.com/lucasfmperry
Have any feedback about the podcast? You can share your thoughts here:
www.surveymonkey.com/r/DRBFZCT
Timestamps:
0:00 Intro
2:44 What was the intention behind forming Anthropic?
6:28 Do the founders of Anthropic share a similar view on AI?
7:55 What is Anthropic's focused research bet?
11:10 Does AI existential safety fit into Anthropic's work and thinking?
14:14 Examples of AI models today that have properties relevant to future AI existential safety
16:12 Why work on large scale models?
20:02 What does it mean for a model to lie?
22:44 Safety concerns around the open-endedness of large models
29:01 How does safety work fit into race dynamics to more and more powerful AI?
36:16 Anthropic's mission and how it fits into AI alignment
38:40 Why explore large models for AI safety and scaling to more intelligent systems?
43:24 Is Anthropic's research strategy a form of prosaic alignment?
46:22 Anthropic's recent research and papers
49:52 How difficult is it to interpret current AI models?
52:40 Anthropic's research on alignment and societal impact
55:35 Why did you decide to release tools and videos alongside your interpretability research?
1:01:04 What is it like working with your sibling?
1:05:33 Inspiration around creating Anthropic
1:12:40 Is there an upward bound on capability gains from scaling current models?
1:18:00 Why is it unlikely that continuously increasing the number of parameters on models will lead to AGI?
1:21:10 Bootstrapping models
1:22:26 How does Anthropic see itself as positioned in the AI safety space?
1:25:35 What does being a public benefit corporation mean for Anthropic?
1:30:55 Anthropic's perspective on windfall profits from powerful AI systems
1:34:07 Issues with current AI systems and their relationship with long-term safety concerns
1:39:30 Anthropic's plan to communicate its work to technical researchers and policymakers
1:41:28 AI evaluations and monitoring
1:42:50 AI governance
1:45:12 Careers at Anthropic
1:48:30 What it's like working at Anthropic
1:52:48 Why hire people of a wide variety of technical backgrounds?
1:54:33 What's a future you're excited about or hopeful for?
1:59:42 Where to find and follow Anthropic
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.