
Future of Life Institute Podcast

Latest episodes

Dec 5, 2024 • 3h 20min

Nathan Labenz on the State of AI and Progress since GPT-4

Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. You can find Nathan's podcast here: https://www.cognitiverevolution.ai

Timestamps:
00:00 AI progress since GPT-4
10:50 Multimodality
19:06 Low-cost models
27:58 Coding versus medicine/law
36:09 AI agents
45:29 How much are people using AI?
53:39 Open source
01:15:22 AI industry analysis
01:29:27 Are some AI models kept internal?
01:41:00 Money is not the limiting factor in AI
01:59:43 AI and biology
02:08:42 Robotics and self-driving
02:24:14 Inference-time compute
02:31:56 AI governance
02:36:29 Big-picture overview of AI progress and safety
Nov 22, 2024 • 1h 59min

Connor Leahy on Why Humanity Risks Extinction from AGI

Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this. Here's the document we discuss in the episode: https://www.thecompendium.ai

Timestamps:
00:00 The Compendium
15:25 The motivations of AGI corps
31:17 AI is grown, not written
52:59 A science of intelligence
01:07:50 Jobs, work, and AGI
01:23:19 Superintelligence
01:37:42 Open-source AI
01:45:07 What can we do?
Nov 8, 2024 • 1h 3min

Suzy Shepherd on Imagining Superintelligence and "Writing Doom"

Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world. Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4

Timestamps:
00:00 Writing Doom
08:23 Humor in Writing Doom
13:31 Concise writing
18:37 Getting feedback
27:02 Alternative characters
36:31 Popular video formats
46:53 AI in filmmaking
49:52 Meaning in the future
Oct 25, 2024 • 1h 28min

Andrea Miotti on a Narrow Path to Safe, Transformative AI

Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like. Here's the document we discuss in the episode: https://www.narrowpath.co

Timestamps:
00:00 A Narrow Path
06:10 Can we predict future AI capabilities?
11:10 Risks from current AI development
17:56 The benefits of narrow AI
22:30 Against self-improving AI
28:00 Cybersecurity at AI companies
33:55 Unbounded AI
39:31 Global coordination on AI safety
49:43 Monitoring training runs
01:00:20 Benefits of cooperation
01:04:58 A science of intelligence
01:25:36 How you can help
Oct 11, 2024 • 1h 30min

Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents

Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode: https://epochai.org/blog/can-ai-scaling-continue-through-2030

Timestamps:
00:00 How important is scaling?
08:03 How capable will AIs be in 2030?
18:33 AI agents, reasoning, and planning
23:39 Automating coding and mathematics
31:26 Uncertainty about investing in AI
40:34 Gap between investment and returns
45:30 Compute, software and data
51:54 Inference-time compute
01:08:49 Returns to software R&D
01:19:22 Limits to expanding compute
Sep 27, 2024 • 2h 9min

Ryan Greenblatt on AI Control, Timelines, and Slowing Down Around Human-Level AI

Ryan Greenblatt, a researcher focused on AI control and safety, dives deep into the complexities of AI alignment. He discusses the critical challenges of ensuring that powerful AI systems align with human values, stressing the need for robust safeguards against potential misalignments. Greenblatt explores the implications of AI's rapid advancements, including the risks of deception and manipulation. He emphasizes the importance of transparency in AI development while contemplating the timeline and takeoff speeds toward achieving human-level AI.
Sep 12, 2024 • 1h 20min

Tom Barnes on How to Build a Resilient World

Tom Barnes, an expert on AI capabilities and safety, shares insights on the critical imbalance in funding between AI safety and capabilities. He discusses the importance of robust safety protocols amidst rapid advancements. Barnes also explores global coordination challenges, particularly between the US and China, in navigating AI governance. He emphasizes the value of preparedness through war gaming, highlights the psychological defenses needed against AI manipulation, and advocates for patient philanthropy to foster a resilient world against AI risks.
Aug 22, 2024 • 2h 16min

Samuel Hammond on Why AI Progress is Accelerating - and How Governments Should Respond

Samuel Hammond, a leading expert on AI implications, dives into the rapid acceleration of AI advancements. He discusses the balancing act of regulation amidst national security concerns surrounding AGI. Hammond also explores the ideological pursuit of superintelligence and compares AI's growth with historical economic transformations. He emphasizes the need for ethical frameworks in tech governance and the potential for AI to redefine human cognition and relationships. Join this enlightening conversation about the future of intelligence!
Aug 9, 2024 • 1h 3min

Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal

Anousheh Ansari, a pioneer in promoting innovation through competitions, discusses how innovation prizes can drive advancements in space, AI, quantum computing, and carbon removal. She explains the effectiveness of these prizes in attracting private investment for sustainable technologies and the intricacies of designing impactful competitions. Anousheh highlights the transformative potential of quantum computing in solving complex problems and shares her insights on the future of carbon removal strategies. Her passion for problem-solving shines through as she reflects on her journey from space explorer to innovation advocate.
Jul 25, 2024 • 30min

Mary Robinson (Former President of Ireland) on Long-View Leadership

Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org

Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change
