Future of Life Institute Podcast

Future of Life Institute
Jan 19, 2023 • 1h 4min

Connor Leahy on AI Progress, Chimps, Memes, and Markets

Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev

Timestamps:
00:00 Introduction
01:00 Defining artificial general intelligence
04:52 What makes humans more powerful than chimps?
17:23 Would AIs have to be social to be intelligent?
20:29 Importing humanity's memes into AIs
23:07 How do we measure progress in AI?
42:39 Gut feelings about AI progress
47:29 Connor's predictions about AGI
52:44 Is predicting AGI soon betting against the market?
57:43 How accurate are prediction markets about AGI?
Jan 12, 2023 • 37min

Sean Ekins on Regulating AI Drug Discovery

On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery.

Timestamps:
00:00 Introduction
00:31 Ethical guidelines and regulation of AI drug discovery
06:11 How do we balance innovation and safety in AI drug discovery?
13:12 Keeping dangerous chemical data safe
21:16 Sean's personal story of voicing concerns about AI drug discovery
32:06 How Sean will continue working on AI drug discovery
Jan 5, 2023 • 39min

Sean Ekins on the Dangers of AI Drug Discovery

On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about the dangers of AI drug discovery. They talk about how Sean's team generated the nerve agent VX by inverting an AI drug discovery algorithm.

Timestamps:
00:00 Introduction
00:46 Sean's professional journey
03:45 Can computational models replace animal models?
07:24 The risks of AI drug discovery
12:48 Should scientists disclose dangerous discoveries?
19:40 How should scientists handle dual-use technologies?
22:08 Should we open-source potentially dangerous discoveries?
26:20 How do we control autonomous drug creation?
31:36 Surprising chemical discoveries made by black-box AI systems
36:56 How could the dangers of AI drug discovery be mitigated?
Dec 29, 2022 • 50min

Anders Sandberg on the Value of the Future

Anders Sandberg joins the podcast to discuss various philosophical questions about the value of the future. Learn more about Anders' work: https://www.fhi.ox.ac.uk

Timestamps:
00:00 Introduction
00:54 Humanity as an immature teenager
04:24 How should we respond to our values changing over time?
18:53 How quickly should we change our values?
24:58 Are there limits to what future morality could become?
29:45 Could the universe contain infinite value?
36:00 How do we balance weird philosophy with common sense?
41:36 Lightning round: mind uploading, aliens, interstellar travel, cryonics
Dec 22, 2022 • 1h 3min

Anders Sandberg on Grand Futures and the Limits of Physics

Anders Sandberg joins the podcast to discuss how big the future could be and what humanity could achieve at the limits of physics. Learn more about Anders' work: https://www.fhi.ox.ac.uk

Timestamps:
00:00 Introduction
00:58 Does it make sense to write long books now?
06:53 Is it possible to understand all of science now?
10:44 What is exploratory engineering?
15:48 Will humanity develop a completed science?
21:18 How much of possible technology has humanity already invented?
25:22 Which sciences have made the most progress?
29:11 How materially wealthy could humanity become?
39:34 Does a grand future depend on space travel?
49:16 Trade between proponents of different moral theories
53:13 How does physics limit our ethical options?
55:24 How much could our understanding of physics change?
1:02:30 The next episode
Dec 15, 2022 • 58min

Anders Sandberg on ChatGPT and the Future of AI

Anders Sandberg from the Future of Humanity Institute joins the podcast to discuss ChatGPT, large language models, and what he's learned about the risks and benefits of AI.

Timestamps:
00:00 Introduction
00:40 ChatGPT
06:33 Will AI continue to surprise us?
16:22 How do language models fail?
24:23 Language models trained on their own output
27:29 Can language models write college-level essays?
35:03 Do language models understand anything?
39:59 How will AI models improve in the future?
43:26 AI safety in light of recent AI progress
51:28 AIs should be uncertain about values
Dec 8, 2022 • 48min

Vincent Boulanin on Military Use of Artificial Intelligence

Vincent Boulanin joins the podcast to explain how modern militaries use AI, including in nuclear weapons systems. Learn more about Vincent's work: https://sipri.org

Timestamps:
00:00 Introduction
00:45 Categorizing risks from AI and nuclear
07:40 AI being used by non-state actors
12:57 Combining AI with nuclear technology
15:13 A human should remain in the loop
25:05 Automation bias
29:58 Information requirements for nuclear launch decisions
35:22 Vincent's general conclusion about military machine learning
37:22 Specific policy measures for decreasing nuclear risk

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
Dec 1, 2022 • 45min

Vincent Boulanin on the Dangers of AI in Nuclear Weapons Systems

Vincent Boulanin joins the podcast to explain the dangers of incorporating artificial intelligence in nuclear weapons systems. Learn more about Vincent's work: https://sipri.org

Timestamps:
00:00 Introduction
00:55 What is strategic stability?
02:45 How can AI be a positive factor in nuclear risk?
10:17 Remote sensing of nuclear submarines
19:50 Using AI in nuclear command and control
24:21 How does AI change the game theory of nuclear war?
30:49 How could AI cause an accidental nuclear escalation?
36:57 How could AI cause an inadvertent nuclear escalation?
43:08 What is the most important problem in AI nuclear risk?
44:39 The next episode
Nov 24, 2022 • 52min

Robin Hanson on Predicting the Future of Artificial Intelligence

Robin Hanson joins the podcast to discuss AI forecasting methods and metrics.

Timestamps:
00:00 Introduction
00:49 Robin's experience working with AI
06:04 Robin's views on AI development
10:41 Should we care about metrics for AI progress?
16:56 Is it useful to track AI progress?
22:02 When should we begin worrying about AI safety?
29:16 The history of AI development
39:52 AI progress that deviates from current trends
43:34 Is this AI boom different than past booms?
48:26 Different metrics for predicting AI
Nov 17, 2022 • 1h

Robin Hanson on Grabby Aliens and When Humanity Will Meet Them

Robin Hanson joins the podcast to explain his theory of grabby aliens and its implications for the future of humanity. Learn more about the theory here: https://grabbyaliens.com

Timestamps:
00:00 Introduction
00:49 Why should we care about aliens?
05:58 Loud alien civilizations and quiet alien civilizations
08:16 Why would some alien civilizations be quiet?
14:50 The moving parts of the grabby aliens model
23:57 Why is humanity early in the universe?
28:46 Couldn't we just be alone in the universe?
33:15 When will humanity expand into space?
46:05 Will humanity be more advanced than the aliens we meet?
49:32 What if we discovered aliens tomorrow?
53:44 Should the way we think about aliens change our actions?
57:48 Can we reasonably theorize about aliens?
53:39 The next episode
