Future of Life Institute Podcast

Future of Life Institute
Nov 10, 2022 • 45min

Ajeya Cotra on Thinking Clearly in a Rapidly Changing World

Ajeya Cotra joins us to talk about thinking clearly in a rapidly changing world.

Learn more about the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:44 The default versus the accelerating picture of the future
04:25 The role of AI in accelerating change
06:48 Extrapolating economic growth
08:53 How do we know whether the pace of change is accelerating?
15:07 How can we cope with a rapidly changing world?
18:50 How could the future be utopian?
22:03 Is accelerating technological progress immoral?
25:43 Should we imagine concrete future scenarios?
31:15 How should we act in an accelerating world?
34:41 How Ajeya could be wrong about the future
41:41 What if change accelerates very rapidly?
Nov 3, 2022 • 54min

Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe

Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe.

Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:53 AI safety research in general
02:04 Realistic scenarios for AI catastrophes
06:51 A dangerous AI model developed in the near future
09:10 Assumptions behind dangerous AI development
14:45 Can AIs learn long-term planning?
18:09 Can AIs understand human psychology?
22:32 Training an AI model with naive safety features
24:06 Can AIs be deceptive?
31:07 What happens after deploying an unsafe AI system?
44:03 What can we do to prevent an AI catastrophe?
53:58 The next episode
Oct 27, 2022 • 48min

Ajeya Cotra on Forecasting Transformative Artificial Intelligence

Ajeya Cotra joins us to discuss forecasting transformative artificial intelligence.

Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:53 Ajeya's report on AI
01:16 What is transformative AI?
02:09 Forecasting transformative AI
02:53 Historical growth rates
05:10 Simpler forecasting methods
09:01 Biological anchors
16:31 Different paths to transformative AI
17:55 Which year will we get transformative AI?
25:54 Expert opinion on transformative AI
30:08 Are today's machine learning techniques enough?
33:06 Will AI be limited by the physical world and regulation?
38:15 Will AI be limited by training data?
41:48 Are there human abilities that AIs cannot learn?
47:22 The next episode
Oct 20, 2022 • 41min

Alan Robock on Nuclear Winter, Famine, and Geoengineering

Alan Robock joins us to discuss nuclear winter, famine and geoengineering.

Learn more about Alan's work: http://people.envsci.rutgers.edu/robock/
Follow Alan on Twitter: https://twitter.com/AlanRobock

Timestamps:
00:00 Introduction
00:45 What is nuclear winter?
06:27 A nuclear war between India and Pakistan
09:16 Targets in a nuclear war
11:08 Why does the world have so many nuclear weapons?
19:28 Societal collapse in a nuclear winter
22:45 Should we prepare for a nuclear winter?
28:13 Skepticism about nuclear winter
35:16 Unanswered questions about nuclear winter
Oct 13, 2022 • 49min

Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity

Brian Toon joins us to discuss the risk of nuclear winter.

Learn more about Brian's work: https://lasp.colorado.edu/home/people/brian-toon/
Read Brian's publications: https://airbornescience.nasa.gov/person/Brian_Toon

Timestamps:
00:00 Introduction
01:02 Asteroid impacts
04:20 The discovery of nuclear winter
13:56 Comparing volcanoes and asteroids to nuclear weapons
19:42 How did life survive the asteroid impact 65 million years ago?
25:05 How humanity could go extinct
29:46 Nuclear weapons as a great filter
34:32 Nuclear winter and food production
40:58 The psychology of nuclear threat
43:56 Geoengineering to prevent nuclear winter
46:49 Will humanity avoid nuclear winter?
Oct 6, 2022 • 47min

Philip Reiner on Nuclear Command, Control, and Communications

Philip Reiner joins us to talk about nuclear command, control, and communications systems.

Learn more about Philip's work: https://securityandtechnology.org/

Timestamps:
[00:00:00] Introduction
[00:00:50] Nuclear command, control, and communications
[00:03:52] Old technology in nuclear systems
[00:12:18] Incentives for nuclear states
[00:15:04] Selectively enhancing security
[00:17:34] Unilateral de-escalation
[00:18:04] Nuclear communications
[00:24:08] The CATALINK System
[00:31:25] AI in nuclear command, control, and communications
[00:40:27] Russia's war in Ukraine
Mar 4, 2022 • 2h 1min

Daniela and Dario Amodei on Anthropic

Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Topics discussed in this episode include:
- Anthropic's mission and research strategy
- Recent research and papers by Anthropic
- Anthropic's structure as a "public benefit corporation"
- Career opportunities

You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/
Watch the video version of this episode here: https://www.youtube.com/watch?v=uAA6PZkek4A
Careers at Anthropic: https://www.anthropic.com/#careers
Anthropic's Transformer Circuits research: https://transformer-circuits.pub/
Follow Anthropic on Twitter: https://twitter.com/AnthropicAI
microCOVID Project: https://www.microcovid.org/
Follow Lucas on Twitter: https://twitter.com/lucasfmperry

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:44 What was the intention behind forming Anthropic?
6:28 Do the founders of Anthropic share a similar view on AI?
7:55 What is Anthropic's focused research bet?
11:10 Does AI existential safety fit into Anthropic's work and thinking?
14:14 Examples of AI models today that have properties relevant to future AI existential safety
16:12 Why work on large scale models?
20:02 What does it mean for a model to lie?
22:44 Safety concerns around the open-endedness of large models
29:01 How does safety work fit into race dynamics to more and more powerful AI?
36:16 Anthropic's mission and how it fits into AI alignment
38:40 Why explore large models for AI safety and scaling to more intelligent systems?
43:24 Is Anthropic's research strategy a form of prosaic alignment?
46:22 Anthropic's recent research and papers
49:52 How difficult is it to interpret current AI models?
52:40 Anthropic's research on alignment and societal impact
55:35 Why did you decide to release tools and videos alongside your interpretability research?
1:01:04 What is it like working with your sibling?
1:05:33 Inspiration around creating Anthropic
1:12:40 Is there an upward bound on capability gains from scaling current models?
1:18:00 Why is it unlikely that continuously increasing the number of parameters on models will lead to AGI?
1:21:10 Bootstrapping models
1:22:26 How does Anthropic see itself as positioned in the AI safety space?
1:25:35 What does being a public benefit corporation mean for Anthropic?
1:30:55 Anthropic's perspective on windfall profits from powerful AI systems
1:34:07 Issues with current AI systems and their relationship with long-term safety concerns
1:39:30 Anthropic's plan to communicate its work to technical researchers and policy makers
1:41:28 AI evaluations and monitoring
1:42:50 AI governance
1:45:12 Careers at Anthropic
1:48:30 What it's like working at Anthropic
1:52:48 Why hire people of a wide variety of technical backgrounds?
1:54:33 What's a future you're excited about or hopeful for?
1:59:42 Where to find and follow Anthropic

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Feb 9, 2022 • 33min

Anthony Aguirre and Anna Yelizarova on FLI's Worldbuilding Contest

Anthony Aguirre and Anna Yelizarova join us to discuss FLI's new Worldbuilding Contest.

Topics discussed in this episode include:
- Motivations behind the contest
- The importance of worldbuilding
- The rules of the contest
- What a submission consists of
- Due date and prizes

Learn more about the contest here: https://worldbuild.ai/
Join the discord: https://discord.com/invite/njZyTJpwMz
You can find the page for the podcast here: https://futureoflife.org/2022/02/08/anthony-aguirre-and-anna-yelizarova-on-flis-worldbuilding-contest/
Watch the video version of this episode here: https://www.youtube.com/watch?v=WZBXSiyienI
Follow Lucas on Twitter here: twitter.com/lucasfmperry

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:30 What is "worldbuilding" and FLI's Worldbuilding Contest?
6:32 Why do worldbuilding for 2045?
7:22 Why is it important to practice worldbuilding?
13:50 What are the rules of the contest?
19:53 What does a submission consist of?
22:16 Due dates and prizes?
25:58 Final thoughts and how the contest contributes to creating beneficial futures

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Jan 26, 2022 • 1h 43min

David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy

David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy.

Topics discussed in this episode include:
- Virtual reality as genuine reality
- Why VR is compatible with the good life
- Why we can never know whether we're in a simulation
- Consciousness in virtual realities
- The ethics of simulated beings

You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-reality-virtual-worlds-and-the-problems-of-philosophy/
Watch the video version of this episode here: https://www.youtube.com/watch?v=hePEg_h90KI
Check out David's book and website here: http://consc.net/
Follow Lucas on Twitter here: https://twitter.com/lucasfmperry

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:43 How this book fits into David's philosophical journey
9:40 David's favorite part(s) of the book
12:04 What is the thesis of the book?
14:00 The core areas of philosophy and how they fit into Reality+
16:48 Techno-philosophy
19:38 What is "virtual reality?"
21:06 Why is virtual reality "genuine reality?"
25:27 What is the dust theory and what's it have to do with the simulation hypothesis?
29:59 How does the dust theory fit in with arguing for virtual reality as genuine reality?
34:45 Exploring criteria for what it means for something to be real
42:38 What is the common sense view of what is real?
46:19 Is your book intended to address common sense intuitions about virtual reality?
48:51 Nozick's experience machine and how questions of value fit in
54:20 Technological implementations of virtual reality
58:40 How does consciousness fit into all of this?
1:00:18 Substrate independence and if classical computers can be conscious
1:02:35 How do problems of identity fit into virtual reality?
1:04:54 How would David upload himself?
1:08:00 How does the mind-body problem fit into Reality+?
1:11:40 Is consciousness the foundation of value?
1:14:23 Does your moral theory affect whether you can live a good life in a virtual reality?
1:17:20 What does a good life in virtual reality look like?
1:19:08 David's favorite VR experiences
1:20:42 What is the moral status of simulated people?
1:22:38 Will there be unconscious simulated people with moral patiency?
1:24:41 Why we can never know we're not in a simulation
1:27:56 David's credences for whether we live in a simulation
1:30:29 Digital physics and what it says about the simulation hypothesis
1:35:21 Imperfect realism and how David sees the world after writing Reality+
1:37:51 David's thoughts on God
1:39:42 Moral realism or anti-realism?
1:40:55 Where to follow David and find Reality+

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Nov 2, 2021 • 1h 44min

Rohin Shah on the State of AGI Safety Research in 2021

Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss AI value alignment, how an AI researcher might decide whether to work on AI safety, and why we don't know that AI systems won't lead to existential risk.

Topics discussed in this episode include:
- Inner Alignment versus Outer Alignment
- Foundation Models
- Structural AI Risks
- Unipolar versus Multipolar Scenarios
- The Most Important Thing That Impacts the Future of Life

You can find the page for the podcast here: https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021
Watch the video version of this episode here: https://youtu.be/_5xkh-Rh6Ec
Follow the Alignment Newsletter here: https://rohinshah.com/alignment-newsletter/

Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
00:02:22 What is AI alignment?
00:06:00 How has your perspective of this problem changed over the past year?
00:06:28 Inner Alignment
00:13:00 Ways that AI could actually lead to human extinction
00:18:53 Inner Alignment and mesa optimizers
00:20:15 Outer Alignment
00:23:12 The core problem of AI alignment
00:24:54 Learning Systems versus Planning Systems
00:28:10 AI and Existential Risk
00:32:05 The probability of AI existential risk
00:51:31 Core problems in AI alignment
00:54:46 How has AI alignment, as a field of research, changed in the last year?
00:54:02 Large scale language models
00:54:50 Foundation Models
00:59:58 Why don't we know that AI systems won't totally kill us all?
01:09:05 How much of the alignment and safety problems in AI will be solved by industry?
01:14:44 Do you think about what beneficial futures look like?
01:19:31 Moral Anti-Realism and AI
01:27:25 Unipolar versus Multipolar Scenarios
01:35:33 What is the safety team at DeepMind up to?
01:35:41 What is the most important thing that impacts the future of life?

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
