Future of Life Institute Podcast

Future of Life Institute
Jun 7, 2024 • 1h 36min

Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light

Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com

Timestamps:
00:00 US-China competition and risk
18:01 The security dilemma
30:21 Official and unofficial diplomacy
39:53 Hotlines between countries
01:01:54 Preventing escalation after war
01:09:58 Catastrophic biological risks
01:20:42 Ultraviolet germicidal light
01:25:54 Ancient civilizational collapse
May 24, 2024 • 37min

Christian Nunes on Deepfakes (with Max Tegmark)

Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org

Timestamps:
00:00 The National Organization for Women (NOW)
05:37 Deepfakes and women
10:12 Protecting ordinary victims of deepfakes
16:06 Deepfake legislation
23:38 Current harm from deepfakes
30:20 Bodily autonomy as a right
34:44 NOW's work on AI

Here are FLI's recommended amendments to legislative proposals on deepfakes: https://futureoflife.org/document/recommended-amendments-to-legislative-proposals-on-deepfakes/
May 3, 2024 • 1h 45min

Dan Faggella on the Race to AGI

Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com

Timestamps:
00:00 Value differences in AI
12:07 Should we eventually create AGI?
28:22 What is a worthy successor?
43:19 AI changing power dynamics
59:00 Open source AI
01:05:07 What drives AI progress?
01:16:36 What limits AI progress?
01:26:31 Which industries are using AI?
Apr 19, 2024 • 1h 27min

Liron Shapira on Superintelligence Goals

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
Apr 5, 2024 • 1h 26min

Annie Jacobsen on Nuclear War - A Second-by-Second Timeline

Annie Jacobsen joins the podcast to lay out a second-by-second timeline of how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

Timestamps:
00:00 A scenario of nuclear war
06:56 Who would launch an attack?
13:50 Detecting nuclear attacks
19:37 The first critical seconds
29:42 Decisions under time pressure
34:27 Lessons from insiders
44:18 Submarines
51:06 How did we end up like this?
59:40 Interceptor missiles
1:11:25 Nuclear weapons and cyberattacks
1:17:35 Concentration of power
Mar 14, 2024 • 1h 8min

Katja Grace on the Largest Survey of AI Researchers

Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.

Timestamps:
0:20 AI Impacts surveys
18:11 What AI will look like in 20 years
22:43 Experts’ extinction risk predictions
29:35 Opinions on slowing down AI development
31:25 AI “arms races”
34:00 AI risk areas with the most agreement
40:41 Do “high hopes and dire concerns” go hand-in-hand?
42:00 Intelligence explosions
45:37 Discontinuous progress
49:43 Impacts of AI crossing the human-level intelligence threshold
59:39 What does AI learn from human culture?
1:02:59 AI scaling
1:05:04 What should we do?
Feb 29, 2024 • 1h 36min

Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info

Timestamps:
00:00 Pausing AI
10:23 Risks during an AI pause
19:41 Hardware overhang
29:04 Technological progress
37:00 Safety research during a pause
54:42 Social dynamics of AI risk
1:10:00 What prevents cooperation?
1:18:21 What about China?
1:28:24 Protesting AGI corporations
Feb 16, 2024 • 58min

Sneha Revanur on the Social Effects of AI

Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org

Timestamps:
00:00 Encode Justice
06:11 AI ethics and AI safety
15:49 Humans in the loop
23:59 AI in social media
30:42 Deteriorating social skills?
36:00 AIs identifying as AIs
43:36 AI influence in elections
50:32 AIs interacting with human systems
Feb 2, 2024 • 1h 31min

Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable

Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/

Timestamps:
00:00 Is AI like a Shoggoth?
09:50 Scaling laws
16:41 Are humans more general than AIs?
21:54 Are AI models explainable?
27:49 Using AI to explain AI
32:36 Evidence for AI being uncontrollable
40:29 AI verifiability
46:08 Will AI be aligned by default?
54:29 Creating human-like AI
1:03:41 Robotics and safety
1:09:01 Obstacles to AI in the economy
1:18:00 AI innovation with current models
1:23:55 AI accidents in the past and future
Jan 19, 2024 • 48min

Special: Flo Crivello on AI as a New Form of Life

On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether AI regulation risks regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.

Timestamps:
00:00 Technological progress
07:59 Regulatory capture and AI
11:53 AI as a new form of life
15:44 Can AI development be paused?
20:12 Biden's executive order on AI
22:54 How would a GPU kill switch work?
27:00 Regulating models or applications?
32:13 AGI in 2-8 years
42:00 China and US collaboration on AI
