Future of Life Institute Podcast

Future of Life Institute
Mar 14, 2024 • 1h 8min

Katja Grace on the Largest Survey of AI Researchers

Katja Grace discusses findings from the largest survey of AI researchers: their beliefs about discontinuous progress, the impacts of AI crossing human-level intelligence, intelligence explosions, and mitigating AI risk. Topics include AI arms races, slowing down AI development, and the dynamics of intelligence and power. Grace also explores high hopes and dire concerns, AI scaling, and what AI learns from human culture.
Feb 29, 2024 • 1h 36min

Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

A discussion of pausing frontier AI development, the risks during a pause, hardware overhang, safety research, the social dynamics of AI risk, and the challenges of cooperation among AGI corporations. The conversation also covers the potential impact on China and the case for protesting AGI companies.
Feb 16, 2024 • 58min

Sneha Revanur on the Social Effects of AI

Sneha Revanur, founder of the youth-led AI advocacy group Encode Justice, discusses the social effects of AI, the relationship between AI ethics and AI safety, keeping humans in the loop, AI in social media, whether AIs should identify themselves as AIs, AI influence in elections, and how AIs interact with human systems.
Feb 2, 2024 • 1h 31min

Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable

Roman Yampolskiy, AI safety researcher, discusses whether AI is like a Shoggoth, scaling laws, the evidence that AI may be uncontrollable, and the safety of designing human-like AI. They also explore the limitations of AI explainability, verifiability, and alignment. The conversation touches on the challenges of integrating AI into society, deleting dangerous information from neural networks, building human-like AI for robotics, and potential obstacles to deploying language models in various industries. They conclude by discussing Goodhart's Law, a positive vision for AI, and the challenges of regulating AI investment.
Jan 19, 2024 • 48min

Special: Flo Crivello on AI as a New Form of Life

Flo Crivello, an expert in AI and its implications for society, discusses AI as a new form of life, the risks of regulatory capture, the possibility of a GPU kill switch, and his prediction of AGI within 2-8 years. They also explore Biden's executive order on AI, whether to regulate models or applications, and collaboration between China and the US on AI. The episode delves into the challenges of managing AI systems and the philosophical question of subjective experience in AI.
Jan 6, 2024 • 1h 39min

Carl Robichaud on Preventing Nuclear War

Carl Robichaud, an expert on the nuclear arms race and nuclear risk, discusses the new nuclear arms race, the role of world leaders and ideology in nuclear risk, the impact of nuclear weapons on stable peace, North Korea's nuclear weapons, public perception of nuclear risk, and how to reach a stable, low-risk era.
Dec 14, 2023 • 1h 43min

Frank Sauer on Autonomous Weapon Systems

Frank Sauer discusses autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapons, and the political prospects for regulating such systems.
Dec 1, 2023 • 1h 41min

Darren McKee on Uncontrollable Superintelligence

Darren McKee, AI control and alignment expert, discusses the difficulty of controlling AI, the development of AI goals and traits, and the challenges of AI alignment. They explore the speed of AI cognition, the reliability of current and future AI systems, and the need to plan for multiple AI scenarios. Additionally, they discuss the possibility of AIs seeking self-preservation and whether there is a unified solution to AI alignment.
Nov 17, 2023 • 1h 49min

Mark Brakel on the UK AI Summit and the Future of AI Policy

Mark Brakel, Director of Policy at the Future of Life Institute, talks about the AI Safety Summit in the UK, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, autonomy in weapon systems, and the importance of companies conducting risk assessments and being held legally liable for their actions.
Nov 3, 2023 • 2h 7min

Dan Hendrycks on Catastrophic AI Risks

Dan Hendrycks, AI risk expert, discusses xAI, how thinking about AI risk has evolved, malicious use of AI, AI race dynamics, making AI organizations safer, and representation engineering for understanding AI traits like deception.