For Humanity: An AI Risk Podcast

The AI Risk Network
Dec 6, 2023 • 44min

"Team Save Us vs Team Kill Us" For Humanity, An AI Safety Podcast Episode #6: The Munk Debate

In Episode #6, Team Save Us vs. Team Kill Us, host John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in Toronto in June 2023, and it remains entirely current and relevant today; it stands alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency.

In this Munk Debate, you’ll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John’s title, not theirs), and Yann LeCun and Melanie Mitchell on Team Kill Us (they’re called pro and con in the debate; Kill vs. Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. Let’s call it facts and analysis.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES
THE MUNK DEBATES: https://munkdebates.com

Max Tegmark
➡️X: https://twitter.com/tegmark
➡️Max's Website: https://space.mit.edu/home/tegmark
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Future of Life Institute: https://futureoflife.org

Yoshua Bengio
➡️Website: https://yoshuabengio.org/

Melanie Mitchell
➡️Website: https://melaniemitchell.me/
➡️X: https://x.com/MelMitchell1?s=20

Yann LeCun
➡️Google Scholar: https://scholar.google.com/citations?...
➡️X: https://x.com/ylecun?s=20

#AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Dec 3, 2023 • 2min

Team Save Us vs Team Kill Us: For Humanity, An AI Safety Podcast Episode #6: The Munk Debate TRAILER

Want to see the most important issue in human history, extinction from AI, robustly debated, live and in person? It doesn’t happen nearly often enough.

In our Episode #6, Team Save Us vs. Team Kill Us, TRAILER, John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in June 2023, and it remains entirely current and relevant today; it stands alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency.

In this Munk Debate, you’ll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John’s title, not theirs), and Yann LeCun and Melanie Mitchell on Team Kill Us (they’re called pro and con in the debate; Kill vs. Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout.

This is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. Let’s call it facts and analysis.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES
THE MUNK DEBATES: https://munkdebates.com

Max Tegmark
➡️X: https://twitter.com/tegmark
➡️Max's Website: https://space.mit.edu/home/tegmark
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Future of Life Institute: https://futureoflife.org

Yoshua Bengio
➡️Website: https://yoshuabengio.org/

Melanie Mitchell
➡️Website: https://melaniemitchell.me/
➡️X: https://x.com/MelMitchell1?s=20

Yann LeCun
➡️Google Scholar: https://scholar.google.com/citations?...
➡️X: https://x.com/ylecun?s=20

#AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Nov 27, 2023 • 41min

Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5

In Episode #5, Part 2, John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher.

Among the many topics discussed in this episode:
- what is at the core of AI safety risk skepticism
- why AI safety research leaders themselves are so all over the map
- why journalism is failing so miserably to cover AI safety appropriately
- the drastic step the federal government could take to really slow Big AI down

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

ROMAN YAMPOLSKIY RESOURCES
➡️Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampolskiy
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Roman on Medium: https://romanyam.medium.com/

#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Nov 26, 2023 • 3min

Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5 TRAILER

In Episode #5, Part 2, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher.

Among the many topics discussed in this episode:
- what is at the core of AI safety risk skepticism
- why AI safety research leaders themselves are so all over the map
- why journalism is failing so miserably to cover AI safety appropriately
- the drastic step the federal government could take to really slow Big AI down

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

ROMAN YAMPOLSKIY RESOURCES
➡️Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampolskiy
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Roman on Medium: https://romanyam.medium.com/

#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Nov 22, 2023 • 35min

Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4

In Episode #4, Part 1, John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher.

Among the many topics discussed in this episode:
- why more average people aren't more involved and upset about AI safety
- how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and then go back the next day
- how we can talk to our kids about these dark, existential issues
- what if AI safety researchers concerned about human extinction from AI are just somehow wrong?

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Nov 20, 2023 • 2min

Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4 TRAILER

In Episode #4, Part 1, TRAILER: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher.

Among the many topics discussed in this episode:
- why more average people aren't more involved and upset about AI safety
- how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and then go back the next day
- how we can talk to our kids about these dark, existential issues
- what if AI safety researchers concerned about human extinction from AI are just somehow wrong?

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Nov 15, 2023 • 28min

The Interpretability Problem: For Humanity, An AI Safety Podcast Episode #3

Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more, discussing how current AI systems are black boxes: no one has any clue how they work inside.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Nov 13, 2023 • 1min

The Interpretability Problem: For Humanity, An AI Safety Podcast Episode #3 Trailer

This is the trailer for Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, and many more, discussing how current AI systems are black boxes: no one has any clue how they work inside.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

#AI #airisk #alignment #interpretability #doom #aisafety #openai #anthropic #eliezeryudkowsky #maxtegmark #connorleahy

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Nov 8, 2023 • 34min

For Humanity, An AI Safety Podcast Episode #2: The Alignment Problem

Did you know the makers of AI have no idea how to control their technology? They have no clue how to align it with human goals, values, and ethics. You know, stuff like: don't kill humans.

This is the AI safety podcast for all people, no tech background required. We focus only on the threat of human extinction from AI.

In Episode #2, The Alignment Problem, host John Sherman explores how alarmingly far AI safety researchers are from finding any way to control AI systems, much less their superintelligent children, who will arrive soon enough.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Nov 6, 2023 • 2min

For Humanity, An AI Safety Podcast: Episode #2, The Alignment Problem, Trailer

Did you know the makers of AI have no idea how to control their technology, even while they admit it has the power to cause human extinction? In For Humanity: An AI Safety Podcast, Episode #2, The Alignment Problem, we look into the fact that no one has any clue how to align an AI system with human values, ethics, and goals. Such as: don't kill all the humans. Episode #2 drops Wednesday; this is the trailer.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
