For Humanity: An AI Safety Podcast

John Sherman
Mar 20, 2024 • 1h 49min

“AI Risk Realist vs. Coding Cowboy” For Humanity: An AI Safety Podcast Episode #20

In Episode #20, “AI Safety Debate: Risk Realist vs Coding Cowboy,” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full conversation covers questions like: can AI systems be contained to the digital world? Should we build data centers with explosives lining the walls, just in case? Are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Community Note: So, after much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023--2024. Also, I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!

RESOURCES:
Time Article on the New Report: AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
John's Upcoming Talk in Philadelphia! It is open to the public; you will need to make a free account at meetup.com: https://www.meetup.com/philly-net/eve...
FOLLOW DAVID SHAPIRO ON YOUTUBE! David Shapiro - YouTube
Dave Shapiro’s new video where he talks about For Humanity: “AGI: What will the first 90 days be like? And more VEXING questions from the audience!”
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: /discord
Mar 18, 2024 • 4min

“AI Risk Realist vs. Coding Cowboy” TRAILER For Humanity: An AI Safety Podcast Episode #20

In Episode #20 TRAILER, “AI Safety Debate: Risk Realist vs Coding Cowboy,” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full conversation covers questions like: can AI systems be contained to the digital world? Should we build data centers with explosives lining the walls, just in case? Are the AI CEOs just big liars? Mark believes we are on a safe course, and that when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Community Note: So, after much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, I think it adds some drama, but the people have spoken and it is dead. RIP Doom Rumble, 2023--2024. Also, I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close, my apologies lol, but a few sniffles can’t stop this thing!!

RESOURCES:
Time Article on the New Report: AI Poses Extinction-Level Risk, State-Funded Report Says | TIME
FOLLOW DAVID SHAPIRO ON YOUTUBE! David Shapiro - YouTube
Dave Shapiro’s new video where he talks about For Humanity: “AGI: What will the first 90 days be like? And more VEXING questions from the audience!”
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Pause AI
Mar 13, 2024 • 1h 41min

“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19

A discussion of the dangers of AI surpassing human intelligence, the societal implications, and the need for consent. The conversation covers the risks of AI and how to ensure a positive future, including building a digital super organism; the future impact of advanced AI on society, job automation, and economic shifts; and the complexities of AI development, international cooperation, and geopolitical concerns.
Mar 11, 2024 • 7min

“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19 TRAILER

In Episode #19 TRAILER, “David Shapiro Interview,” John talks with AI/tech YouTube star David Shapiro. David has several successful YouTube channels; his main channel (link below, go follow him!), with more than 140k subscribers, is a constant source of new AI, AGI, and post-labor-economy video content. Dave does a great job breaking things down. But a lot of Dave’s content is about a post-AGI future, and this podcast’s main concern is that we won’t get there, because AGI will kill us all first. So this show is a two-part conversation: first, about whether we can live past AGI, and second, about the issues we’d face in a world where humans and AGIs are co-existing. In this trailer, Dave gets to the edge of giving his p(doom).

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

FOLLOW DAVID SHAPIRO ON YOUTUBE! https://youtube.com/@DaveShap?si=o_USH-v0fDyo23fm
Feb 28, 2024 • 1h 32min

"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview

In Episode #17, “AI Risk = Jenga,” John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out leaves you one step closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Liron’s YouTube channel: https://youtube.com/@liron00?si=cqIo5...
More on rationalism: https://www.lesswrong.com/
More on California State Senate Bill SB-1047: https://leginfo.legislature.ca.gov/fa... and https://thezvi.substack.com/p/on-the-...
Warren Wolf: https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL
Feb 26, 2024 • 3min

"AI Risk=Jenga" For Humanity, An AI Safety Podcast #17 TRAILER, Liron Shapira Interview

In Episode #17 TRAILER, “AI Risk = Jenga,” John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out leaves you one step closer to collapse. He explains how something like Sora, seemingly just a video tool, is actually a significant, real Jenga piece and could end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Feb 21, 2024 • 44min

"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16

In Episode #16, “AI Risk-Denier Down,” things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays: https://www.understandingai.org/p/why... https://www.understandingai.org/p/why... Tim was not prepared to discuss this work, which is when things started to get off the rails.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

MY QUESTIONS FOR TIM (we didn’t even get halfway through, lol; YouTube won’t let me put all of them, so I’m just including the second-essay questions):

OK, let’s get into your second essay, “Why I’m not afraid of superintelligent AI taking over the world,” from 11/15/23.
- You cite chess as a striking example of how AI will not take over the world. But I’d like to talk about AI safety researcher Steve Omohundro’s take on chess. He says that if you asked an unaligned AGI to get better at chess, it would first break into other servers to steal computing power so it could be better at chess. Then, when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you. Where is he wrong?
- You wrote: “Think about a hypothetical graduate student. Let’s say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, they don’t contain very much knowledge she doesn’t already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they ‘run out of data,’ but because they reached the frontiers of human knowledge.”
- In this you seem to assume that any one human is capable of mastering all the knowledge in a subject area better than any AI, because you seem to believe that one human is capable of holding ALL of the knowledge available on a given subject. This is ludicrous to me. You think humans are far too special.
- AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE. How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this.
- Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe?
- A human who has read 1,000 books in one area can compete for knowledge with an AGI that has read millions of books in thousands of areas? Really?
- You wrote: “AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It’s locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world.”
- Why do you assume an unaligned AGI would not raid every private database on earth in a very short time and take in all this knowledge you find so special? Does this claim rest on the security protocols of the big AI companies?
- Security protocols, even at OpenAI, are seen to be highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into anything. An AGI’s ability to spot and exploit vulnerabilities in human-written code is widely predicted.
- Let’s see if we can leave this conversation with a note of agreement. Is there anything you think we can agree on?
Feb 20, 2024 • 3min

"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16 TRAILER

In Episode #16 TRAILER, “AI Risk-Denier Down,” things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays: https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent https://www.understandingai.org/p/why-im-not-worried-about-ai-taking Tim was not prepared to discuss this work, which is when things started to get off the rails.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
Feb 19, 2024 • 1h

"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15

In Episode #15, “AI Risk Super Bowl I: Connor vs. Beff, Highlights and Post-Game Analysis,” John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and acceleration cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Machine Learning Street Talk - YouTube
Full Debate: e/acc Leader Beff Jezos vs Doomer Connor Leahy
How Guillaume Verdon Became BEFF JEZOS, Founder of e/acc
Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407
Next week’s guest Timothy Lee’s website and related writing: https://www.understandingai.org/ https://www.understandingai.org/p/why... https://www.understandingai.org/p/why...
Feb 12, 2024 • 2min

"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15 TRAILER

In Episode #15 TRAILER, “AI Risk Super Bowl I: Connor vs. Beff, Highlights and Post-Game Analysis,” John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and acceleration cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.
