For Humanity: An AI Risk Podcast

The AI Risk Network
Feb 28, 2024 • 1h 32min

"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview

In Episode #17, "AI Risk = Jenga," John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered on existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out brings you one step closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Liron's YouTube channel: https://youtube.com/@liron00?si=cqIo5...
More on rationalism: https://www.lesswrong.com/
More on California State Senate Bill SB-1047:
https://leginfo.legislature.ca.gov/fa...
https://thezvi.substack.com/p/on-the-...
Warren Wolf: https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Feb 26, 2024 • 3min

"AI Risk=Jenga" For Humanity, An AI Safety Podcast #17 TRAILER, Liron Shapira Interview

In the Episode #17 TRAILER, "AI Risk = Jenga," John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered on existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out brings you one step closer to collapse. He explains how something like Sora, seemingly just a video tool, is actually a significant, real Jenga piece that could end all life on earth.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Feb 21, 2024 • 44min

"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16

In Episode #16, AI Risk Denier Down, things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why...
https://www.understandingai.org/p/why...
Tim was not prepared to discuss this work, which is when things started to go off the rails.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

MY QUESTIONS FOR TIM (we didn't even get halfway through, lol; YouTube won't let me post all of them, so I'm just including the second-essay questions)

OK, let's get into your second essay, "Why I'm not afraid of superintelligent AI taking over the world," from 11/15/23.
-You find chess a striking example of how AI will not take over the world.
-But I'd like to talk about AI safety researcher Steve Omohundro's take on chess.
-He says that if you asked an unaligned AGI to get better at chess, it would first break into other servers to steal computing power so it could be better at chess. Then, when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you.
-Where is he wrong?
-You wrote: "Think about a hypothetical graduate student. Let's say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, they don't contain very much knowledge she doesn't already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they 'run out of data,' but because they reached the frontiers of human knowledge."
-In this you seem to assume that any one human is capable of mastering all the knowledge in a subject area better than any AI, because you seem to believe that one human is capable of holding ALL of the knowledge available on a given subject.
-This is ludicrous to me. You think humans are far too special.
-AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE.
-How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this.
-Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe?
-A human who has read 1,000 books in one area can compete for knowledge with an AGI that has read millions of books in thousands of areas? Really?
-You wrote: "AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It's locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world."
-Why do you assume an unaligned AGI would not raid every private database on earth in a very short time and take in all this knowledge you find so special?
-Does this claim rest on the security protocols of the big AI companies?
-Security protocols, even at OpenAI, are seen to be highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into anything. An AGI's ability to spot and exploit vulnerabilities in human-written code is widely predicted.
-Let's see if we can leave this conversation with a note of agreement. Is there anything you think we can agree on?

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Feb 20, 2024 • 3min

"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16 TRAILER

In the Episode #16 TRAILER, AI Risk Denier Down, things get weird. This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
https://www.understandingai.org/p/why-im-not-worried-about-ai-taking
Tim was not prepared to discuss this work, which is when things started to go off the rails.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Feb 19, 2024 • 1h

"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15

In Episode #15, AI Risk Super Bowl I: Connor vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and accelerationist cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff's help, reveals the truth about the e/acc movement: it's anti-human at its core.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Machine Learning Street Talk - YouTube
Full Debate: e/acc Leader Beff Jezos vs Doomer Connor Leahy
How Guillaume Verdon Became BEFF JEZOS, Founder of e/acc
Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407
Next week's guest Timothy Lee's website and related writing:
https://www.understandingai.org/
https://www.understandingai.org/p/why...
https://www.understandingai.org/p/why...

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Feb 12, 2024 • 3min

"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15 TRAILER

In the Episode #15 TRAILER, AI Risk Super Bowl I: Connor vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and accelerationist cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff's help, reveals the truth about the e/acc movement: it's anti-human at its core.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Jan 30, 2024 • 1h 40min

"Uncontrollable AI" For Humanity: An AI Safety Podcast, Episode #13 , Darren McKee Interview

In Episode #13, "Uncontrollable AI," John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World.

In this episode, Darren starts off on an optimistic note by saying that AI safety is winning. You don't often hear it, but Darren says the world has moved on AI safety with greater speed, focus, and real promise than most in the AI community had thought possible. Apologies for the laggy cam on Darren!

Darren's book is an excellent resource; like this podcast, it is intended for the general public.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Darren's book: https://www.amazon.com/Uncontrollable...
My Dad's favorite Messiah recording (3:22-6:55 only, lol!!): https://www.youtube.com/watch?v=lFjQ7...

Sample letter/email to an elected official:

Dear XXXX,

I'm a constituent of yours; I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from artificial intelligence. It is the most important issue in human history; nothing else is close.

Have you read the 22-word statement organized by the Center for AI Safety on 5/31/23 that Sam Altman and all the big AI CEOs signed? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?

Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo.

It's like a pharma company saying they have a drug they claim can cure all diseases, but it hasn't been through any clinical trials and it may also kill anyone who takes it. Then, with no oversight or regulation, they have put the new drug in the public water supply. Big AI is making tech its makers openly admit they cannot control and do not understand, and that could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.

I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law: things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations above a certain level.

I'd like to discuss this with you or someone from your office over the phone or a Zoom call. Would that be possible?

Thanks very much,
XXXXXX
Address
Phone

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Jan 29, 2024 • 2min

"Uncontrollable AI" For Humanity: An AI Safety, Podcast Episode #13, Author Darren McKee Interview

In the Episode #13 TRAILER, "Uncontrollable AI," John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World.

In this trailer, Darren starts off on an optimistic note by saying that AI safety is winning. You don't often hear it, but Darren says the world has moved on AI safety with greater speed, focus, and real promise than most in the AI community had thought possible.

Darren's book is an excellent resource; like this podcast, it is intended for the general public.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Darren's book

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
Jan 25, 2024 • 1h 40min

"AI Risk Debate" For Humanity: An AI Safety Podcast Episode #12 Theo Jaffee Interview

In Episode #12, we have our first For Humanity debate!! John talks with Theo Jaffee, a fast-rising AI podcaster who is a self-described "techno-optimist." The debate covers a wide range of topics in AI risk.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Theo's YouTube channel: https://youtube.com/@theojaffee8530?si=aBnWNdViCiL4ZaEg

Glossary (first definitions by ChatGPT-4; I asked it to give answers simple enough for an elementary school student to understand, lol, I find this helpful often!):

Reinforcement Learning with Human Feedback (RLHF)
Definition: RLHF, or Reinforcement Learning with Human Feedback, is like teaching a computer to make decisions by giving it rewards when it does something good and telling it what's right when it makes a mistake. It's a way for computers to learn and get better at tasks with the help of guidance from humans, just like how a teacher helps students learn. So, it's like teamwork between people and computers to make the computer really smart!

Model Weights
Definition: Model weights are like the special numbers that help a computer understand and remember things. Imagine it's like a recipe book, and these weights are the amounts of ingredients needed to make a cake. When the computer learns new things, these weights get adjusted so that it gets better at its job, just like changing the recipe to make the cake taste even better! So, model weights are like the secret ingredients that make the computer really good at what it does.

Foom/Fast Take-off
Definition: "AI fast take-off" or "foom" refers to the idea that artificial intelligence (AI) could become super smart and powerful really quickly. It's like imagining a computer getting super smart all of a sudden, like magic! Some people use the word "foom" to talk about the possibility of AI becoming super intelligent in a short amount of time. It's a bit like picturing a computer going from learning simple things to becoming incredibly smart in the blink of an eye! Foom comes from cartoons; it's the sound a superhero makes in comic books when they burst off the ground into flight.

Gradient Descent
Definition: Gradient descent is like a treasure hunt for the best way to do something. Imagine you're on a big hill with a metal detector, trying to find the lowest point. The detector beeps louder when you're closer to the lowest spot. In gradient descent, you adjust your steps based on these beeps to reach the lowest point on the hill, and in the computer world, it helps find the best values for a task, like making a robot walk smoothly or a computer learn better. (A small code sketch of this idea follows these notes.)

Orthogonality
Definition: Orthogonality is like making sure things are independent and don't mess each other up. Think of a chef organizing ingredients on a table: if each ingredient has its own space and doesn't mix with others, it's easier to work. In computers, orthogonality means keeping different parts separate, so changing one thing doesn't accidentally affect something else. It's like having a well-organized kitchen where each tool has its own place, making it easy to cook without chaos!

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
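To make the gradient descent definition above a bit more concrete, here is a minimal Python sketch (not from the episode; the toy loss function, starting value, and step size are made up for illustration). It walks a single number downhill until it reaches the bottom of a simple "hill."

# Minimal gradient descent sketch: walk downhill on the toy "hill" loss(w) = (w - 3)**2,
# whose lowest point is at w = 3. (Illustrative only; names and values are invented.)

def loss(w):
    return (w - 3) ** 2

def gradient(w):
    # Derivative of (w - 3)**2 with respect to w; its sign tells us which way is downhill.
    return 2 * (w - 3)

w = 0.0              # start somewhere on the hill
learning_rate = 0.1  # how big each step is

for step in range(50):
    w = w - learning_rate * gradient(w)  # take one step in the downhill direction

print(w)        # close to 3.0, the bottom of the hill
print(loss(w))  # close to 0.0

In the metal-detector analogy, gradient(w) plays the role of the beeps telling you which way is lower, and the model weights described above are exactly the kind of values that get nudged this way during training, just with millions of numbers instead of one.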
Jan 22, 2024 • 5min

"AI Risk Debate" For Humanity: An AI Safety Podcast Episode #12 Theo Jaffee Interview TRAILER

In the Episode #12 TRAILER, we have our first For Humanity debate!! John talks with Theo Jaffee, a fast-rising AI podcaster who is a self-described "techno-optimist." The debate covers a wide range of topics in AI risk.

This podcast is not journalism. But it's not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:
Theo's YouTube channel: https://youtube.com/@theojaffee8530?s...

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
