

For Humanity: An AI Safety Podcast
The AI Risk Network
For Humanity: An AI Safety Podcast is the AI safety podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within 2-10 years. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity. theairisknetwork.substack.com
Episodes

Jul 10, 2024 • 1h 25min
Episode #36 “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast
In Episode #36, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than two hours and has been broken into two shows; this is the second of the two.

Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

TIMESTAMPS:
The whistleblower's concerns (00:00:00)
Introduction to the podcast (00:01:09)
The urgency of addressing AI risk (00:02:18)
The potential consequences of falling behind in AI (00:04:36)
Transitioning to working on AI risk (00:06:33)
Engagement with the State Department (00:08:07)
Project assessment and public visibility (00:10:10)
Motivation for taking on the detective work (00:13:16)
Alignment with the government's safety culture (00:17:03)
Potential government oversight of AI labs (00:20:50)
The whistleblowers' concerns (00:21:52)
Shifting control to the government (00:22:47)
Elite group within the government (00:24:12)
Government competence and allocation of resources (00:25:34)
Political level and tech expertise (00:27:58)
Challenges in government engagement (00:29:41)
State Department's engagement and assessment (00:31:33)
Recognition of government competence (00:34:36)
Engagement with frontier labs (00:35:04)
Whistleblower insights and concerns (00:37:33)
Whistleblower motivations (00:41:58)
Engagements with AI Labs (00:42:54)
Emotional Impact of the Work (00:43:49)
Workshop with Government Officials (00:44:46)
Challenges in Policy Implementation (00:45:46)
Expertise and Insights (00:49:11)
Future Engagement with US Government (00:50:51)
Flexibility of Private Sector Entity (00:52:57)
Impact on Whistleblowing Culture (00:55:23)
Key Recommendations (00:57:03)
Security and Governance of AI Technology (01:00:11)
Obstacles and Timing in Hardware Development (01:04:26)
The AI Lab Security Measures (01:04:50)
Nvidia's Stance on Regulations (01:05:44)
Export Controls and Governance Failures (01:07:26)
Concerns about AGI and Alignment (01:13:16)
Implications for Future Generations (01:16:33)
Personal Transformation and Mental Health (01:19:23)
Starting a Nonprofit for AI Risk Awareness (01:21:51)

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jul 8, 2024 • 6min
Episode #36 Trailer “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast
In the Episode #36 trailer, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than two hours and has been broken into two shows; this is the second of the two.

Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

TIMESTAMPS:
The assignment from the State Department (00:00:00) Discussion about the task given by the State Department team regarding the assessment of safety and security in frontier AI and advanced AI systems.
Transition to detective work (00:00:30) The transition to a detective-like approach in gathering information and engaging with whistleblowers and clandestine meetings.
Assessment of the AI safety community (00:01:05) A critique of the lack of action orientation and proactive approach in the AI safety community.
Engagement with the Department of Defense (DoD) (00:02:57) Discussion about the engagement with the DoD, its existing safety culture, and the organizations involved in testing and evaluations.
Shifting control to the government (00:03:54) Exploration of the need to shift control to the government and regulatory level for effective steering of the development of AI technology.
Concerns about weaponization and loss of control (00:04:45) A discussion about concerns regarding weaponization and loss of control in AI labs and the need for more ambitious recommendations.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jul 3, 2024 • 1h 1min
Episode #35 “The AI Risk Investigators: Inside Gladstone AI, Part 1” For Humanity: An AI Risk Podcast
In Episode #35, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than two hours and will be broken into two shows.

Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES on YOUTUBE!! https://www.youtube.com/@DoomDebates
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

TIMESTAMPS:
Sincerity and Sam Altman (00:00:00) Discussion on the perceived sincerity of Sam Altman and his actions, including insights into his character and motivations.
Introduction to Gladstone AI (00:01:14) Introduction to Gladstone AI, its involvement with the US government on AI risk, and the purpose of the podcast episode.
Doom Debates on YouTube (00:02:17) Promotion of the "Doom Debates" YouTube channel and its content, featuring discussions on AI doom and various perspectives on the topic.
YC Experience and Sincerity in Startups (00:08:13) Insight into the Y Combinator (YC) experience and the emphasis on sincerity in startups, with personal experiences and observations shared.
OpenAI and Sincerity (00:11:51) Exploration of sincerity in relation to OpenAI, including evaluations of the company's mission, actions, and the challenges it faces in the AI landscape.
The scaling story (00:21:33) Discussion of the scaling story related to AI capabilities and the impact of increasing data, processing power, and training models.
The call about GPT-3 (00:22:29) Eduard Harris receiving a call about the scaling story and the significance of GPT-3's capabilities, leading to a decision to focus on AI development.
Transition from Y Combinator (00:24:42) Jeremie and Eduard Harris leaving their previous company and transitioning from Y Combinator to focus on AI development.
Security concerns and exfiltration (00:31:35) Discussion about the security vulnerabilities and potential exfiltration of AI models from top labs, highlighting the inadequacy of security measures.
Government intervention and security (00:38:18) Exploration of the potential for government involvement in providing security assets to protect AI technology from exfiltration and the need for a pause in development until labs are secure.
Resource reallocation for safety and security (00:40:03) Discussion about the need to reallocate resources for safety, security, and alignment technology to ensure the responsible development of AI.
OpenAI's computational resource allocation (00:42:10) Concerns about OpenAI's failure to allocate computational resources for safety and alignment efforts, as well as the departure of a safety-minded board member.
China's Strategic Moves (00:43:07) Discussion on potential aggressive actions by China to prevent a permanent disadvantage in AI technology.
China's Sincerity in AI Safety (00:44:29) Debate on the sincerity of China's commitment to AI safety and the influence of the CCP.
Taiwan Semiconductor Manufacturing Company (TSMC) (00:47:47) Explanation of TSMC's role in fabricating advanced semiconductor chips and its impact on the AI race.
US and China's Power Constraints (00:51:30) Comparison of the constraints faced by the US and China in terms of advanced chips and grid power.
Nuclear Power and Renewable Energy (00:52:23) Discussion on the power sources being pursued by China and the US to address their respective constraints.
Future Scenarios (00:56:20) Exploration of potential outcomes if China overtakes the US in AI technology.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jul 1, 2024 • 5min
Episode #35 TRAILER “The AI Risk Investigators: Inside Gladstone AI, Part 1” For Humanity: An AI Risk Podcast
In the Episode #35 trailer, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than two hours and will be broken into two shows.

TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

TIMESTAMPS:
Sam Altman's intensity (00:00:10) Sam Altman's intense demeanor and competence, as observed by the speaker.
Security risks of superintelligent AI (00:01:02) Concerns about the potential loss of control over superintelligent systems and the security vulnerabilities in top AI labs.
Silicon Valley's security hubris (00:02:04) Critique of Silicon Valley's overconfidence in technology and lack of security measures, particularly in comparison to nation-state level cyber threats.
China's AI capabilities (00:02:36) Discussion about the security deficiency in the United States and the potential for China to have better AI capabilities due to security leaks.
Foreign actors' capacity for exfiltration (00:03:08) Foreign actors' incentives and capacity to exfiltrate frontier models, leading to the need to secure infrastructure before scaling and accelerating AI capabilities.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jun 26, 2024 • 1h 17min
Episode #34 - “The Threat of AI Autonomous Replication” For Humanity: An AI Risk Podcast
In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director of the Centre pour la sécurité de l'IA (France's Center for AI Security). Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed because the public and the journalist class are uneducated on AI risk, and the potential for a disastrous Yann LeCun-ification of the upcoming February 2025 Paris AI Safety Summit.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
Charbel-Raphaël Segerie's LessWrong writing, much more on many topics we covered! https://www.lesswrong.com/users/charbel-raphael
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

TIMESTAMPS:
The threat of AI autonomous replication (00:00:43)
Introduction to France's Center for AI Security (00:01:23)
Challenges in AI risk awareness in France (00:09:36)
The influence of Yann LeCun on AI risk perception in France (00:12:53)
Autonomous replication and adaptation of AI (00:15:25)
The potential impact of autonomous replication (00:27:24)
The dead internet scenario (00:27:38)
The potential existential threat (00:29:02)
Fast takeoff scenario (00:30:54)
Dangers of autonomous replication and adaptation (00:34:39)
Difficulty in recognizing warning shots (00:40:00)
Defining red lines for AI development (00:42:44)
Effective education strategies (00:46:36)
Impact on computer science students (00:51:27)
AI safety summit in Paris (00:53:53)
The summit and AI safety report (00:55:02)
Potential impact of key figures (00:56:24)
Political influence on AI risk (00:57:32)
Accelerationism in political context (01:00:37)
Optimism and hope for the future (01:04:25)
Chances of a meaningful pause (01:08:43)

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jun 24, 2024 • 5min
Episode #34 TRAILER - “The Threat of AI Autonomous Replication” For Humanity: An AI Risk Podcast
In the Episode #34 trailer, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director of the Centre pour la sécurité de l'IA (France's Center for AI Security). Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed because the public and the journalist class are uneducated on AI risk, and the potential for a disastrous Yann LeCun-ification of the upcoming February 2025 Paris AI Safety Summit.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

TIMESTAMPS:
The exponential growth of AI (00:00:00) Discussion on the potential exponential growth of AI and its implications for the future.
The mass of AI systems as an existential threat (00:01:05) Exploring the potential threat posed by the sheer mass of AI systems and its impact on existential risk.
The concept of warning shots (00:01:32) Elaboration on the concept of warning shots in the context of AI safety and the need for public understanding.
The importance of advocacy and public understanding (00:02:30) The significance of advocacy, public awareness, and the role of the safety community in creating and recognizing warning shots.
OpenAI's superalignment team resignation (00:04:00) Analysis of the resignation of OpenAI's superalignment team and its potential significance as a warning shot.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jun 19, 2024 • 1h 23min
Episode #33 - “Dad vs. AGI” For Humanity: An AI Risk Podcast
In Episode #33, host John Sherman talks with Dustin Burham, who is a dad, an anesthetist, an AI risk realist, and a podcast host himself, about being a father while also understanding the realities of AI risk and the precarious moment we are living in.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

RESOURCES:
Check out Dustin Burham's fatherhood podcast: https://www.youtube.com/@thepresentfathers
BUY STEPHEN HANSON'S BEAUTIFUL BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

TIMESTAMPS:
The threat of AI to humanity (00:00:22)
Pope Francis's address at the G7 summit on AI risk (00:02:31)
Starting a dialogue on tough subjects (00:05:44)
The challenges and joys of fatherhood (00:10:47)
Concerns and excitement about AI technology (00:15:09)
The Present Fathers Podcast (00:16:58)
Personal experiences of fatherhood (00:18:56)
The impact of AI risk on future generations (00:21:11)
Elon Musk's Concerns (00:21:57)
Impact of Denial (00:23:40)
Potential AI Risks (00:24:27)
Psychopathy and Decision-Making (00:26:28)
Personal and Societal Impact (00:28:46)
AI Risk Awareness (00:30:12)
Ethical Considerations (00:31:46)
AI Technology and Human Impact (00:34:28)
Exponential Growth and Risk (00:36:06)
Emotion and Empathy in AI (00:37:58)
Antinatalism and Ethical Debate (00:41:04)
The antinatalist ideas (00:42:20)
Psychopathic tendencies among CEOs and decision making (00:43:27)
The power of social media in influencing change (00:46:12)
The unprecedented threat of human extinction from AI (00:49:03)
Teaching large language models to love humanity (00:50:11)
Proposed measures for AI regulation (00:59:27)
China's approach to AI safety regulations (01:01:12)
The threat of open-sourcing AI (01:02:50)
Protecting children from AI temptations (01:04:26)
Challenges of policing AI-generated content (01:07:06)
Hope for the future and engaging in AI safety (01:10:33)
Performance by YG Marley and Lauryn Hill (01:14:26)
Final thoughts and call to action (01:22:28)

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jun 17, 2024 • 4min
Episode #33 TRAILER - “Dad vs. AGI” For Humanity: An AI Risk Podcast
In the Episode #33 trailer, host John Sherman talks with Dustin Burham, who is a dad, an anesthetist, an AI risk realist, and a podcast host himself, about being a father while also understanding the realities of AI risk and the precarious moment we are living in.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

TIMESTAMPS:
Parental Concerns (00:00:00) A parent expresses worries about AI risks and emphasizes the need for cautious progress.
Risk Acceptance Threshold (00:00:50) The speaker discusses the acceptability of doom and risk in AI and robotics, drawing parallels with medical risk assessment.
Zero Risk Standard (00:01:34) The speaker emphasizes the medical industry's zero-risk approach and contrasts it with the AI industry's acceptance of potential doom.
Human Denial and Nuclear Brinksmanship (00:02:25) The power of denial and its impact on decision-making, including the tendency to ignore catastrophic possibilities.
Doom Prediction (00:03:17) The speakers express high levels of concern about potential doom in the future, with a 98% doom prediction for 50 years.

RESOURCES:
Check out Dustin Burham's fatherhood podcast: https://www.youtube.com/@thepresentfathers
BUY STEPHEN HANSON'S BEAUTIFUL BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jun 12, 2024 • 1h 37min
Episode #32 - “Humans+AIs=Harmony?” For Humanity: An AI Risk Podcast
Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef? (FULL INTERVIEW STARTS AT 00:23:21)

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

In Episode #32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk-related projects. He believes it is possible for humans and AGIs to co-exist in mutual symbiosis.

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
NYT: OpenAI Insiders Warn of a 'Reckless' Race for Dominance
https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html?unlocked_article_code=1.xE0._mTr.aNO4f_hEp2J4&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb
Dwarkesh Patel Interviews Another Whistleblower
Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History
Roman Yampolskiy on Lex Fridman
Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431
Gladstone AI on Joe Rogan
Joe Rogan Experience #2156 - Jeremie & Edouard Harris
Peter Jensen's Videos:
HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25)
WHY do we want AI? For our Humanity (1:00)
WHAT is the BIG Problem? Wanted: SafeAI Forever (3:00)
FIRST do no harm. (Safe AI Blog)
DECK. On For Humanity Podcast "Just the FACTS, please. WHY? WHAT? HOW?" (flip book): https://discover.safeaiforever.com/
JOIN THE FIGHT, help Pause AI!!!! Pause AI
Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22 Word Statement from Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes, https://twitter.com/AISafetyMemes

TIMESTAMPS:
The release of products that are safe (00:00:00)
Breakthroughs in AI research (00:00:41)
OpenAI whistleblower concerns (00:01:17)
Roman Yampolskiy's appearance on Lex Fridman podcast (00:02:27)
The capabilities and risks of AI systems (00:03:35)
Interview with Gladstone AI founders on Joe Rogan podcast (00:08:29)
OpenAI whistleblower's interview on Hard Fork podcast (00:14:08)
Peter Jensen's work on AI risk and media communication (00:20:01)
The interview with Peter Jensen (00:22:49)
Mutualistic Symbiosis and AI Containment (00:31:30)
The Probability of Catastrophic Outcome from AI (00:33:48)
The AI Safety Institute and Regulatory Efforts (00:42:18)
Regulatory Compliance and the Need for Safety (00:47:12)
The hard compute cap and hardware adjustment (00:47:47)
Physical containment and regulatory oversight (00:48:29)
Viewing the issue as a big business regulatory issue vs. a national security issue (00:50:18)
Funding and science for AI safety (00:49:59)
OpenAI's power allocation and ethical concerns (00:51:44)
Concerns about AI's impact on employment and societal well-being (00:53:12)
Parental instinct and the urgency of AI safety (00:56:32)

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

Jun 10, 2024 • 3min
Episode #32 TRAILER - “Humans+AIs=Harmony?” For Humanity: An AI Risk Podcast
Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef?

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

In Episode #32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk-related projects. He believes it is possible for humans and AGIs to co-exist in mutual symbiosis.

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within two years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and explain what you can do to help save humanity.

Peter Jensen's Video: HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25) https://www.youtube.com/watch?v=8yrIfCQBgdE

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com