
For Humanity: An AI Safety Podcast

Latest episodes

Jul 29, 2024 • 4min

Episode #39 Trailer “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast

In Episode #39 Trailer, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation addresses the shifting political landscape around AI-risk legislation in America in July 2024.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22
For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
Republican Party's AI Regulation Stance (00:00:41) The GOP platform aims to eliminate existing AI regulations, reflecting a shift in political dynamics.
Bipartisanship in AI Issues (00:01:21) AI is initially a bipartisan concern, but quickly becomes a partisan issue amid political maneuvering.
Tech Companies' Frustration with Legislation (00:01:55) Major tech companies express dissatisfaction with California's AI bills, indicating a push for regulatory rollback.
Public Sentiment vs. Party Platform (00:02:42) Discrepancy between the GOP platform on AI and average voter opinions, highlighting a disconnect in priorities.
Polling on AI Regulation (00:03:26) Polling shows strong public support for AI regulation, raising questions about political implications and citizen engagement.
Jul 24, 2024 • 1h 20min

Episode #38 “France vs. AGI” For Humanity: An AI Risk Podcast

In Episode #38, host John Sherman talks with Maxime Fournes, founder of Pause AI France. With the third AI "Safety" Summit coming up in Paris in February 2025, we examine France's role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun's influence in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI's future.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Max Winga's "A Stark Warning About Extinction": https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
**Concerns about AI Risks in France (00:00:00)**
**Optimism in AI Solutions (00:01:15)**
**Introduction to the Episode (00:01:51)**
**Max Winga's Powerful Clip (00:02:29)**
**AI Safety Summit Context (00:04:20)**
**Personal Journey into AI Safety (00:07:02)**
**Commitment to AI Risk Work (00:21:33)**
**France's AI Sacrifice (00:21:49)**
**Impact of Efforts (00:21:54)**
**Existential Risks and Choices (00:22:12)**
**Underestimating Impact (00:22:25)**
**Researching AI Risks (00:22:34)**
**Weak Counterarguments (00:23:14)**
**Existential Dread Theory (00:23:56)**
**Global Awareness of AI Risks (00:24:16)**
**France's AI Leadership Role (00:25:09)**
**AI Policy in France (00:26:17)**
**Influential Figures in AI (00:27:16)**
**EU Regulation Sabotage (00:28:18)**
**Committee's Risk Perception (00:30:24)**
**Concerns about France's AI Development (00:32:03)**
**International AI Treaties (00:32:36)**
**Sabotaging AI Safety Summit (00:33:26)**
**Quality of France's AI Report (00:34:19)**
**Misleading Risk Analyses (00:36:06)**
**Comparison to Historical Innovations (00:39:33)**
**Rhetoric and Misinformation (00:40:06)**
**Existential Fear and Rationality (00:41:08)**
**Position of AI Leaders (00:42:38)**
**Challenges of Volunteer Management (00:46:54)**
Jul 22, 2024 • 7min

Episode #38 TRAILER “France vs. AGI” For Humanity: An AI Risk Podcast

In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, founder of Pause AI France. With the third AI "Safety" Summit coming up in Paris in February 2025, we examine France's role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun's influence in French society and government? And would France even join an international treaty?

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
Trust in AI Awareness in France (00:00:00) Discussion on France being uninformed about AI risks compared to other countries with AI labs.
International Treaty Concerns (00:00:46) Speculation on France's reluctance to sign an international AI safety treaty.
Personal Reflections on AI Risks (00:00:57) The speaker reflects on the dilemma of believing in AI risks and choosing between action or enjoyment.
Underestimating Impact (00:01:13) The tendency of people to underestimate their potential impact on global issues.
Researching AI Risks (00:01:50) The speaker shares their journey of researching AI risks and finding weak counterarguments.
Critique of Counterarguments (00:02:23) Discussion on the absurdity of opposing views on AI risks and societal implications.
Existential Dread and Rationality (00:02:42) Connection between existential fear and irrationality in discussions about AI safety.
Shift in AI Safety Focus (00:03:17) Concerns about the diminishing focus on AI safety in upcoming summits.
Quality of AI Strategy Report (00:04:11) Criticism of a recent French AI strategy report and plans to respond critically.
Optimism about AI Awareness (00:05:04) Belief that understanding among key individuals can resolve AI safety issues.
Power Dynamics in AI Decision-Making (00:05:38) Discussion on the disproportionate influence of a small group on global AI decisions.
Cultural Perception of Impact (00:06:01) Reflection on societal beliefs that inhibit individual agency in effecting change.
Jul 17, 2024 • 1h 21min

Episode #37 “Christianity vs. AGI” For Humanity: An AI Risk Podcast

In Episode #37, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection of Christianity and AGI, exploring questions like: What is the role of faith in a world where no one works? And could religions unite to oppose AGI?

Some of Peter Biles' related writing:
https://mindmatters.ai/2024/07/ai-is-becoming-a-mass-tool-of-persuasion/
https://mindmatters.ai/2022/10/technology-as-the-new-god-before-whom-all-others-bow/
https://substack.com/@peterbiles

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: forhumanitypodcast@gmail.com

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes
Matt Andersen - "Magnolia" (JJ Cale Cover), LIVE at SiriusXM
JJ Cale - "Magnolia", Flagstaff, AZ, 2004

TIMESTAMPS:
**Christianity versus AGI (00:00:39)**
**Concerns about AI (00:02:45)**
**Christianity and Technology (00:05:30)**
**Interview with Peter Biles (00:11:09)**
**Effects of Social Media (00:18:03)**
**Religious Perspective on AI (00:23:57)**
**The implications of AI on Christian faith (00:24:05)**
**The Tower of Babel metaphor (00:25:09)**
**The role of humans as sub-creators (00:27:23)**
**The impact of AI on human culture and society (00:30:33)**
**The limitations of AI in storytelling and human connection (00:32:33)**
**The intersection of faith and AI in a future world (00:41:35)**
**Religious Leaders and AI (00:45:34)**
**Human Exceptionalism (00:46:51)**
**Interfaith Dialogue and AI (00:50:26)**
**Religion and Abundance (00:53:42)**
**Apocalyptic Language and AI (00:58:26)**
**Hope in Human-Oriented Culture (01:04:32)**
**Worshipping AI (01:07:55)**
**Religion and AI (01:08:17)**
**Celebration of Life (01:09:49)**
Jul 15, 2024 • 9min

Episode #37 Trailer “Christianity vs. AGI” For Humanity: An AI Risk Podcast

In Episode #37 Trailer, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection of Christianity and AGI, exploring questions like: What is the role of faith in a world where no one works? And could religions unite to oppose AGI?

Some of Peter Biles' related writing:
https://mindmatters.ai/2024/07/ai-is-becoming-a-mass-tool-of-persuasion/
https://mindmatters.ai/2022/10/technology-as-the-new-god-before-whom-all-others-bow/
https://substack.com/@peterbiles

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
The impact of technology on human dignity (00:00:00) The speaker discusses the potential negative impact of technology on human dignity and the divine image.
The embodiment of souls and human dignity (00:01:00) The speaker emphasizes the spiritual nature of human beings and the importance of human dignity, regardless of religion or ethnicity.
The concept of a "sand god" and technological superiority (00:02:09) The conversation explores the cultural and religious implications of creating an intelligence superior to humans and the reference to a "sand god."
The Tower of Babel and technology (00:03:25) The speaker references the story of the Tower of Babel from the book of Genesis and its metaphorical implications for technological advancements and human hubris.
The impact of AI on communication and storytelling (00:05:26) The discussion delves into the impersonal nature of AI in communication and storytelling, highlighting the absence of human intention and soul.
Human nature, materialism, and work (00:07:38) The conversation explores the deeper understanding of human nature, the restlessness of humans, and the significance of work and creativity.
Jul 10, 2024 • 1h 25min

Episode #36 “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast

In Episode #36, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of the reality of AI risk by the US government in any form. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and has been broken into two shows; this is the second of the two.

Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
**The whistleblower's concerns (00:00:00)**
**Introduction to the podcast (00:01:09)**
**The urgency of addressing AI risk (00:02:18)**
**The potential consequences of falling behind in AI (00:04:36)**
**Transitioning to working on AI risk (00:06:33)**
**Engagement with the State Department (00:08:07)**
**Project assessment and public visibility (00:10:10)**
**Motivation for taking on the detective work (00:13:16)**
**Alignment with the government's safety culture (00:17:03)**
**Potential government oversight of AI labs (00:20:50)**
**The whistleblowers' concerns (00:21:52)**
**Shifting control to the government (00:22:47)**
**Elite group within the government (00:24:12)**
**Government competence and allocation of resources (00:25:34)**
**Political level and tech expertise (00:27:58)**
**Challenges in government engagement (00:29:41)**
**State Department's engagement and assessment (00:31:33)**
**Recognition of government competence (00:34:36)**
**Engagement with frontier labs (00:35:04)**
**Whistleblower insights and concerns (00:37:33)**
**Whistleblower motivations (00:41:58)**
**Engagements with AI Labs (00:42:54)**
**Emotional Impact of the Work (00:43:49)**
**Workshop with Government Officials (00:44:46)**
**Challenges in Policy Implementation (00:45:46)**
**Expertise and Insights (00:49:11)**
**Future Engagement with US Government (00:50:51)**
**Flexibility of Private Sector Entity (00:52:57)**
**Impact on Whistleblowing Culture (00:55:23)**
**Key Recommendations (00:57:03)**
**Security and Governance of AI Technology (01:00:11)**
**Obstacles and Timing in Hardware Development (01:04:26)**
**The AI Lab Security Measures (01:04:50)**
**Nvidia's Stance on Regulations (01:05:44)**
**Export Controls and Governance Failures (01:07:26)**
**Concerns about AGI and Alignment (01:13:16)**
**Implications for Future Generations (01:16:33)**
**Personal Transformation and Mental Health (01:19:23)**
**Starting a Nonprofit for AI Risk Awareness (01:21:51)**
Jul 8, 2024 • 6min

Episode #36 Trailer “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity: An AI Risk Podcast

In Episode #36 Trailer, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of the reality of AI risk by the US government in any form. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and has been broken into two shows; this is the second of the two.

Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
The assignment from the State Department (00:00:00) Discussion about the task given by the State Department team regarding the assessment of safety and security in frontier AI and advanced AI systems.
Transition to detective work (00:00:30) The transition to a detective-like approach in gathering information and engaging with whistleblowers and clandestine meetings.
Assessment of the AI safety community (00:01:05) A critique of the lack of action orientation and proactive approach in the AI safety community.
Engagement with the Department of Defense (DoD) (00:02:57) Discussion about the engagement with the DoD, its existing safety culture, and the organizations involved in testing and evaluations.
Shifting control to the government (00:03:54) Exploration of the need to shift control to the government and regulatory level for effective steering of the development of AI technology.
Concerns about weaponization and loss of control (00:04:45) A discussion about concerns regarding weaponization and loss of control in AI labs and the need for more ambitious recommendations.
Jul 3, 2024 • 1h 1min

Episode #35 “The AI Risk Investigators: Inside Gladstone AI, Part 1” For Humanity: An AI Risk Podcast

In Episode #35, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of the reality of AI risk by the US government in any form. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows.

Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE!! https://www.youtube.com/@DoomDebates

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
Sincerity and Sam Altman (00:00:00) Discussion on the perceived sincerity of Sam Altman and his actions, including insights into his character and motivations.
Introduction to Gladstone AI (00:01:14) Introduction to Gladstone AI, its involvement with the US government on AI risk, and the purpose of the podcast episode.
Doom Debates on YouTube (00:02:17) Promotion of the "Doom Debates" YouTube channel and its content, featuring discussions on AI doom and various perspectives on the topic.
YC Experience and Sincerity in Startups (00:08:13) Insight into the Y Combinator (YC) experience and the emphasis on sincerity in startups, with personal experiences and observations shared.
OpenAI and Sincerity (00:11:51) Exploration of sincerity in relation to OpenAI, including evaluations of the company's mission, actions, and the challenges it faces in the AI landscape.
The scaling story (00:21:33) Discussion of the scaling story related to AI capabilities and the impact of increasing data, processing power, and training models.
The call about GPT-3 (00:22:29) Eduard Harris receiving a call about the scaling story and the significance of GPT-3's capabilities, leading to a decision to focus on AI development.
Transition from Y Combinator (00:24:42) Jeremie and Eduard Harris leaving their previous company and transitioning from Y Combinator to focus on AI development.
Security concerns and exfiltration (00:31:35) Discussion about the security vulnerabilities and potential exfiltration of AI models from top labs, highlighting the inadequacy of security measures.
Government intervention and security (00:38:18) Exploration of the potential for government involvement in providing security assets to protect AI technology from exfiltration, and the need for a pause in development until labs are secure.
Resource reallocation for safety and security (00:40:03) Discussion about the need to reallocate resources for safety, security, and alignment technology to ensure the responsible development of AI.
OpenAI's computational resource allocation (00:42:10) Concerns about OpenAI's failure to allocate computational resources for safety and alignment efforts, as well as the departure of a safety-minded board member.
China's Strategic Moves (00:43:07) Discussion on potential aggressive actions by China to prevent a permanent disadvantage in AI technology.
China's Sincerity in AI Safety (00:44:29) Debate on the sincerity of China's commitment to AI safety and the influence of the CCP.
Taiwan Semiconductor Manufacturing Company (TSMC) (00:47:47) Explanation of TSMC's role in fabricating advanced semiconductor chips and its impact on the AI race.
US and China's Power Constraints (00:51:30) Comparison of the constraints faced by the US and China in terms of advanced chips and grid power.
Nuclear Power and Renewable Energy (00:52:23) Discussion on the power sources being pursued by China and the US to address their respective constraints.
Future Scenarios (00:56:20) Exploration of potential outcomes if China overtakes the US in AI technology.
Jul 1, 2024 • 5min

Episode #35 TRAILER “The AI Risk Investigators: Inside Gladstone AI, Part 1” For Humanity: An AI Risk Podcast

In Episode #35 TRAILER, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of the reality of AI risk by the US government in any form. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows.

TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
Sam Altman's intensity (00:00:10) Sam Altman's intense demeanor and competence, as observed by the speaker.
Security risks of superintelligent AI (00:01:02) Concerns about the potential loss of control over superintelligent systems and the security vulnerabilities in top AI labs.
Silicon Valley's security hubris (00:02:04) Critique of Silicon Valley's overconfidence in technology and lack of security measures, particularly in comparison to nation-state level cyber threats.
China's AI capabilities (00:02:36) Discussion about the security deficiency in the United States and the potential for China to have better AI capabilities due to security leaks.
Foreign actors' capacity for exfiltration (00:03:08) Foreign actors' incentives and capacity to exfiltrate frontier models, leading to the need to secure infrastructure before scaling and accelerating AI capabilities.
Jun 26, 2024 • 1h 17min

Episode #34 - “The Threat of AI Autonomous Replication” For Humanity: An AI Risk Podcast

In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director of the Centre pour la sécurité de l'IA. Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed because the public and the journalist class are uneducated about AI risk, and the potential for a disastrous Yann LeCun-ification of the upcoming February 2025 Paris AI Safety Summit.

Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it's not opinion either. This is a long-form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
YouTube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:
Charbel-Raphaël Segerie's LessWrong writing, with much more on many of the topics we covered: https://www.lesswrong.com/users/charbel-raphael
BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK!!! https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
JOIN THE FIGHT, help Pause AI!!!! Join the Pause AI Weekly Discord, Thursdays at 2pm EST: https://discord.com/invite/pVMWjddaW7
22-Word Statement from the Center for AI Safety (Statement on AI Risk | CAIS): https://www.safe.ai/work/statement-on-ai-risk
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

TIMESTAMPS:
**The threat of AI autonomous replication (00:00:43)**
**Introduction to France's Center for AI Security (00:01:23)**
**Challenges in AI risk awareness in France (00:09:36)**
**The influence of Yann LeCun on AI risk perception in France (00:12:53)**
**Autonomous replication and adaptation of AI (00:15:25)**
**The potential impact of autonomous replication (00:27:24)**
**The dead internet scenario (00:27:38)**
**The potential existential threat (00:29:02)**
**Fast takeoff scenario (00:30:54)**
**Dangers of autonomous replication and adaptation (00:34:39)**
**Difficulty in recognizing warning shots (00:40:00)**
**Defining red lines for AI development (00:42:44)**
**Effective education strategies (00:46:36)**
**Impact on computer science students (00:51:27)**
**AI safety summit in Paris (00:53:53)**
**The summit and AI safety report (00:55:02)**
**Potential impact of key figures (00:56:24)**
**Political influence on AI risk (00:57:32)**
**Accelerationism in political context (01:00:37)**
**Optimism and hope for the future (01:04:25)**
**Chances of a meaningful pause (01:08:43)**
