For Humanity: An AI Safety Podcast

John Sherman
Apr 24, 2024 • 1h 51min

Episode #25 - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast

Episode #25 - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast

FULL INTERVIEW STARTS AT (00:08:20)

DONATE HERE TO HELP PROMOTE THIS SHOW: https://www.paypal.com/paypalme/forhumanitypodcast

In episode #25, host John Sherman and Dr. Émile Torres explore the concept of humanity’s future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out their view that the AI safety movement has it all wrong on existential threat. Concerns are voiced about the potential risks of advanced AI, questioning the effectiveness of AI safety research and the true intentions of companies like OpenAI. Dr. Torres supports a full “stop AI” movement, doubting the benefits of pursuing such powerful AI technologies and highlighting the potential for catastrophic outcomes if AI systems become misaligned with human values. The discussion also touches on the urgency of solving AI control problems to avoid human extinction.

Émile P. Torres is a philosopher whose research focuses on existential threats to civilization and humanity. They have published widely in the popular press and scholarly journals, with articles appearing in the Washington Post, Aeon, Bulletin of the Atomic Scientists, Metaphilosophy, Inquiry, Erkenntnis, and Futures.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
The definition of human extinction and AI Safety Podcast introduction (00:00:00)
Paul Christiano’s perspective on AI risks and debate on AI safety (00:03:51)
Interview with Dr. Émile Torres on transhumanism, AI safety, and historical perspectives (00:08:17)
Challenges to AI safety concerns and the speculative nature of AI arguments (00:29:13)
AI’s potential catastrophic risks and comparison with climate change (00:47:49)
Defining intelligence, AGI, and unintended consequences of AI (00:56:13)
Catastrophic risks of advanced AI and perspectives on AI safety (01:06:34)
Inconsistencies in AI predictions and the threats of advanced AI (01:15:19)
Curiosity in AGI and the ethical implications of building superintelligent systems (01:22:49)
Challenges of discussing AI safety and effective tools to convince the public (01:27:26)
Tangible harms of AI and hopeful perspectives on the future (01:37:00)
Parental instincts and the need for self-sacrifice in AI risk action (01:43:53)

RESOURCES:
THE TWO MAIN PAPERS ÉMILE LOOKS TO IN MAKING THEIR CASE:
Against the Singularity Hypothesis, by David Thorstad: https://philpapers.org/archive/THOATS-5.pdf
Challenges to the Omohundro—Bostrom Framework for AI Motivations, by Olle Häggström: https://www.math.chalmers.se/~olleh/ChallengesOBframeworkDeanonymized.pdf
Paul Christiano on Bankless: How We Prevent the AI’s from Killing Us with Paul Christiano
Émile Torres TruthDig articles: https://www.truthdig.com/author/emile-p-torres/
Émile Torres’ latest book, Human Extinction (Routledge Studies in the History of Science, Technology and Medicine): https://www.amazon.com/Human-Extinction-Annihilation-Routledge-Technology/dp/1032159065
Best Account on Twitter: AI Notkilleveryoneism Memes https://twitter.com/AISafetyMemes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Apr 22, 2024 • 3min

Episode #25 TRAILER - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast

DONATE HERE TO HELP PROMOTE THIS SHOW

Episode #25 TRAILER - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast

In the episode #25 TRAILER, host John Sherman and Dr. Émile Torres explore the concept of humanity’s future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out their view that the AI safety movement has it all wrong on existential threat. Concerns are voiced about the potential risks of advanced AI, questioning the effectiveness of AI safety research and the true intentions of companies like OpenAI. Dr. Torres supports a full “stop AI” movement, doubting the benefits of pursuing such powerful AI technologies and highlighting the potential for catastrophic outcomes if AI systems become misaligned with human values. The discussion also touches on the urgency of solving AI control problems to avoid human extinction.

Émile P. Torres is a philosopher whose research focuses on existential threats to civilization and humanity. They have published widely in the popular press and scholarly journals, with articles appearing in the Washington Post, Aeon, Bulletin of the Atomic Scientists, Metaphilosophy, Inquiry, Erkenntnis, and Futures.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
Defining Humanity and Future Descendants (00:00:00) Discussion on the concept of humanity, future descendants, and the implications of artificial general intelligence (AGI) and machine superintelligence.
Concerns about AI Safety Research (00:01:11) Expressing concerns about the approach of AI safety research and skepticism about the intentions of companies like OpenAI.
Questioning the Purpose of Building Advanced AI Systems (00:02:23) Expressing skepticism about the purpose and potential benefits of building advanced AI systems and being sympathetic to the “stop AI” movement.

RESOURCES:
Émile Torres TruthDig articles: https://www.truthdig.com/author/emile-p-torres/
Émile Torres’ latest book: Human Extinction (Routledge Studies in the History of Science, Technology and Medicine), 1st Edition: https://www.amazon.com/Human-Extinction-Annihilation-Routledge-Technology/dp/1032159065
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Apr 17, 2024 • 1h 22min

Episode #24 - “YOU can help save the world from AI Doom” For Humanity: An AI Safety Podcast

In episode #24, host John Sherman and Nonlinear Co-founder Kat Woods discuss the critical need to prioritize AI safety in the face of developing superintelligent AI. Kat shares her personal transformation from being a skeptic to becoming an advocate for AI safety. They explore the idea that AI could pose a near-term threat rather than just a long-term concern, discuss the importance of prioritizing AI safety over other philanthropic endeavors, and stress the need for talented individuals to work on this issue. Kat highlights potential ways in which AI could harm humanity, such as creating super viruses or starting a nuclear war. They address common misconceptions, including the belief that AI will need humans or that it will be human-like. Overall, the conversation emphasizes the urgency of addressing AI safety and the need for greater awareness and action.

The conversation delves into the dangers of AI and the need for AI safety. The speakers discuss the potential risks of creating superintelligent AI that could harm humanity. They highlight the ethical concerns of creating AI that could suffer and the moral responsibility we have toward these potential beings. They also discuss the importance of funding AI safety research and the need for better regulation. The conversation ends on a hopeful note, with the speakers expressing optimism about the growing awareness and concern regarding AI safety.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
AI Safety Urgency (00:00:00) Emphasizing the immediate need to focus on AI safety.
Superintelligent AI World (00:00:50) Considering the impact of AI smarter than humans.
AI Safety Charities (00:02:37) The necessity for more AI safety-focused charities.
Personal AI Safety Advocacy Journey (00:10:10) Kat Woods’ transformation into an AI safety advocate.
AI Risk Work Encouragement (00:16:03) Urging skilled individuals to tackle AI risks.
AI Safety’s Global Impact (00:17:06) AI safety’s pivotal role in global challenges.
AI Safety Prioritization Struggles (00:18:02) The difficulty of making AI safety a priority.
Wealthy Individuals and AI Safety (00:19:55) Challenges for the wealthy in focusing on AI safety.
Superintelligent AI Threats (00:23:12) Potential global dangers posed by superintelligent AI.
Limits of Imagining Superintelligent AI (00:28:02) The struggle to fully grasp superintelligent AI’s capabilities.
AI Containment Risks (00:32:19) The problem of effectively containing AI.
AI’s Human-Like Risks (00:33:53) Risks of AI with human-like qualities.
AI Dangers (00:34:20) Potential ethical and safety risks of AI.
AI Ethical Concerns (00:37:03) Ethical considerations in AI development.
Nonlinear’s Role in AI Safety (00:39:41) Nonlinear’s contributions to AI safety work.
AI Safety Donations (00:41:53) Guidance on supporting AI safety financially.
Effective Altruism and AI Safety (00:49:43) The relationship between effective altruism and AI safety.
AI Safety Complexity (00:52:12) The intricate nature of AI safety issues.
AI Superintelligence Urgency (00:53:52) The critical timing and power of AI superintelligence.
AI Safety Work Perception (00:56:06) Changing the image of AI safety efforts.
AI Safety and Government Regulation (00:59:23) The potential for regulatory influence on AI safety.
Entertainment’s AI Safety Role (01:04:24) How entertainment can promote AI safety awareness.
AI Safety Awareness Progress (01:05:37) Growing recognition and response to AI safety.
AI Safety Advocacy Funding (01:08:06) The importance of financial support for AI safety advocacy.
Effective Altruists and Rationalists Views (01:10:22) The stance of effective altruists and rationalists on AI safety.
AI Risk Marketing (01:11:46) The case for using marketing to highlight AI risks.

RESOURCES:
Nonlinear: https://www.nonlinear.org/
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Apr 15, 2024 • 5min

Episode #24 TRAILER - “YOU can help save the world from AI Doom” For Humanity: An AI Safety Podcast

In the episode #24 TRAILER, host John Sherman and Nonlinear Co-founder Kat Woods discuss the critical need to prioritize AI safety in the face of developing superintelligent AI. She compares the challenge to the Titanic’s course toward an iceberg, stressing the difficulty of convincing people of the urgency. Woods argues that AI safety is a matter of both altruism and self-preservation. She uses human-animal relations to illustrate the potential consequences of a disparity in intelligence between humans and AI. She notes a positive shift in the perception of AI risks, from fringe to mainstream concern, and shares a personal anecdote from her time in Africa, which informed her views on the universal aversion to death and the importance of preventing harm. Woods’s realization of the increasing probability of near-term AI risks further emphasizes the immediate need for action on AI safety.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
Nonlinear: https://www.nonlinear.org/
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Apr 10, 2024 • 2h 2min

Episode #23 - “AI Acceleration Debate” For Humanity: An AI Safety Podcast

AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism, existential risks, and AI alignment with human values. They discuss the philosophy of accelerationism, human conditioning's influence on AI understanding, and the potential consequences of AI safety efforts. The podcast delves into the existential threat of human extinction from AGI, exploring the worst-case scenario of AI killing all humans.
Apr 8, 2024 • 5min

Episode #23 TRAILER - “AI Acceleration Debate” For Humanity: An AI Safety Podcast

Suicide or Salvation? In the episode #23 TRAILER, AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism and the existential risks and benefits of AI, questioning the AI safety movement and discussing the concept of AI as humanity’s child. They ponder whether AI should align with human values and the potential consequences of such alignment. Paul suggests that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the philosophy of accelerationism and the influence of human conditioning on our understanding of AI.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
Is AI an existential threat to humanity? (00:00:00) Debate on the potential risks of AI and its impact on humanity.
The AI safety movement (00:00:42) Discussion on the perception of AI safety as a religion and the philosophy of accelerationism.
Human conditioning and perspectives on AI (00:02:01) Exploration of how human conditioning shapes perspectives on AI and the concept of AGI as a human creation.
Aligning AI and human values (00:04:24) Debate on the dangers of aligning AI with human ideologies and the potential implications for humanity.

RESOURCES:
Paul’s YouTube channel: Accel News Network
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Apr 3, 2024 • 39min

Episode #22 - “Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety Podcast

In Episode #22, host John Sherman critically examines Sam Altman’s role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman’s lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
Vanity Fair Gushes in 2015
Business Insider: Sam Altman’s Act May Be Wearing Thin
Oprah and Maya Angelou
Best Account on Twitter: AI Notkilleveryoneism Memes
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS

Timestamps:
The man who holds the power (00:00:00) Discussion about Sam Altman’s power and its implications for humanity.
The safety crisis (00:01:11) Concerns about safety in AI technology and the need for protection against potential risks.
Sam Altman’s decisions and vision (00:02:24) Examining Sam Altman’s role, decisions, and vision for AI technology and its impact on society.
Sam Altman’s actions and accountability (00:04:14) Critique of Sam Altman’s actions and accountability regarding the release of AI technology.
Reflections on getting fired (00:11:01) Sam Altman’s reflections and emotions after being fired by OpenAI’s board.
Silencing of concerns (00:19:25) Discussion about the silencing of individuals concerned about AI safety, particularly Ilya Sutskever.
Relationship with Elon Musk (00:20:08) Sam Altman’s sentiments and hopes regarding his relationship with Elon Musk amidst tension and legal matters.
Legal implications of AI technology (00:22:23) Debate on the fairness of training AI under copyright law and its legal implications.
The value of data (00:22:32) Sam Altman discusses the compensation for valuable data and its use.
Safety concerns (00:23:41) Discussion of the process for ensuring safety in AI technology.
Broad definition of safety (00:24:24) Exploring the various potential harms and impacts of AI, including technical, societal, and economic aspects.
Lack of trust and control (00:27:09) Sam Altman’s admission about the power and control over AGI and the need for governance.
Public apathy towards AI risk (00:31:49) Addressing the common reasons for public inaction regarding AI risk awareness.
Celebration of life (00:34:20) A personal reflection on the beauty of music and family, with a message about the celebration of life.
Conclusion (00:38:25) Closing remarks and a preview of the next episode.
Apr 1, 2024 • 2min

“Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety Podcast Episode #22 TRAILER

In the episode #22 TRAILER, host John Sherman critically examines Sam Altman’s role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman’s lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: / discord
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Mar 27, 2024 • 1h 14min

“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21

“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21

Interview starts at 20:10. Some highlights of John’s news career start at 9:14.

In Episode #21, “Why AI Killing You Isn’t On The News,” host John Sherman and WJZY-TV News Director Casey Clark explore the significant underreporting of AI’s existential risks in the media. They recount a disturbing incident where AI bots infiltrated a city council meeting, spewing hateful messages. The conversation delves into the challenges of conveying the complexities of artificial general intelligence to the public and the media’s struggle to present such abstract concepts compellingly. They predict job losses as the first major AI-related news story to break through and speculate on the future of AI-generated news anchors, emphasizing the need for human reporters in the field.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: / discord
See more of John’s talk in Philly: https://x.com/ForHumanityPod/status/1772449876388765831?s=20
FOLLOW DAVID SHAPIRO ON YOUTUBE! David Shapiro - YouTube
22 Word Statement from Center for AI Safety: Statement on AI Risk | CAIS
Mar 25, 2024 • 4min

“Why AI Killing You Isn’t On The News” TRAILER For Humanity: An AI Safety Podcast Episode #21

In the Episode #21 TRAILER for “Why AI Killing You Isn’t On The News,” John Sherman interviews WJZY-TV News Director Casey Clark about TV news coverage of AI existential risk.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:
JOIN THE FIGHT, help Pause AI!!!!
Pause AI
Join the Pause AI Weekly Discord, Thursdays at 3pm EST: / discord
