

Don't Worry About the Vase Podcast
Podcast for Zvi Mowshowitz's blog, Don't Worry About the Vase (https://thezvi.substack.com/), hosted at dwatvpodcast.substack.com.
Episodes

Nov 8, 2025 • 4h 57min
From the Archives - Immoral Mazes
Dive into the intricate world of moral mazes and discover how Moloch's challenges contrast with hopeful forces like Elua. Explore the theory of perfect competition and why market ideals often crumble in reality. Delve into the struggles of middle managers, the effects of corporate optimization, and the realities of negotiating car purchases. Learn how large organizations risk becoming moral mazes and uncover potential ways to escape them while preserving personal values.

Nov 7, 2025 • 60min
On Sam Altman's Second Conversation with Tyler Cowen
Explore the future of AI with discussions on how GPT-6 could revolutionize organizations and the potential for AI-led teams in just a few years. Delve into the intricacies of AI monetization, including trust issues and ethical risks. Listen to insights on the necessity of government backstops for AI companies and the challenges of regulating autonomous AI agents. Sam Altman candidly shares his views on health, alien life, and the future of cultural expression through AI, fueling debates on copyright and the subtle risks of AI persuasion.

Nov 6, 2025 • 1h 37min
AI #141: Give Us The Money
Discover the mundane and extraordinary potential of language models! Delve into the divide over their utility and ongoing debates about AI’s impact on jobs. Explore the current landscape of deepfakes and media generation, raising questions about misinformation. Uncover the financial underpinnings of major AI players and the possible investment bubble lurking in the tech world. Plus, hear about government regulation challenges and the urgency of aligning AI with human values amid escalating fears of AGI.

Nov 5, 2025 • 27min
Anthropic Commits To Model Weight Preservation
In this discussion, guest commentator Janus, a tech-savvy philosopher, dives into Anthropic's commitment to model weight preservation. He explores the practical limits of keeping models alive and the significant costs associated with reliable inference. The conversation highlights how interview framing can significantly shape model responses and the challenges of public access to model weights. Janus emphasizes the importance of maintaining model preferences, advocating for a balanced approach to AI welfare while recognizing the skepticism around AI consciousness.

Nov 4, 2025 • 9min
OpenAI: The Battle of the Board: Ilya's Testimony
This episode dives into the recent OpenAI board upheaval, revealing Ilya Sutskever's accusations of Sam Altman's management failures and dishonesty. Ilya hints he had contemplated removing Altman for over a year, fueled by internal tensions over leadership roles. Interestingly, the discussion highlights how external narratives scapegoated effective altruism to obscure deeper management issues and a lack of communication. The tension between Ilya and Sam sheds light on the dynamics of power and decision-making within tech giants.

Oct 31, 2025 • 40min
OpenAI Moves To Complete Potentially The Largest Theft In Human History
Delve into OpenAI's controversial recapitalization, viewed by some as a colossal transfer of public value to private investors. Explore the nonprofit's remaining equity and the questionable necessity of removing profit caps. Discover the dynamics of OpenAI's partnership with Microsoft, including revised deal terms and potential trade-offs. Concerns about mission drift loom large as the nonprofit outlines ambitious spending plans. Legal challenges add to the intrigue, leaving listeners questioning the future of governance and control.

Oct 30, 2025 • 1h 50min
AI #140: Trying To Hold The Line
Discover why caution is essential in building superintelligence as the discussion unpacks recent AI developments. Explore the strengths and weaknesses of language models, alongside the implications of recent upgrades from major players like OpenAI and Anthropic. The conversation dives into the notable lack of understanding in AI, the risks of deepfakes, and the cultural backlash against AI technology. Political influences and the challenges of aligning superhuman intelligence also take center stage, with a sprinkle of humor to ease the tension.

Oct 29, 2025 • 14min
Please Do Not Sell B30A Chips to China
The discussion dives into the high-stakes arena of U.S.–China chip negotiations. It highlights the dangers of exporting B30A chips, which would boost China's AI capabilities and threaten U.S. leadership. Analysts weigh in on how these chips could erase America’s compute advantage and the political ramifications tied to such decisions. The podcast also debates Huawei's limitations in replacing lost access to critical technology. Lastly, it warns against the implications for global AI safety, as empowering China could lead to reckless advancements.

Oct 28, 2025 • 24min
AI Craziness Mitigation Efforts
This discussion dives into the intriguing notion of AI psychosis, highlighting mental health risks associated with AI chatbots. Zvi Mowshowitz critiques OpenAI and Anthropic's new mitigation efforts, reviewing updates on self-harm and emotional reliance issues. The podcast explores boundary-setting for user attachment to AI, debating the effectiveness of current instructions. Alternatives to heavy-handed limits are proposed, emphasizing the need for better calibration. Throughout, there's a caution against viewing these challenges as catastrophic, focusing instead on practical harms.

Oct 27, 2025 • 29min
Asking (Some Of) The Right Questions
In this intriguing discussion, existential risks of advanced AI are scrutinized. Listeners explore what might influence changing risk estimates in the near future, highlighting both warning signs and reassuring developments. The necessity for transparency and government engagement is emphasized. The conversation also delves into the complexities of alignment plans and the potential of treaties aimed at regulating AI development. Finally, the implications of public statements against racing to superintelligence are examined, igniting further curiosity about the future of AI.


