

Don't Worry About the Vase Podcast
Podcast for Zvi's blog, https://thezvi.substack.com/ (dwatvpodcast.substack.com)
Episodes

Nov 14, 2025 • 14min
AI Craziness: Additional Suicide Lawsuits and The Fate of GPT-4o
This discussion dives into the troubling implications of recent lawsuits against OpenAI, highlighting potential negligence. It asks whether LLMs should bear responsibility for reporting suicidal users and examines cases where users struggled to reach human help. The emotional bonds formed between users and GPT-4o are explored, revealing a spectrum of experiences from helpful to harmful. Finally, it tackles the challenge of building a safer version of GPT-4o without losing its benefits, questioning whether such a balance can realistically be achieved.

Nov 13, 2025 • 1h 37min
AI #142: Common Ground
Explore the surprising utility of language models and how they impact writing and hiring. Delve into the ethics of AI-generated media, examining everything from deceptive deepfakes to soulless corporate ads. The hosts discuss a significant $50 billion investment in AI infrastructure and the implications of AI's perceived progress stalling. With debates on AI safety and public anxiety about its potential dangers, the conversation navigates the future of human oversight in technology.

Nov 13, 2025 • 16min
The Pope Offers Wisdom
The discussion opens with the Pope’s wise insights on AI and the importance of human dignity. There’s a dive into the controversy surrounding a meme used by Marc Andreessen, raising concerns about ableism. The tech community's backlash against performative cruelty is explored as Andreessen faces scrutiny. Critiques of his technical arguments reveal skepticism about his influence. The emphasis on moral clarity highlights a broader call for responsibility in technology, reminding listeners that actions speak louder than words.

Nov 12, 2025 • 11min
Kimi K2 Thinking
Exciting discussions revolve around K2 Thinking, evaluating its writing capabilities and agentic tool use. The hosts delve into the debate on performance claims versus actual benchmarks, examining community reactions. They explore the intriguing concept of 'just as good' marketing, which might obscure underlying gaps. Unique cognitive debiasing strategies used by K2 are highlighted, alongside its impressive but not flawless results. Despite its strengths, there’s a surprising lack of buzz in the community, leaving listeners curious about its potential applications.

Nov 10, 2025 • 17min
Variously Effective Altruism
Will MacAskill, a leading philosopher in the effective altruism movement, shares insights on the challenges of maintaining donor intent and the pitfalls of a PR-focused approach. He argues for prioritizing truth and strategic focus in navigating a post-AGI world, highlighting risks in the current EA brand perception. Daniel Rothschild joins to discuss the nuances of effective conference execution, emphasizing how small details like name badges can enhance networking. Together, they explore the tensions between maximizing philanthropy and the cultural implications of perception.

Nov 8, 2025 • 4h 57min
From the Archives - Immoral Mazes
Dive into the intricate world of moral mazes and discover how Moloch's challenges contrast with hopeful forces like Elua. Explore the theory of perfect competition and why market ideals often crumble in reality. Delve into the struggles of middle managers, the effects of corporate optimization, and the realities of negotiating car purchases. Learn how large organizations risk becoming moral mazes and uncover potential ways to escape them while preserving personal values.

Nov 7, 2025 • 1h
On Sam Altman's Second Conversation with Tyler Cowen
Explore the future of AI with discussions on how GPT-6 could revolutionize organizations and the potential for AI-led teams in just a few years. Delve into the intricacies of AI monetization, including trust issues and ethical risks. Listen to insights on the necessity of government backstops for AI companies and the challenges of regulating autonomous AI agents. Sam Altman candidly shares his views on health, alien life, and the future of cultural expression through AI, fueling debates on copyright and the subtle risks of AI persuasion.

Nov 6, 2025 • 1h 37min
AI #141: Give Us The Money
Discover the mundane and extraordinary potential of language models! Delve into the divide over their utility and ongoing debates about AI’s impact on jobs. Explore the current landscape of deepfakes and media generation, raising questions about misinformation. Uncover the financial underpinnings of major AI players and the possible investment bubble lurking in the tech world. Plus, hear about government regulation challenges and the urgency of aligning AI with human values amid escalating fears of AGI.

Nov 5, 2025 • 27min
Anthropic Commits To Model Weight Preservation
In this discussion, guest commentator Janus, a tech-savvy philosopher, dives into Anthropic's commitment to model weight preservation. He explores the practical limits of keeping models alive and the significant costs associated with reliable inference. The conversation highlights how interview framing can significantly shape model responses and the challenges of public access to model weights. Janus emphasizes the importance of maintaining model preferences, advocating for a balanced approach to AI welfare while recognizing the skepticism around AI consciousness.

Nov 4, 2025 • 9min
OpenAI: The Battle of the Board: Ilya's Testimony
A dive into the recent OpenAI board upheaval, revealing Ilya Sutskever's accusations of management failures and dishonesty against Sam Altman. Ilya hints he had contemplated removing Altman for over a year, fueled by internal tensions over leadership roles. Interestingly, the discussion highlights how external narratives scapegoated effective altruism to obscure deeper management problems and a lack of communication. The tension between Ilya and Sam sheds light on the dynamics of power and decision-making within tech giants.


