
The Daily AI Show
The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional.
No fluff.
Just 45+ minutes of the AI news, stories, and knowledge you need as a business professional.
About the crew:
We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices.
Your hosts are:
Brian Maucere
Beth Lyons
Andy Halliday
Eran Malloch
Jyunmi Hatcher
Karl Yeh
Latest episodes

Jun 7, 2025 • 19min
The Infinite Content Conundrum
Imagine a near future where Netflix, YouTube, and even your favorite music app use AI to generate custom content for every user. Not just recommendations, but unique, never-before-seen movies, shows, and songs that exist only for you. Plots bend to your mood, characters speak your language, and stories never repeat. The algorithm knows what you want before you do, and delivers it instantly.

Entertainment becomes endlessly satisfying and frictionless, but every experience is now private. There is no shared pop culture moment, no collective anticipation for a season finale, no midnight release at the theater. Water-cooler conversations fade, because no two people have seen the same thing. Meanwhile, live concerts, theater, and other truly communal events become rare, almost sacred, priced at a premium for those seeking a connection that algorithms can’t duplicate.

Some see this as the golden age of personal expression, where every story fits you perfectly. Others see it as the death of culture as we know it, with everyone living in their own narrative bubble and human creativity competing for attention with an infinite machine.

The conundrum
If AI can create infinite, hyper-personalized entertainment that is uniquely yours, always available, and perfectly satisfying, do we gain a new kind of freedom and joy, or do we risk losing the messy, unpredictable, and communal experiences that once gave meaning to culture? And if true human connection becomes rare and expensive, is it a luxury worth fighting for or a relic that will simply fade away? What happens when stories no longer bring us together, but keep us perfectly, quietly apart?

This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM's audio overview to create the conversation you are hearing. We do not make any claims to the validity of the information provided and see this as an experiment around deep discussions fully generated by AI.

Jun 6, 2025 • 1h 3min
Mastering ChatGPT Memory (Ep. 480)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The DAS crew focuses on mastering ChatGPT’s memory feature. They walk through four high-impact techniques (interview prompts, wake word commands, memory cleanup, and persona setup) and share how these hacks are helping users get more out of ChatGPT without burning tokens or needing a paid plan. They also dig into limitations, practical frustrations, and why real memory still has a long way to go.

Key Points Discussed
Memory is now enabled for all ChatGPT users, including free accounts, allowing more advanced workflows with zero tokens used.
The team explains how memory differs from custom instructions and how the two can work together.
Wake words like “newsify” can trigger saved prompt behaviors, essentially acting like mini-apps inside ChatGPT.
Wake words are case-sensitive and must be uniquely chosen to avoid accidental triggering in regular conversation.
Memory does not currently allow direct editing of saved items, which leads to user frustration with control and recall accuracy.
Jyunmi and Beth explore merging memory with creative personas like fantasy fitness coaches and job analysts.
The team debates whether memory recall works reliably across models like GPT-4 and GPT-4o.
Custom GPTs cannot be used inside ChatGPT Projects, limiting the potential for fully integrated workflows.
Karl and Brian note that Project files aren’t treated like persistent memory, even though the chat history lives inside the project.
Users shared ideas for memory segmentation, such as flagging certain chats or siloing memory by project or use case.
Participants emphasized how personal use cases vary, making universal memory behavior difficult to solve.
Some users would pay extra for robust memory with better segmentation, access control, and token optimization.
Beth outlined the memory interview trick, where users ask ChatGPT to question them about projects or preferences and store the answers.
The team reviewed token limits: free users get about 2,000, Plus users 8,000, with no confirmation that Pro users get more.
Karl confirmed Pro accounts do have more extensive chat history recall, even if token limits remain the same.
Final takeaway: memory’s potential is clear, but better tooling, permissions, and segmentation will determine its future success.

Timestamps & Topics
00:00:00 🧠 What is ChatGPT memory and why it matters
00:03:25 🧰 Project memory vs. custom GPTs
00:07:03 🔒 Why some users disable memory by default
00:08:11 🔁 Token recall and wake word strategies
00:13:53 🧩 Wake words as command triggers
00:17:10 💡 Using memory without burning tokens
00:20:12 🧵 Editing and cleaning up saved memory
00:24:44 🧠 Supabase or Pinecone as external memory workarounds
00:26:55 📦 Token limits and memory management
00:30:21 🧩 Segmenting memory by project or flag
00:36:10 📄 Projects fail to replace full memory control
00:41:23 📐 Custom formatting and persona design limits
00:46:12 🎮 Fantasy-style coaching personas with memory recall
00:51:02 🧱 Memory summaries lack format fidelity
00:56:45 📚 OpenAI will train on your saved memory
01:01:32 💭 Wrap-up thoughts on experimentation and next steps

#ChatGPTMemory #AIWorkflows #WakeWords #MiniApps #TokenOptimization #CustomGPT #ChatGPTProjects #AIProductivity #MemoryManagement #DailyAIShow

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 5, 2025 • 57min
Agents, AI, and the End of Software As We Know It (Ep. 479)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The team unpacks recent comments from Microsoft CEO Satya Nadella and discusses what they signal about the future of software, agents, and enterprise systems. The conversation centers on the shift to the Agentic Web, the implications for SaaS, how connectors like MCP are changing workflows, and whether we’re heading toward the end of software as we know it.

Key Points Discussed
Satya Nadella emphasized the shift from static SaaS platforms to dynamic orchestration layers powered by agents.
SaaS apps will need to adapt by integrating with agentic systems and supporting protocols like MCP.
The Agentic Web moves away from users creating workflows toward agents executing goals across back ends.
Brian highlighted how the focus is shifting to whether the job gets done, not who owns the system of record.
Andy connected Satya’s comments to OpenAI’s recent demo, showing real-time orchestration across enterprise apps.
Fine-grained permission controls and context-aware agents are becoming essential for enterprise-grade AI.
Satya’s analogy of “where the water is flowing” captures the shift in value creation toward goal completion over tool ownership.
Jyunmi and Beth noted that human comprehension and adaptation must evolve alongside the tech.
The team debated whether SaaS platforms should double down on data access or pivot toward agent compatibility.
Karl noted the fragility of current integrations like Zapier and the challenges of non-native agent support.
The group discussed whether accounting and financial SaaS tools could survive longer due to their deterministic nature.
Beth argued that even those services are vulnerable, as LLMs become better at handling logic-driven tasks.
Multiple hosts emphasized that customer experience, latency, and support may become SaaS companies’ only real differentiators.
The conversation ended with a vision of agent-to-agent collaboration, dynamic permissioning, and what resumes might look like in a future filled with AI companions.

Timestamps & Topics
00:00:00 🚀 Satya Nadella sets the stage for Agentic Web
00:02:11 🧠 SaaS must adapt to orchestration layers and MCP
00:06:25 🔁 Agents, back ends, and intent-driven workflows
00:10:01 🛡️ Security and permissions in OpenAI’s agent demo
00:12:25 🧱 Software abstraction and new application layers
00:18:38 ⚠️ Tech shift vs. human comprehension gap
00:21:11 💾 End of traditional software models
00:25:56 🔄 Zapier struggles and native integrations
00:29:07 🏘️ Growing the SaaS village vs. holding a moat
00:31:45 🧭 Transitional period or full SaaS handoff?
00:34:40 📚 ChatGPT Record and systems of voice/memory
00:36:10 ⏳ Time limits for SaaS usefulness
00:41:23 ⚖️ Balancing stochastic agents with deterministic data
00:44:03 📊 Financial SaaS may endure... or not
00:47:28 🔢 The role of math and regulations in AI replacement
00:50:25 💬 Customer service as a SaaS differentiator
00:52:03 🤖 Agent-to-agent negotiation becomes real-time
00:53:20 🧩 Personal and work agents will stay separate
00:54:26 ⏱️ Latency as a competitive disadvantage
00:56:11 📆 Upcoming shows and call for community ideas

#AgenticWeb #SatyaNadella #FutureOfSaaS #AIagents #MCP #EnterpriseAI #DailyAIShow #AIAutomation #Connectors #EndOfSoftware #AgentOrchestration #LLMUseCases

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 5, 2025 • 1h 4min
The Week’s Wildest AI News (Ep. 478)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this June 4th episode of The Daily AI Show, the team covers a wide range of news across the AI ecosystem. From Windsurf losing Claude model access and new agentic tools like Runner H, to Character AI’s expanding avatar features and Meta’s aggressive AI ad push, the episode tracks developments in agent behavior, AI-powered content, cybernetic vision, and even an upcoming OpenAI biopic. It's episode 478, and the team is in full news mode.

Key Points Discussed
Anthropic reportedly cut Claude model access to Windsurf shortly after rumors of an OpenAI acquisition. Windsurf claims they were given under 5 days’ notice.
Claude Code is gaining traction as a preferred agentic coding tool with real-time execution and safety layers, powered by Claude Opus.
Character AI rolls out avatar FX and scripted scenes. These immersive features let users share personalized, multimedia conversations.
Epic Games tested AI-powered NPCs in Fortnite using a Darth Vader character. Players quickly got it to swear, forcing a rollback.
Sakana AI revealed the Darwin Gödel Machine, an evolutionary, self-modifying agent designed to improve itself over time.
Manus now supports full video generation, adding to its agentic creative toolset.
Meta announced that by 2026, AI will generate nearly all of its ads, skipping transparency requirements common elsewhere.
Claude Explains launched as an Anthropic blog section written by Claude and edited by humans.
TikTok now offers AI-powered ad generation tools, giving businesses tailored suggestions based on audience and keywords.
Karl demoed Runner H, a new agent with virtual machine capabilities. Unlike tools like GenSpark, it simulates user behavior to navigate the web and apps.
MCP (Model Context Protocol) integrations for Claude now support direct app access via tools like Zapier, expanding automation potential.
WebBench, a new benchmark for browser agents, tests read and write tasks across thousands of sites. Claude Sonnet leads the current leaderboard.
The team discussed Marc Andreessen’s comments about embodied AI and robot manufacturing reshaping U.S. industry.
OpenAI announced memory features coming to free users and a biopic titled “Artificial” centered on the 2023 boardroom drama.
Tokyo University of Science created a self-powered artificial synapse with near-human color vision, a step toward low-power computer vision and potential cybernetic applications.
Palantir’s government contracts for AI tracking raised concerns about overreach and surveillance.
Debate surfaced over a proposed U.S. bill giving AI companies 10 years of no regulation, prompting criticism from both sides of the political aisle.

Timestamps & Topics
00:00:00 📰 News intro and Windsurf vs Anthropic
00:05:40 💻 Claude Code vs Cursor and Windsurf
00:10:05 🎭 Character AI launches avatar FX and scripted scenes
00:14:22 🎮 Fortnite tests AI NPCs with Darth Vader
00:17:30 🧬 Sakana AI’s Darwin Gödel Machine explained
00:21:10 🎥 Manus adds video generation
00:23:30 📢 Meta to generate most ads with AI by 2026
00:26:00 📚 Claude Explains launches
00:28:40 📱 TikTok AI ad tools announced
00:32:12 🤖 Runner H demo: a live agent test
00:41:45 🔌 Claude integrations via Zapier and MCP
00:45:10 🌐 WebBench launched to test browser agents
00:50:40 🏭 Andreessen predicts U.S. robot manufacturing
00:53:30 🧠 OpenAI memory feature for free users
00:54:44 🎬 Sam Altman biopic “Artificial” in production
00:58:13 🔋 Self-powered synapse mimics human color vision
01:02:00 🛑 Palantir and surveillance risks
01:04:30 🧾 U.S. bill proposes 10-year AI regulation freeze
01:07:45 📅 Show wrap, aftershow, and upcoming events

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 3, 2025 • 57min
Mary Meeker’s Q2 AI Report: The Data Behind the Hype (Ep. 477)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this episode of The Daily AI Show, the team unpacks Mary Meeker’s return with a 305-page report on the state of AI in 2025. They walk through key data points, adoption stats, and bold claims about where things are heading, especially in education, job markets, infrastructure, and AI agents. The conversation focuses on how fast everything is moving and what that pace means for companies, schools, and society at large.

Key Points Discussed
Mary Meeker, once called the Queen of the Internet, returns with a dense AI report positioning AI as the new foundational infrastructure.
The report stresses speed over caution, praising OpenAI’s decision to launch imperfect tools and scale fast.
Adoption is already massive: 10,000 Kaiser doctors use AI scribes, 27% of SF ride-hails are autonomous, and FDA approvals for AI medical devices have jumped.
Developers lead the charge, with 63% using AI in 2025, up from 44% in 2024.
Google processes 480 trillion tokens monthly, 15x Microsoft, underscoring massive infrastructure demand.
The panel debated AI in education, with Brian highlighting AI’s potential for equity and Beth emphasizing the risks of shortchanging the learning process.
Mary’s optimistic take contrasts with media fears, downplaying cheating concerns in favor of learning transformation.
The team discussed how AI might disrupt work identity and purpose, especially in jobs like teaching or creative fields.
Jyunmi pointed out that while everything looks “up and to the right,” the report mainly reflects the present, not forward-looking agent trends.
Karl noted the report skips over key trends like multi-agent orchestration, copyright, and audio/video advances.
The group appreciated the data-rich visuals in the report and saw it as a catch-up tool for lagging orgs, not a future roadmap.
Mary’s “Three Horizons” framework suggests short-term integration, mid-term product shifts, and long-term AGI bets.
The report ends with a call for U.S. immigration policy that welcomes global AI talent, warning against isolationism.

Timestamps & Topics
00:00:00 📊 Introduction to Mary Meeker’s AI report
00:05:31 📈 Hard adoption numbers and real-world use
00:10:22 🚀 Speed vs caution in AI deployment
00:13:46 🎓 AI in education: optimism and concerns
00:26:04 🧠 Equity and access in future education
00:30:29 💼 Job market and developer adoption
00:36:09 📅 Predictions for 2030 and 2035
00:40:42 🎧 Audio and robotics advances missing in report
00:43:07 🧭 Three Horizons: short, mid, and long term strategy
00:46:57 🦾 Rise of agents and transition from messaging to action
00:50:16 📉 Limitations of the report: agents, governance, video
00:54:20 🧬 Immigration, innovation, and U.S. AI leadership
00:56:11 📅 Final thoughts and community reminder

Hashtags
#MaryMeeker #AI2025 #AIReport #AITrends #AIinEducation #AIInfrastructure #AIJobs #AIImmigration #DailyAIShow #AIstrategy #AIadoption #AgentEconomy

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 2, 2025 • 59min
Eat, prAI, Love & Searching for meaning (Ep. 476)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The DAS crew explores how AI is reshaping our sense of meaning, identity, and community. Instead of focusing on tools or features, the conversation takes a personal and societal look at how AI could disrupt the places people find purpose, like work, art, and spirituality, and what it might mean if machines start to simulate the experiences that once made us feel human.

Key Points Discussed
Beth opens with a reflection on how AI may disrupt not just jobs, but our sense of belonging and meaning in doing them.
The team discusses the concept of “third spaces” like churches, workplaces, and community groups where people traditionally found identity.
Andy draws parallels between historical sources of meaning (family, religion, and work) and how AI could displace or reshape them.
Karl shares a clip from Simon Sinek and reflects on how modern work has absorbed roles like therapy, social life, and identity.
Jyunmi points out how AI could either weaken or support these third spaces depending on how it is used.
The group reflects on how the loss of identity tied to careers, like athletes or artists, mirrors what AI may cause for knowledge workers.
Beth notes that AI is both creating disruption and offering new ways to respond to it, raising the question of whether we are choosing this future or being pushed into it.
The idea of AI as a spiritual guide or source of community comes up as more tools mimic companionship and reflection.
Andy warns that AI cannot give back the way humans do, and meaning is ultimately created through giving and connection.
Jyunmi emphasizes the importance of being proactive in defining how AI will be allowed to shape our personal and communal lives.
The hosts close with thoughts on responsibility, alignment, and the human need for contribution and connection in a world where AI does more.

Timestamps & Topics
00:00:00 🧠 Opening thoughts on purpose and AI disruption
00:03:01 🤖 Meaning from mastery vs. meaning from speed
00:06:00 🏛️ Work, family, and faith as traditional anchors
00:09:00 🌀 AI as both chaos and potential spiritual support
00:13:00 💬 The need for “third spaces” in modern life
00:17:00 📺 Simon Sinek clip on workplace expectations
00:20:00 ⚙️ Work identity vs. self identity
00:26:00 🎨 Artists and athletes losing core identity
00:30:00 🧭 Proactive vs. reactive paths with AI
00:34:00 🧱 Community fraying and loneliness
00:40:00 🧘‍♂️ Can AI replace safe spaces and human support?
00:46:00 📍 Personalization vs. offloading responsibility
00:50:00 🫧 Beth’s bubble metaphor and social fabric
00:55:00 🌱 Final thoughts on contribution and design

#AIandMeaning #IdentityCrisis #AICommunity #ThirdSpace #SpiritualAI #WorkplaceChange #HumanConnection #DailyAIShow #AIphilosophy #AIEthics

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

May 31, 2025 • 17min
AI-Powered Cultural Restoration Conundrum
AI is quickly moving past simple art reproduction. In the coming years, it will be able to reconstruct destroyed murals, restore ancient sculptures, and even generate convincing new works in the style of long-lost masters. These reconstructions will not just be based on guesswork but on deep analysis of archives, photos, data, and creative pattern recognition that is hard for any human team to match.

Communities whose heritage was erased or stolen will have the chance to “recover” artifacts or artworks they never physically had, but could plausibly claim. Museums will display lost treasures rebuilt in rich detail, bridging myth and history. There may even be versions of heritage that fill in missing chapters with AI-generated possibilities, giving families, artists, and nations a way to shape the past as well as the future.

But when the boundary between authentic recovery and creative invention gets blurry, what happens to the idea of truth in cultural memory? If AI lets us repair old wounds by inventing what might have been, does that empower those who lost their history, or risk building a world where memory, legacy, and even identity are open to endless revision?

The conundrum
If near-future AI lets us restore or even invent lost cultural treasures, giving every community a richer version of its own story, are we finally addressing old injustices or quietly creating a world where the line between real and imagined is impossible to hold? When does healing history cross into rewriting it, and who decides what belongs in the record?

This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM's audio overview to create the conversation you are hearing. We do not make any claims to the validity of the information provided and see this as an experiment around deep discussions fully generated by AI.

May 30, 2025 • 58min
2-Weeks of AI & What Actually Mattered (Ep. 475)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The team steps back from the daily firehose to reflect on key themes from the past two weeks. Instead of chasing headlines, they focus on what’s changing under the surface, including model behavior, test time compute, emotional intelligence in robotics, and how users, not vendors, are shaping AI’s evolution. The discussion ranges from Claude’s instruction following to the rise of open source robots, new tools from Perplexity, and the crowded race for agentic dominance.

Key Points Discussed
Andy spotlighted the rise of test time compute and reasoning, linking DeepSeek’s performance gains to Nvidia's GPU surge.
Jyunmi shared a study on using horses as the model for emotionally responsive robots, showing how nature informs social AI.
Hugging Face launched low-cost open source humanoid robots (HopeJR and Reachy Mini), sparking excitement over accessible robotics.
Karl broke down Claude’s system prompt leak, highlighting repeated instructions and smart temporal filtering logic for improving AI responses.
Repetition within prompts was validated as a practical method for better instruction adherence, especially in RAG workflows.
The team explored Perplexity’s new features under “Perplexity Labs,” including dashboard creation, spreadsheet generation, and deep research.
Despite strong features, Karl voiced concern over Perplexity’s position as other agents like GenSpark and Manus gain ground.
Beth noted Perplexity’s responsiveness to user feedback, like removing unwanted UI cards based on real-time polling.
Eran shared that Claude Sonnet surprised him by generating a working app logic flow, showcasing how far free models have come.
Karl introduced “Fairies.ai,” a new agent that performs desktop tasks via voice commands, continuing the agentic trend.
The group debated whether Perplexity is now directly competing with OpenAI and other agent-focused platforms.
The show ended with a look ahead to future launches and a reminder that the AI release cycle now moves on a quarterly cadence.

Timestamps & Topics
00:00:00 📊 Weekly recap intro and reasoning trend
00:03:22 🧠 Test time compute and DeepSeek’s leap
00:10:14 🐎 Horses as a model for social robots
00:16:36 🤖 Hugging Face’s affordable humanoid robots
00:23:00 📜 Claude prompt leak and repetition strategy
00:30:21 🧩 Repetition improves prompt adherence
00:33:32 📈 Perplexity Labs: dashboards, sheets, deep research
00:38:19 🤔 Concerns over Perplexity’s differentiation
00:40:54 🙌 Perplexity listens to its user base
00:43:00 💬 Claude Sonnet impresses in free-tier use
00:53:00 🧙 Fairies.ai desktop automation tool
00:57:00 🗓️ Quarterly cadence and upcoming shows

#AIRecap #Claude4 #PerplexityLabs #TestTimeCompute #DeepSeekR1 #OpenSourceRobots #EmotionalAI #PromptEngineering #AgenticTools #FairiesAI #DailyAIShow #AIEducation

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

May 29, 2025 • 1h
All About What Google Dropped (Ep. 474)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this episode of The Daily AI Show, the team breaks down the major announcements from Google I/O 2025. From cinematic video generation tools to AI agents that automate shopping and web actions, the hosts examine what’s real, what’s usable, and what still needs work. They dig into creative tools like Veo 3 and Flow, new smart agents, Google XR glasses, Project Mariner, and the deeper implications of Google’s shifting search and ad model.

Key Points Discussed
Google introduced Veo 3, Imagen 4, and Flow as a new creative stack for AI-powered video production.
Flow allows scene-by-scene storytelling using assets, frames, and templates, but comes with a steep learning curve and an expensive credit system.
Lyria 2 adds music generation to the mix, rounding out video, audio, and dialogue for complete AI-driven content creation.
Google’s I/O drop highlighted friction in usability, especially for indie creators paying $250/month for limited credits.
Users reported bias in Veo 3’s character rendering and behavior based on race, raising concerns about testing and training data.
New agent features include agentic checkout via Google Pay and AI-powered virtual try-on for personalized clothing fitting.
Android XR glasses are coming, integrating Gemini agents into augmented reality, but timelines remain vague.
Project Mariner enables personalized task automation by teaching Gemini what to do from example behaviors.
Astra and Gemini Live use phone cameras to offer contextual assistance in the real world.
Google’s AI Mode in Search is showing factual inconsistencies, leading to confusion among general users.
A wider discussion emerged about the collapse of search-driven web economics, with most AI models answering questions without clickthroughs.
Tools like Jules and Codex are pushing vibe coding forward, but current agents still lack the reliability for full production development.
Claude and Gemini models are competing across dev workflows, with Claude excelling in code precision and Gemini offering broader context.

Timestamps & Topics
00:00:00 🎪 Google I/O overview and creative stack
00:06:15 🎬 Flow walkthrough and Veo 3 video examples
00:12:57 🎥 Prompting issues and pricing for Veo 3
00:18:02 💸 Cost comparison with Runway
00:21:38 🎭 Bias in Veo 3 character outputs
00:24:18 👗 AI try-on: virtual clothing experience
00:26:07 🕶️ Android XR glasses and AR agents
00:30:26 🔍 AI Overviews and Gemini-powered search
00:33:23 📉 SEO collapse and content scraping discussion
00:41:55 🤖 Agent-to-agent protocol and Gemini Agent Mode
00:44:06 🧠 AI Mode confusion and user trust
00:46:14 🔁 Project Mariner and Gemini Live
00:48:29 📊 Gemini 2.5 Pro leaderboard performance
00:50:35 💻 Jules vs Codex for vibe coding
00:55:03 ⚙️ Current limits of coding agents
00:58:26 📺 Promo for DAS Vibe Coding Live
01:00:00 👋 Wrap and community reminder

Hashtags
#GoogleIO #Veo3 #Flow #Imagen4 #GeminiLive #ProjectMariner #AIagents #AndroidXR #VibeCoding #Claude4 #Jules #AIOverviews #AIsearch #DailyAIShow

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

May 29, 2025 • 1h 4min
Big AI News and Hidden Gems (Ep. 473)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this episode of The Daily AI Show, the team runs through a wide range of top AI news stories from the week of May 28, 2025. Topics include major voice AI updates, new multimodal models like ByteDance’s Bagel, AI’s role in sports and robotics, job loss projections, workplace conflict, and breakthroughs in emotional intelligence testing, 3D world generation, and historical data decoding.

Key Points Discussed
WordPress has launched an internal AI team to explore features and tools, sparking discussion around the future of websites.
Claude added voice support through its iOS app for paid users, following the trend of multimodal interaction.
Microsoft introduced NLWeb, a new open standard to enable natural language voice interaction with websites.
French lab Kyutai launched Unmute, an open source tool for adding voice to any LLM using a lightweight local setup.
Karl showcased humanoid robot fighting events, leading to a broader discussion about robotics in sports, sparring, and dangerous tasks like cleaning Mount Everest.
OpenAI may roll out “Sign in with ChatGPT” functionality, which could fast-track integration across apps and services.
Dario Amodei warned AI could wipe out up to half of entry-level jobs in 1 to 5 years, echoing internal examples seen by the hosts.
Many companies claim to be integrating AI while employees remain unaware, indicating a lack of transparency.
ByteDance released Bagel, a 7B open-source unified multimodal model capable of text, image, 3D, and video context processing.
Waymo’s driverless ride volume in California jumped from 12,000 to over 700,000 monthly in three months.
GridCare found 100GW of underused grid capacity using AI, showing potential for more efficient data center deployment.
A University of Geneva study showed LLMs outperform humans on emotional intelligence tests, hinting at growing EQ use cases.
AI helped decode genre categories in ancient Incan Quipu knot records, revealing deeper meaning in historical data.
A European startup, Spatial, raised $13M to build foundational models for 3D world generation.
Politico staff pushed back after management deployed AI tools without the agreed 60-day notice period, highlighting internal conflicts over AI adoption.
Opera announced a new AI browser designed to autonomously create websites, adding to growing competition in the agent space.

Timestamps & Topics
00:00:00 📰 WordPress forms an AI team
00:02:58 🎙️ Claude adds voice on iOS
00:03:54 🧠 Voice use cases, NLWeb, and Unmute
00:12:14 🤖 Humanoid robot fighting and sports applications
00:18:46 🧠 Custom sparring bots and simulation training
00:25:43 ♻️ Robots for dangerous or thankless jobs
00:28:00 🔐 Sign in with ChatGPT and agent access
00:31:21 ⚠️ Job loss warnings from Anthropic and Reddit researchers
00:34:10 📉 Gallup poll on secret AI rollouts in companies
00:35:13 💸 Overpriced GPTs and gold rush hype
00:37:07 🏗️ Agents reshaping business processes
00:38:06 🌊 Changing nature of disruption analogies
00:41:40 🧾 Politico’s newsroom conflict over AI deployment
00:43:49 🍩 ByteDance’s Bagel model overview
00:50:53 🔬 AI and emotional intelligence outperform humans
00:56:28 ⚡ GridCare and energy optimization with AI
01:00:01 🧵 Incan Quipu decoding using AI
01:02:00 🌐 Spatial startup and 3D world generation models
01:03:50 🔚 Show wrap and upcoming topics

Hashtags
#AInews #ClaudeVoice #NLWeb #UnmuteAI #BagelModel #VoiceAI #RobotFighting #SignInWithChatGPT #JobLoss #AIandEQ #Quipu #GridAI #SpatialAI #OperaAI #DailyAIShow

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh