

The Daily AI Show
The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran
The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional.
No fluff.
Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional.
About the crew:
We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices.
Your hosts are:
Brian Maucere
Beth Lyons
Andy Halliday
Eran Malloch
Jyunmi Hatcher
Karl Yeh
Episodes

Jun 12, 2025 • 58min
Is Perplexity Labs The Future of AI Work? (Ep. 484)
The discussion revolves around Perplexity Labs, a project operating system that streamlines AI workflows. It highlights how the platform automates complex tasks, from research to content creation. Hands-on demos show its capability to generate complete project packages with a single prompt. Comparisons with Gen Spark reveal differing strengths in executing custom tasks. The conversation also touches on future implications for sales and education, emphasizing enhanced collaboration and user experience through AI-assisted tools.

Jun 10, 2025 • 50min
AI for the Curious Citizen: Science in the Age of Algorithms (Ep. 482)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The team explores the rise of citizen scientists in the age of AI. From whale tracking to personalized healthcare, AI is lowering barriers and enabling everyday people to contribute to scientific discovery. The discussion blends storytelling, use cases, and philosophical questions about who gets to participate in research and how AI is changing what science looks like.

Key Points Discussed
- Citizen science is expanding thanks to AI tools that make participation and data collection easier.
- Platforms like Zooniverse are creating collaborative opportunities between professionals and the public.
- Tools like FlukeBook help identify whales by their tails, combining crowdsourced photos with AI pattern recognition.
- AI is helping individuals analyze personal health data, even leading to better follow-up questions for doctors.
- The concept of "n=1" (study of one) becomes powerful when AI helps individuals find meaning in their own data.
- Edge AI devices, like portable defibrillators, are already saving lives by offering smarter, AI-guided instructions.
- Historically, citizen science was limited by access, but AI is now democratizing capabilities like image analysis, pattern recognition, and medical inference.
- Personalized experiments in areas like nutrition and wellness are becoming viable without lab-level resources.
- Open-source models allow hobbyists to build custom tools and conduct real research at relatively low cost.
- AI raises new challenges in discerning quality data from bad research, but it also enables better validation of past studies.
- There is strong potential for grassroots movements to drive change through AI-enhanced data sharing and insight.

Timestamps & Topics
00:00:00 🧬 Introduction to AI citizen science
00:01:40 🐋 Whale tracking with AI and FlukeBook
00:03:00 📚 Lorenzo's Oil and early citizen-led research
00:05:45 🌐 Zooniverse and global collaboration
00:07:43 🧠 AI as partner, not replacement
00:10:00 📰 Citizen journalism parallels
00:13:44 🧰 Lowering the barrier to entry in science
00:17:05 📷 Voice and image data collection projects
00:21:47 🦆 Rubber ducky ocean data and accidental science
00:24:11 🌾 Personalized health and gluten studies
00:26:00 🏥 Using ChatGPT to understand CT scans
00:30:35 🧪 You are statistically significant to yourself
00:35:36 ⚡ AI-powered edge devices and AEDs
00:39:38 🧠 Building personalized models for research
00:41:27 🔍 AI helping reassess old research
00:44:00 🌱 Localized solutions through grassroots efforts
00:47:22 🤝 Invitation to join a community-led citizen science project

#CitizenScience #AIForGood #AIAccessibility #Zooniverse #Biohacking #PersonalHealth #EdgeAI #OpenSourceScience #ScienceForAll #FlukeBook #DailyAIShow #GrassrootsScience

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 9, 2025 • 59min
AI Agent Orchestration: What You MUST Know (Ep. 481)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The team breaks down two OpenAI-linked articles on the rise of agent orchestrators and the coming age of agent specifications. They explore what it means for expertise, jobs, company structure, and how AI orchestration is shaping up as a must-have skill. The conversation blends practical insight with long-term implications for individuals, startups, and legacy companies.

Key Points Discussed
- The "agent orchestrator" role is emerging as a key career path, shifting value from expertise to coordination.
- AI democratizes knowledge, forcing experts to rethink their value in a world where anyone can call an API.
- Orchestrators don't need deep domain knowledge but must know how systems interact and where agents can plug in.
- Agent management literacy is becoming the new Excel: basic workplace fluency for the next decade.
- Organizations need to flatten hierarchies and break silos to fully benefit from agentic workflows.
- Startups with one person and dozens of agents may outpace slow-moving incumbents with rigid workflows.
- The resource optimization layer of orchestration includes knowing when to deploy agents, balance compute costs, and iterate efficiently.
- Experience managing complex systems (as stage managers, air traffic controllers, or even gamers do) translates well to orchestrator roles.
- Generalists with broad experience may thrive more than traditional specialists in this new environment.
- A shift toward freelance, contract-style work is accelerating as teams become agent-enhanced rather than role-defined.
- Companies that fail to overhaul their systems for agent participation may fall behind or collapse.
- The future of hiring may focus on what personal AI infrastructure you bring with you, not just your resume.
- Successful adaptation depends on documenting your workflows, experimenting constantly, and rethinking traditional roles and org structures.

Timestamps & Topics
00:00:00 🚀 Intro and context for the orchestrator concept
00:01:34 🧠 Expertise gets democratized
00:04:35 🎓 Training for orchestration, not gatekeeping
00:07:06 🎭 Stage managers and improv analogies
00:10:03 📊 Resource optimization as an orchestration skill
00:13:26 🕹️ Civilization and game-based thinking
00:16:35 🧮 Agent literacy as workplace fluency
00:21:11 🏗️ Systems vs culture in enterprise adoption
00:25:56 🔁 Zapier fragility and real-time orchestration
00:31:09 💼 Agent-backed personal brand in job market
00:36:09 🧱 Legacy systems and institutional memory
00:41:57 🌍 Gravity shift metaphor and awareness gaps
00:46:12 🎯 Campaign-style teams and short-term employment
00:50:24 🏢 Flattening orgs and replacing the C-suite
00:52:05 🧬 Infrastructure is almost ready, agents still catching up
00:54:23 🔮 Challenge assumptions and explore what's possible
00:56:07 ✍️ Record everything to prove impact and train models

#AgentOrchestrator #AgenticWeb #FutureOfWork #AIJobs #AIAgents #OpenAI #WorkforceShift #Generalists #AgentLiteracy #EnterpriseAI #DailyAIShow #OrchestrationSkills #FutureOfSaaS

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 7, 2025 • 19min
The Infinite Content Conundrum
Imagine a near future where Netflix, YouTube, and even your favorite music app use AI to generate custom content for every user. Not just recommendations, but unique, never-before-seen movies, shows, and songs that exist only for you. Plots bend to your mood, characters speak your language, and stories never repeat. The algorithm knows what you want before you do, and delivers it instantly.

Entertainment becomes endlessly satisfying and frictionless, but every experience is now private. There is no shared pop culture moment, no collective anticipation for a season finale, no midnight release at the theater. Water-cooler conversations fade, because no two people have seen the same thing. Meanwhile, live concerts, theater, and other truly communal events become rare, almost sacred, priced at a premium for those seeking a connection that algorithms can't duplicate.

Some see this as the golden age of personal expression, where every story fits you perfectly. Others see it as the death of culture as we know it, with everyone living in their own narrative bubble and human creativity competing for attention with an infinite machine.

The conundrum
If AI can create infinite, hyper-personalized entertainment, content that's uniquely yours, always available, and perfectly satisfying, do we gain a new kind of freedom and joy, or do we risk losing the messy, unpredictable, and communal experiences that once gave meaning to culture? And if true human connection becomes rare and expensive, is it a luxury worth fighting for or a relic that will simply fade away?

What happens when stories no longer bring us together, but keep us perfectly, quietly apart?

This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM's audio overview to create the conversation you are hearing. We make no claims about the validity of the information provided and see this as an experiment in deep discussions fully generated by AI.

Jun 6, 2025 • 1h 3min
Mastering ChatGPT Memory (Ep. 480)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The DAS crew focuses on mastering ChatGPT's memory feature. They walk through four high-impact techniques (interview prompts, wake word commands, memory cleanup, and persona setup) and share how these hacks are helping users get more out of ChatGPT without burning tokens or needing a paid plan. They also dig into limitations, practical frustrations, and why real memory still has a long way to go.

Key Points Discussed
- Memory is now enabled for all ChatGPT users, including free accounts, allowing more advanced workflows with zero tokens used.
- The team explains how memory differs from custom instructions and how the two can work together.
- Wake words like "newsify" can trigger saved prompt behaviors, essentially acting like mini-apps inside ChatGPT.
- Wake words are case-sensitive and must be uniquely chosen to avoid accidental triggering in regular conversation.
- Memory does not currently allow direct editing of saved items, which leads to user frustration with control and recall accuracy.
- Jyunmi and Beth explore merging memory with creative personas like fantasy fitness coaches and job analysts.
- The team debates whether memory recall works reliably across models like GPT-4 and GPT-4o.
- Custom GPTs cannot be used inside ChatGPT Projects, limiting the potential for fully integrated workflows.
- Karl and Brian note that Project files aren't treated like persistent memory, even though the chat history lives inside the project.
- Users shared ideas for memory segmentation, such as flagging certain chats or siloing memory by project or use case.
- Participants emphasized how personal use cases vary, making universal memory behavior difficult to solve.
- Some users would pay extra for robust memory with better segmentation, access control, and token optimization.
- Beth outlined the memory interview trick, where users ask ChatGPT to question them about projects or preferences and store the answers.
- The team reviewed token limits: free users get about 2,000, Plus users 8,000, with no confirmation that Pro users get more.
- Karl confirmed Pro accounts do have more extensive chat history recall, even if token limits remain the same.
- Final takeaway: memory's potential is clear, but better tooling, permissions, and segmentation will determine its future success.

Timestamps & Topics
00:00:00 🧠 What is ChatGPT memory and why it matters
00:03:25 🧰 Project memory vs. custom GPTs
00:07:03 🔒 Why some users disable memory by default
00:08:11 🔁 Token recall and wake word strategies
00:13:53 🧩 Wake words as command triggers
00:17:10 💡 Using memory without burning tokens
00:20:12 🧵 Editing and cleaning up saved memory
00:24:44 🧠 Supabase or Pinecone as external memory workarounds
00:26:55 📦 Token limits and memory management
00:30:21 🧩 Segmenting memory by project or flag
00:36:10 📄 Projects fail to replace full memory control
00:41:23 📐 Custom formatting and persona design limits
00:46:12 🎮 Fantasy-style coaching personas with memory recall
00:51:02 🧱 Memory summaries lack format fidelity
00:56:45 📚 OpenAI will train on your saved memory
01:01:32 💭 Wrap-up thoughts on experimentation and next steps

#ChatGPTMemory #AIWorkflows #WakeWords #MiniApps #TokenOptimization #CustomGPT #ChatGPTProjects #AIProductivity #MemoryManagement #DailyAIShow

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 5, 2025 • 57min
Agents, AI, and the End of Software As We Know It (Ep. 479)
In this engaging discussion, experts dive into the future of software amidst the rise of the Agentic Web, where AI-driven agents take over traditional workflows. They explore how companies must adapt to integrate with these dynamic systems, shifting focus from tool ownership to goal completion. Insights from tech leader Satya Nadella highlight the significance of user-centric development and the need for robust permission architectures as AI becomes central in operational efficiency. The conversation also touches on the balancing act of privacy and collaboration in SaaS evolution.

Jun 5, 2025 • 1h 4min
The Week’s Wildest AI News (Ep. 478)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this June 4th episode of The Daily AI Show, the team covers a wide range of news across the AI ecosystem. From Windsurf losing Claude model access and new agentic tools like Runner H, to Character AI's expanding avatar features and Meta's aggressive AI ad push, the episode tracks developments in agent behavior, AI-powered content, cybernetic vision, and even an upcoming OpenAI biopic. It's episode 478, and the team is in full news mode.

Key Points Discussed
- Anthropic reportedly cut Claude model access to Windsurf shortly after rumors of an OpenAI acquisition. Windsurf claims they were given under 5 days' notice.
- Claude Code is gaining traction as a preferred agentic coding tool with real-time execution and safety layers, powered by Claude Opus.
- Character AI rolls out avatar FX and scripted scenes. These immersive features let users share personalized, multimedia conversations.
- Epic Games tested AI-powered NPCs in Fortnite using a Darth Vader character. Players quickly got it to swear, forcing a rollback.
- Sakana AI revealed the Darwin Gödel Machine, an evolutionary, self-modifying agent designed to improve itself over time.
- Manus now supports full video generation, adding to its agentic creative toolset.
- Meta announced that by 2026, AI will generate nearly all of its ads, skipping transparency requirements common elsewhere.
- Claude Explains launched as an Anthropic blog section written by Claude and edited by humans.
- TikTok now offers AI-powered ad generation tools, giving businesses tailored suggestions based on audience and keywords.
- Karl demoed Runner H, a new agent with virtual machine capabilities. Unlike tools like GenSpark, it simulates user behavior to navigate the web and apps.
- MCP (Model Context Protocol) integrations for Claude now support direct app access via tools like Zapier, expanding automation potential.
- WebBench, a new benchmark for browser agents, tests read and write tasks across thousands of sites. Claude Sonnet leads the current leaderboard.
- Discussion of Marc Andreessen's comments about embodied AI and robot manufacturing reshaping U.S. industry.
- OpenAI announced memory features coming to free users and a biopic titled "Artificial" centered on the 2023 boardroom drama.
- Tokyo University of Science created a self-powered artificial synapse with near-human color vision, a step toward low-power computer vision and potential cybernetic applications.
- Palantir's government contracts for AI tracking raised concerns about overreach and surveillance.
- Debate surfaced over a proposed U.S. bill giving AI companies 10 years of no regulation, prompting criticism from both sides of the political aisle.

Timestamps & Topics
00:00:00 📰 News intro and Windsurf vs Anthropic
00:05:40 💻 Claude Code vs Cursor and Windsurf
00:10:05 🎭 Character AI launches avatar FX and scripted scenes
00:14:22 🎮 Fortnite tests AI NPCs with Darth Vader
00:17:30 🧬 Sakana AI's Darwin Gödel Machine explained
00:21:10 🎥 Manus adds video generation
00:23:30 📢 Meta to generate most ads with AI by 2026
00:26:00 📚 Claude Explains launches
00:28:40 📱 TikTok AI ad tools announced
00:32:12 🤖 Runner H demo: a live agent test
00:41:45 🔌 Claude integrations via Zapier and MCP
00:45:10 🌐 WebBench launched to test browser agents
00:50:40 🏭 Andreessen predicts U.S. robot manufacturing
00:53:30 🧠 OpenAI memory feature for free users
00:54:44 🎬 Sam Altman biopic "Artificial" in production
00:58:13 🔋 Self-powered synapse mimics human color vision
01:02:00 🛑 Palantir and surveillance risks
01:04:30 🧾 U.S. bill proposes 10-year AI regulation freeze
01:07:45 📅 Show wrap, aftershow, and upcoming events

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 3, 2025 • 57min
Mary Meeker’s Q2 AI Report: The Data Behind the Hype (Ep. 477)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this episode of The Daily AI Show, the team unpacks Mary Meeker's return with a 305-page report on the state of AI in 2025. They walk through key data points, adoption stats, and bold claims about where things are heading, especially in education, job markets, infrastructure, and AI agents. The conversation focuses on how fast everything is moving and what that pace means for companies, schools, and society at large.

Key Points Discussed
- Mary Meeker, once called the queen of the internet, returns with a dense AI report positioning AI as the new foundational infrastructure.
- The report stresses speed over caution, praising OpenAI's decision to launch imperfect tools and scale fast.
- Adoption is already massive: 10,000 Kaiser doctors use AI scribes, 27% of SF ride-hails are autonomous, and FDA approvals for AI medical devices have jumped.
- Developers lead the charge, with 63% using AI in 2025, up from 44% in 2024.
- Google processes 480 trillion tokens monthly, 15x Microsoft, underscoring massive infrastructure demand.
- The panel debated AI in education, with Brian highlighting AI's potential for equity and Beth emphasizing the risks of shortchanging the learning process.
- Mary's optimistic take contrasts with media fears, downplaying cheating concerns in favor of learning transformation.
- The team discussed how AI might disrupt work identity and purpose, especially in jobs like teaching or creative fields.
- Jyunmi pointed out that while everything looks "up and to the right," the report mainly reflects the present, not forward-looking agent trends.
- Karl noted the report skips over key trends like multi-agent orchestration, copyright, and audio/video advances.
- The group appreciated the data-rich visuals in the report and saw it as a catch-up tool for lagging orgs, not a future roadmap.
- Mary's "Three Horizons" framework suggests short-term integration, mid-term product shifts, and long-term AGI bets.
- The report ends with a call for U.S. immigration policy that welcomes global AI talent, warning against isolationism.

Timestamps & Topics
00:00:00 📊 Introduction to Mary Meeker's AI report
00:05:31 📈 Hard adoption numbers and real-world use
00:10:22 🚀 Speed vs caution in AI deployment
00:13:46 🎓 AI in education: optimism and concerns
00:26:04 🧠 Equity and access in future education
00:30:29 💼 Job market and developer adoption
00:36:09 📅 Predictions for 2030 and 2035
00:40:42 🎧 Audio and robotics advances missing in report
00:43:07 🧭 Three Horizons: short, mid, and long term strategy
00:46:57 🦾 Rise of agents and transition from messaging to action
00:50:16 📉 Limitations of the report: agents, governance, video
00:54:20 🧬 Immigration, innovation, and U.S. AI leadership
00:56:11 📅 Final thoughts and community reminder

Hashtags
#MaryMeeker #AI2025 #AIReport #AITrends #AIinEducation #AIInfrastructure #AIJobs #AIImmigration #DailyAIShow #AIstrategy #AIadoption #AgentEconomy

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

Jun 2, 2025 • 59min
Eat, prAI, Love & Searching for Meaning (Ep. 476)
Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

The DAS crew explores how AI is reshaping our sense of meaning, identity, and community. Instead of focusing on tools or features, the conversation takes a personal and societal look at how AI could disrupt the places people find purpose, like work, art, and spirituality, and what it might mean if machines start to simulate the experiences that once made us feel human.

Key Points Discussed
- Beth opens with a reflection on how AI may disrupt not just jobs, but our sense of belonging and meaning in doing them.
- The team discusses the concept of "third spaces" like churches, workplaces, and community groups where people traditionally found identity.
- Andy draws parallels between historical sources of meaning (family, religion, and work) and how AI could displace or reshape them.
- Karl shares a clip from Simon Sinek and reflects on how modern work has absorbed roles like therapy, social life, and identity.
- Jyunmi points out how AI could either weaken or support these third spaces depending on how it is used.
- The group reflects on how the loss of identity tied to careers, as experienced by athletes or artists, mirrors what AI may cause for knowledge workers.
- Beth notes that AI is both creating disruption and offering new ways to respond to it, raising the question of whether we are choosing this future or being pushed into it.
- The idea of AI as a spiritual guide or source of community comes up as more tools mimic companionship and reflection.
- Andy warns that AI cannot give back the way humans do, and meaning is ultimately created through giving and connection.
- Jyunmi emphasizes the importance of being proactive in defining how AI will be allowed to shape our personal and communal lives.
- The hosts close with thoughts on responsibility, alignment, and the human need for contribution and connection in a world where AI does more.

Timestamps & Topics
00:00:00 🧠 Opening thoughts on purpose and AI disruption
00:03:01 🤖 Meaning from mastery vs. meaning from speed
00:06:00 🏛️ Work, family, and faith as traditional anchors
00:09:00 🌀 AI as both chaos and potential spiritual support
00:13:00 💬 The need for "third spaces" in modern life
00:17:00 📺 Simon Sinek clip on workplace expectations
00:20:00 ⚙️ Work identity vs. self identity
00:26:00 🎨 Artists and athletes losing core identity
00:30:00 🧭 Proactive vs. reactive paths with AI
00:34:00 🧱 Community fraying and loneliness
00:40:00 🧘 Can AI replace safe spaces and human support?
00:46:00 📍 Personalization vs. offloading responsibility
00:50:00 🫧 Beth's bubble metaphor and social fabric
00:55:00 🌱 Final thoughts on contribution and design

#AIandMeaning #IdentityCrisis #AICommunity #ThirdSpace #SpiritualAI #WorkplaceChange #HumanConnection #DailyAIShow #AIphilosophy #AIEthics

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh

May 31, 2025 • 17min
AI-Powered Cultural Restoration Conundrum
AI is quickly moving past simple art reproduction. In the coming years, it will be able to reconstruct destroyed murals, restore ancient sculptures, and even generate convincing new works in the style of long-lost masters. These reconstructions will not just be based on guesswork but on deep analysis of archives, photos, data, and creative pattern recognition that is hard for any human team to match.

Communities whose heritage was erased or stolen will have the chance to "recover" artifacts or artworks they never physically had, but could plausibly claim. Museums will display lost treasures rebuilt in rich detail, bridging myth and history. There may even be versions of heritage that fill in missing chapters with AI-generated possibilities, giving families, artists, and nations a way to shape the past as well as the future.

But when the boundary between authentic recovery and creative invention gets blurry, what happens to the idea of truth in cultural memory? If AI lets us repair old wounds by inventing what might have been, does that empower those who lost their history, or risk building a world where memory, legacy, and even identity are open to endless revision?

The conundrum
If near-future AI lets us restore or even invent lost cultural treasures, giving every community a richer version of its own story, are we finally addressing old injustices or quietly creating a world where the line between real and imagined is impossible to hold? When does healing history cross into rewriting it, and who decides what belongs in the record?

This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM's audio overview to create the conversation you are hearing. We make no claims about the validity of the information provided and see this as an experiment in deep discussions fully generated by AI.


