Crazy Wisdom

Stewart Alsop
Aug 25, 2025 • 49min

Episode #483: The Limits of Logic: Probabilistic Minds in a Messy World

In this episode of Crazy Wisdom, Stewart Alsop sits down with Derek Osgood, CEO of DoubleO.ai, to talk about the challenges and opportunities of building with AI agents. The conversation ranges from the shift from deterministic to probabilistic processes, to how humans and LLMs think differently, to why lateral thinking, humor, and creative downtime matter for true intelligence. They also explore the future of knowledge work, the role of context engineering and memory in making agents useful, and the culture of talent, credentials, and hidden gems in Silicon Valley. You can check out Derek's work at doubleo.ai or connect with him on LinkedIn.

Check out this GPT we trained on the conversation

Timestamps

00:00 Derek Osgood explains what AI agents are, the challenge of reliability and repeatability, and the difference between chat-based and process-based agents.
05:00 Conversation shifts to probabilistic vs deterministic systems, with examples of agents handling messy data like LinkedIn profiles.
10:00 Stewart Alsop and Derek discuss how humans reason compared to LLMs, token vs word prediction, and how language shapes action.
15:00 They question whether chat interfaces are the right UX for AI, weighing structure, consistency, and the persistence of buttons in knowledge work.
20:00 Voice interaction comes up, its sci-fi allure, and why unstructured speech makes it hard without stronger memory and higher-level reasoning.
25:00 Derek unpacks OpenAI's approach to memory as active context retrieval, context engineering, and why vector databases aren't the full answer.
30:00 They examine talent wars in AI, credentialism, signaling, and the difference between PhD-level model work and product design for agents.
35:00 Leisure and creativity surface, linking downtime, fantasy, and imagination to better lateral thinking in knowledge work.
40:00 Discussion of asynchronous AI reasoning, longer time horizons, and why extending "thinking time" could change agent behavior.
45:00 Derek shares how Double O orchestrates knowledge work with natural language workflows, making agents act like teammates.
50:00 They close with reflections on re-skilling, learning to work with LLMs, BS detection, and the future of critical thinking with AI.

Key Insights

One of the biggest challenges in building AI agents is not just creating them but ensuring their reliability, accuracy, and repeatability. It's easy to build a demo, but the "last mile" of making an agent perform consistently in the messy, unstructured real world is where the hard problems live.

The shift from deterministic software to probabilistic agents reflects the complexity of real-world data and processes. Deterministic systems work only when inputs and outputs are cleanly defined, whereas agents can handle ambiguity, search for missing context, and adapt to different forms of information.

Humans and LLMs share similarities in reasoning (both operate like predictive engines), but the difference lies in agency and lateral thinking. Humans can proactively choose what to do without direction and make wild connections across unrelated experiences, something current LLMs still struggle to replicate.

Chat interfaces may not be the long-term solution for interacting with AI. While chat offers flexibility, it is too unstructured for many use cases. Derek argues for a hybrid model where structured UI/UX supports repeatable workflows, while chat remains useful as one tool within a broader system.

Voice interaction carries promise but faces obstacles. The unstructured nature of spoken input makes it difficult for agents to act reliably without stronger memory, better context retrieval, and a more abstract understanding of goals. True voice-first systems may require progress toward AGI.

Much of the magic in AI comes not from the models themselves but from context engineering. Effective systems don't just rely on vector databases and embeddings; they combine full context, partial context, and memory retrieval to create a more holistic understanding of user goals and history.

Beyond the technical, the episode highlights cultural themes: credentialism, hidden talent, and the role of leisure in creativity. Derek critiques Silicon Valley's obsession with credentials and signaling, noting that true innovation often comes from hidden gem hires and from giving the brain downtime to make unexpected lateral connections that drive creative breakthroughs.
Aug 22, 2025 • 58min

Episode #482: When Complexity Kills Meaning and Creativity Fights Back

In this episode of Crazy Wisdom, Stewart Alsop speaks with Juan Verhook, founder of Tender Market, about how AI reshapes creativity, work, and society. They explore the risks of AI-generated slop versus authentic expression, the tension between probability and uniqueness, and why the complexity dilemma makes human-in-the-loop design essential. Juan connects bureaucracy to proto-AI, questions the incentives driving black-box models, and considers how scaling laws shape emergent intelligence. The conversation balances skepticism with curiosity, reflecting on authenticity, creativity, and the economic realities of building in an AI-driven world. You can learn more about Juan Verhook's work or connect with him directly through his LinkedIn or via his website at tendermarket.eu.

Check out this GPT we trained on the conversation

Timestamps

00:00 – Stewart and Juan open by contrasting AI slop with authentic creative work.
05:00 – Discussion of probability versus uniqueness and what makes output meaningful.
10:00 – The complexity dilemma emerges, as systems grow opaque and fragile.
15:00 – Why human-in-the-loop remains central to trustworthy AI.
20:00 – Juan draws parallels between bureaucracy and proto-AI structures.
25:00 – Exploration of black-box models and the limits of explainability.
30:00 – The role of economic incentives in shaping AI development.
35:00 – Reflections on nature versus nurture in intelligence, human and machine.
40:00 – How scaling laws drive emergent behavior, but not always understanding.
45:00 – Weighing authenticity and creativity against automation's pull.
50:00 – Closing thoughts on optimism versus pessimism in the future of work.

Key Insights

AI slop versus authenticity – Juan emphasizes that much of today's AI output tends toward "slop," a kind of lowest-common-denominator content driven by probability. The challenge, he argues, is not just generating more information but protecting uniqueness and cultivating authenticity in an age where machines are optimized for averages.

The complexity dilemma – As AI systems grow in scale, they become harder to understand, explain, and control. Juan frames this as a "complexity dilemma": every increase in capability carries a parallel increase in opacity, leaving us to navigate trade-offs between power and transparency.

Human-in-the-loop as necessity – Instead of replacing people, AI works best when embedded in systems where humans provide judgment, context, and ethical grounding. Juan sees human-in-the-loop design not as a stopgap, but as the foundation for trustworthy AI use.

Bureaucracy as proto-AI – Juan provocatively links bureaucracy to early forms of artificial intelligence. Both are systems that process information, enforce rules, and reduce individuality into standardized outputs. This analogy helps highlight the social risks of AI if left unexamined: efficiency at the cost of humanity.

Economic incentives drive design – The trajectory of AI is not determined by technical possibility alone but by the economic structures funding it. Black-box models dominate because they are profitable, not because they are inherently better for society. Incentives, not ideals, shape which technologies win.

Nature, nurture, and machine intelligence – Juan extends the age-old debate about human intelligence into the AI domain, asking whether machine learning is more shaped by architecture (nature) or training data (nurture). This reflection surfaces the uncertainty of what "intelligence" even means when applied to artificial systems.

Optimism and pessimism in balance – While AI carries risks of homogenization and loss of meaning, Juan maintains a cautiously optimistic view. By prioritizing creativity, human agency, and economic models aligned with authenticity, he sees pathways where AI amplifies rather than diminishes human potential.
Aug 18, 2025 • 58min

Episode #481: From Rothschilds to Robinhood: Cycles of Finance and Control

In this engaging conversation, Michael Jagdeo, founder of Exponent Labs and The Syndicate, dives into the cyclical nature of finance and power, exploring financial history from the Rothschilds to modern trends. He discusses the impact of AI on both markets and society, the balance of collectivism versus individualism, and the rise of exponential organizations. Jagdeo shares unique recruiting insights and book recommendations that reflect on the interplay between technology and human behavior, all while examining how historical narratives shape our current dynamics.
Aug 15, 2025 • 1h 30min

Episode #480: The Patchwork Age and Why AI Can’t Grasp the Human Story

In this engaging discussion, Paul Spencer, a writer at Zeitville Media, delves into the unique crossroads of AI and astrology. He argues that while AI can process data, it falls short in grasping human narratives shaped by mortality and embodiment. Spencer contrasts the solar punk and cyberpunk visions, emphasizing collaboration amid rapid change. They also discuss the cultural shifts since 2020 and explore America's evolving identity, using raw milk symbolism to reflect deeper ideological divides. It's a thought-provoking journey into technology, culture, and human experience.
Aug 11, 2025 • 1h 16min

Episode #479: From Bitcoin to Birdsong: Building Trust in a World of Fakes

Discover cutting-edge technologies aimed at ensuring authenticity in our digital age, tackling deepfakes with blockchain and proof of liveness systems. Explore the fascinating interplay between advanced cryptography and AI, as well as its implications for reality and misinformation. Delve into conservation initiatives that use AI to analyze birdsong for wildlife monitoring. Reflect on the future of technology, ecology, and the role of robotics in sustainable agriculture, all while navigating the complexities of trust in an increasingly digital world.
Aug 8, 2025 • 50min

Episode #478: Beyond Encyclopedias: Teaching History for the AI Era

Zachary Cote, Executive Director of Thinking Nation, champions critical thinking in history education. He elaborates on how memory shapes understanding and the ethics of curating historical narratives in a world of 'alternative facts.' The conversation highlights the importance of intellectual humility, advocating for a shift from memorization to inquiry. Cote warns about the misuse of AI in education, discussing its potential to diminish students' questioning skills. He encourages embracing diverse perspectives for richer historical understanding.
Aug 4, 2025 • 55min

Episode #477: Why Curiosity Isn’t Just a Virtue—It’s Our Oldest Technology

Edouard Machery, a Distinguished Professor at the University of Pittsburgh, dives into the intriguing roots of curiosity and question-asking. He explores how ancient Sumerian writing shaped societal norms and the evolution of curiosity from a vice to a celebrated virtue during the Renaissance. The discussion covers the cross-cultural perceptions of AI and how curiosity distinguishes humans from other species. Insightful links between early scientific practices and philosophical inquiry further illuminate our unique drive to ask 'why' and seek understanding in an ever-changing world.
Aug 1, 2025 • 1h 10min

Episode #476: More Than Magic: Astrology as the Oldest Data Science

C.T. Lucero, an astrologer and researcher specializing in ancient astrology, joins to discuss the fascinating intersections of astrology, science, and mysticism. They delve into the historical roots of astrology from Hellenistic Greece to Persian advancements. Lucero reveals how AI is shaping contemporary astrological practices and debates Western versus Vedic astrology. They also explore the significance of the 2020 Saturn-Jupiter conjunction and its impact on societal events, offering a fresh perspective on understanding time cycles and astrological influences.
Jul 28, 2025 • 48min

Episode #475: The Illusion We Opt Into: VR, AI, and the Fractals of Reality

Ryan Estes, a Buddhist entrepreneur and host of AIforFounders, discusses the fascinating intersections of AI and ancient philosophy. He explores the evolution of communication and consciousness, stressing how technologies impact our realities. A deep dive into the illusion of VR reveals parallels to Buddhist insights. Ryan also tackles themes of data ownership and the tension between scientism and spirituality. The conversation touches on historical knowledge suppression and the fragility of democracy, prompting thoughts on the future structures of power in a tech-driven world.
Jul 25, 2025 • 58min

Episode #474: Truth Beams and Chaotic Solutions: Building Decentralized Futures

Join Cathal O’Broin, leader of the PoliePals, as he dives into a world where humans, nature, and machines are interconnected through groundbreaking tech. He discusses the fascinating concepts of neural and cryptographic projector-camera systems, known as the 'truth beam,' that redefine data authenticity. The conversation also highlights innovative tinkering, the shifting landscape of decentralized systems, and the vital role of collaboration in creative problem-solving. Get ready for a journey through art, technology, and the future of decentralized communication!
