

Crazy Wisdom
Stewart Alsop
In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.
Episodes

Sep 15, 2025 • 1h 5min
Episode #489: The Music Maker’s Stack: From Spotify to On-Chain Revenue
On this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Sweetman, the developer behind on-chain music and co-founder of Recoup. We talk about how musicians in 2025 are coining their content on Base and Zora, earning through Farcaster collectibles, Sound drops, and live shows, while AI agents are reshaping management, discovery, and creative workflows across music and art. The conversation also stretches into Spotify’s AI push, the “dead internet theory,” synthetic hierarchies, and how creators can avoid future shock by experimenting with new tools. You can follow Sweetman on Twitter, Farcaster, Instagram, and try Recoup at chat.recoupable.com.

Check out this GPT we trained on the conversation

Timestamps

00:00 Stewart Alsop introduces Sweetman to talk about on-chain music in 2025.
05:00 Coins, Base, Zora, Farcaster, collectibles, Sound, and live shows emerge as key revenue streams for musicians.
10:00 Streaming shifts into marketing while AI music quietly fills shops and feeds, sparking talk of the dead internet theory.
15:00 Sweetman ties IoT growth and shrinking human birthrates to synthetic consumption, urging builders to plug into AI agents.
20:00 Conversation turns to synthetic hierarchies, biological analogies, and defining what an AI agent truly is.
25:00 Sweetman demos Recoup: model switching with Vercel AI SDK, Spotify API integration, and building artist knowledge bases.
30:00 Tool chains, knowledge storage on Base and Arweave, and expanding into YouTube and TikTok management for labels.
35:00 AI elements streamline UI; Sam Altman’s philosophy on building with evolving models sparks a strategy discussion.
40:00 Stewart reflects on the return of Renaissance humans, orchestration of machine intelligence, and prediction markets.
45:00 Sweetman weighs orchestration trade-offs, cost of Claude vs GPT-5, and boutique services over winner-take-all markets.
50:00 Parasocial relationships with models, GPT psychosis, and the emotional shock of AI’s rapid changes.
55:00 Future shock explored through Sweetman’s reaction to Cursor, ending with resilience and leaning into experimentation.

Key Insights

On-chain music monetization is diversifying. Sweetman describes how musicians in 2025 use coins, collectibles, and platforms like Base, Zora, Farcaster, and Sound to directly earn from their audiences. Streaming has become more about visibility and marketing, while real revenue comes from tokenized content, auctions, and live shows.

AI agents are replacing traditional managers. By consuming data from APIs like Spotify, Instagram, and TikTok, agents can segment audiences, recommend collaborations, and plan tours. What once cost thousands in management fees is now automated, providing musicians with powerful tools at a fraction of the price.

Platforms are moving to replace artists. Spotify and other major players are experimenting with AI-generated music, effectively cutting human musicians further out of the revenue loop. This shift reinforces the importance of artists leaning into blockchain monetization and building direct relationships with fans.

The “dead internet theory” reframes the future. Sweetman connects IoT expansion and declining birth rates to a world where AI, not humans, will make most online purchases and produce most content. The lesson: build products that are easy for AI agents to buy, consume, and amplify, since they may soon outnumber human users.

Synthetic hierarchies mirror biological ones. Stewart introduces the idea that just as cells operate autonomously within the body, billions of AI agents will increasingly act as intermediaries in human creativity and commerce. This frames AI as part of a broader continuity of hierarchical systems in nature and society.

Recoup showcases orchestration in practice. Sweetman explains how Recoup integrates Vercel AI SDK, Spotify APIs, and multi-model tool chains to build knowledge bases for artists. By storing profiles on Base and Arweave, Recoup not only manages social media but also automates content optimization, giving musicians leverage once reserved for labels.

Future shock is both risk and opportunity. Sweetman shares his initial rejection of AI coding tools as a threat to his identity, only to later embrace them as collaborators. The conversation closes with a call for resilience: experiment with new systems, adapt quickly, and avoid becoming a Luddite in an accelerating digital age.

Sep 12, 2025 • 1h
Episode #488: Responsibility as Freedom, Belonging as Wealth
In this episode of Crazy Wisdom, host Stewart Alsop sits down with Hannah Aline Taylor to explore themes of personal responsibility, freedom, and interdependence through her frameworks like the Village Principles, Distribution Consciousness, and the Empowerment Triangle. Their conversation moves through language and paradox, equanimity, desire and identity, forgiveness, leadership, money and debt, and the ways community and relationship serve as our deepest resources. Hannah shares stories from her life in Nevada City, her perspective on abundance and belonging, and her practice of love and curiosity as tools for living in alignment. You can learn more about her work at loving.university, on her website hannahalinetaylor.com, and in her book The Way of Devotion, available on Amazon.

Check out this GPT we trained on the conversation

Timestamps

00:00 Stewart Alsop welcomes Hannah Aline Taylor, introducing Loving University, Nevada City, and the Village Principles.
05:00 They talk about equanimity versus non-duality, emotional mastery, and curating experience through boundaries and high standards.
10:00 The focus shifts to desire as “who do I want to be,” identity as abstraction, and relationships beyond monogamy or labels.
15:00 Hannah introduces the Empowerment Triangle of anything, everything, nothing, reflecting on reality as it is and the role of perception.
20:00 Discussion of Nevada City’s healing energy, community respect, curiosity, and differences between East Coast judgment and West Coast freedom.
25:00 Responsibility as true freedom, rebellion under tyranny, delicate ecosystems, and leadership inspired by the Dao De Jing.
30:00 Love and entropy, conflict without enmity, curiosity as practice, and attention as the prerequisite for experience.
35:00 Forgiveness, discernment, moral debts, economic debt, and reframing wealth consciousness through the “princess card.”
40:00 Interdependence, community belonging, relationship as the real resource, and stewarding abundance in a disconnected world.
45:00 Building, frontiers, wisdom of indigenous stewardship, the Amazon rainforest, and how knowledge without wisdom creates loss.
50:00 Closing reflections on wholeness, abundance, scarcity, relationship technology, and prioritizing humanity in transition.

Key Insights

Hannah Taylor introduces the Village Principles as a framework for living in “distribution consciousness” rather than “acquisition consciousness.” Instead of chasing community, she emphasizes taking responsibility for one’s own energy, time, and attention, which naturally draws people into authentic connection.

A central theme is personal responsibility as the true meaning of freedom. For Hannah, freedom is inseparable from responsibility—when it’s confused with rebellion against control, it remains tied to tyranny. Real freedom comes from holding high standards for one’s life, curating experiences, and owning one’s role in every situation.

Desire is reframed from the shallow “what do I want” into the deeper question of “who do I want to be.” This shift moves attention away from consumer-driven longing toward identity, integrity, and presence, turning desire into a compass for embodied living rather than a cycle of lack.

Language, abstraction, and identity are questioned as both necessary tools and limiting frames. Distinction is what fuels connection—without difference, there can be no relationship. Yet when we cling to abstractions like “monogamy” or “polyamory,” we obscure the uniqueness of each relationship in favor of labels.

Hannah contrasts the disempowerment triangle of victim, perpetrator, and rescuer with her empowerment triangle of anything, everything, and nothing. This model shows reality as inherently whole—everything arises from nothing, anything is possible, and suffering begins when we believe something is wrong.

The conversation ties money, credit, and debt to spiritual and moral frameworks. Hannah reframes debt not as a burden but as evidence of trust and abundance, describing her credit card as a “princess card” that affirms belonging and access. Wealth consciousness, she says, is about recognizing the resources already present.

Interdependence emerges as the heart of her teaching. Relationship is the true resource, and abundance is squandered when lived independently. Stories of Nevada City, the Amazon rainforest, and even a friend’s Wi-Fi outage illustrate how scarcity reveals the necessity of belonging, curiosity, and shared stewardship of both community and land.

Sep 8, 2025 • 49min
Episode #487: Stablecoins as Weapons, Bitcoin as Escape: A Conversation on Money and Control
On this episode of Crazy Wisdom, Stewart Alsop sits down with Abhimanyu Dayal, a longtime Bitcoin advocate and AI practitioner, to explore how money, identity, and power are shifting in a world of deepfakes, surveillance, automation, and geopolitical realignment. The conversation ranges from why self-custody of Bitcoin matters more than ETFs, to the dangers of probabilistic biometrics and face-swap apps, to the coming impact of AGI on labor markets and the role of universal basic income. They also touch on India’s refinery economy, its balancing act between Russia, China, and the U.S., and how soft power is eroding in the information age. For more from Abhimanyu, connect with him on LinkedIn.

Check out this GPT we trained on the conversation

Timestamps

00:00 Stewart Alsop opens with Abhimanyu Dayal on crypto, AI, and the risks of probabilistic biometrics like facial recognition and voice spoofing.
05:00 They critique biometric surveillance, face-swap apps, and data exploitation through casual consent.
10:00 The talk shifts to QR code treasure hunts, vibe coding on Replit and Claude, and using quizzes to mint NFTs.
15:00 Abhimanyu shares his finance background, tying it to Bitcoin as people’s money, agent-to-agent payments, and post-AGI labor shifts.
20:00 They discuss universal basic income, libertarian ideals, Hayek’s view of economics as critique, and how AI prediction changes policy.
25:00 Pressure, unpredictability, AR glasses, quantum computing, and the surveillance state future come into focus.
30:00 Open source vs closed apps, China’s DeepSeek models, propaganda through AI, and U.S.–China tensions are explored.
35:00 India’s non-alignment, Soviet alliance in 1971, oil refining economy, and U.S.–India friction surface.
40:00 They reflect on colonial history, East India Company, wealth drain, opium wars, and America’s rise on Indian capital.
45:00 The conversation closes on Bitcoin’s role as reserve asset, stablecoins as U.S. leverage, BRICS disunity, and the geopolitics of freedom.

Key Insights

A central theme of the conversation is the contrast between deterministic and probabilistic systems for identity and security. Abhimanyu Dayal stresses that passwords and private keys—things only you can know—are inherently more secure than facial recognition or voice scans, which can be spoofed through deepfakes, 3D prints, or AI reconstructions. In his view, biometric data should never be stored because it represents a permanent risk once leaked.

The rise of face-swap apps and casual facial data sharing illustrates how surveillance and exploitation have crept into everyday life. Abhimanyu points out that companies already use online images to adjust things like insurance premiums, proving how small pieces of biometric consent can spiral into systemic manipulation. This isn’t a hypothetical future—it is already happening in hidden ways.

On the lighter side, they experiment with “vibe coding,” using tools like Replit and Claude to design interactive experiences such as a treasure hunt via QR codes and NFTs. This playful example underscores a broader point: lightweight coding and AI platforms empower individuals to create experiments without relying on centralized or closed systems that might inject malware or capture data.

The discussion expands into automation, multi-agent systems, and the post-AGI economy. Abhimanyu suggests that artificial superintelligence will require machine-to-machine transactions, making Bitcoin an essential tool. But if machines do the bulk of labor, universal basic income may become unavoidable, even if it drifts toward collectivist structures libertarians dislike.

A key shift identified is the transformation of economics itself. Where Hayek once argued economics should critique politicians because of limited data, AI and quantum computing now provide prediction capabilities so granular that human behavior is forecastable at the individual level. This erodes the pseudoscientific nature of past economics and creates a new landscape of policy and control.

Geopolitically, the episode explores India’s rise, its reliance on refining Russian crude into petroleum exports, and its effort to stay unaligned between the U.S., Russia, and China. The conversation recalls India’s Soviet ties during the 1971 war, while noting how today’s energy and trade policies underpin domestic improvements for India’s poor and middle class.

Finally, they critique the co-optation of Bitcoin through ETFs and institutional custody. While investors celebrate, Abhimanyu argues this betrays Satoshi’s vision of money controlled by individuals with private keys. He warns that Bitcoin may be absorbed into central bank reserves, while stablecoins extend U.S. monetary dominance by reinforcing dollar power rather than replacing it.

Sep 5, 2025 • 1h 1min
Episode #486: Sovereignty by Markets: How Futarchy Turns Bets into Decisions
In this episode of Crazy Wisdom, host Stewart Alsop speaks with Robin Hanson, economist and originator of the idea of futarchy, about how conditional betting markets might transform governance by tying decisions to measurable outcomes. Their conversation moves through examples of organizational incentives in business and government, the balance between elegant theories and messy implementation details, the role of AI in robust institutions, and the tension between complexity and simplicity in legal and political systems. Hanson highlights historical experiments with futarchy, reflects on polarization and collective behavior in times of peace versus crisis, and underscores how ossified bureaucracies mirror software rot. To learn more about his work, you can find Robin Hanson online simply by searching his name or his blog overcomingbias.com, where his interviews—including one with Jeffrey Wernick on early applications of futarchy—are available.

Check out this GPT we trained on the conversation

Timestamps

00:05 Hanson explains futarchy as conditional betting markets that tie governance to measurable outcome metrics, contrasting elegant ideas with messy implementation details.
00:10 He describes early experiments, including Jeffrey Wernick’s company in the 1980s, and more recent trials in crypto and an India-based agency.
00:15 The conversation shifts to how companies use stock prices as feedback, comparing public firms tied to speculators with private equity and long-term incentives.
00:20 Alsop connects futarchy to corporate governance and history, while Hanson explains how futarchy can act as a veto system against executive self-interest.
00:25 They discuss conditional political markets in elections, AI participation in institutions, and why proof of human is unnecessary for robust systems.
00:30 Hanson reflects on simplicity versus complexity in democracy and legal systems, noting how futarchy faces similar design trade-offs.
00:35 He introduces veto markets and outcome metrics, adding nuance to how futarchy could constrain executives while allowing discretion.
00:40 The focus turns to implementation in organizations, outcome-based OKRs, and trade-offs between openness, liquidity, and transparency.
00:45 They explore DAOs, crypto governance, and the need for focus, then compare news-driven attention with deeper institutional design.
00:50 Hanson contrasts novelty with timelessness in academia and policy, explaining how futarchy could break the pattern of weak governance.
00:55 The discussion closes on bureaucratic inertia, software rot, and how government ossifies compared to adaptive private organizations.

Key Insights

Futarchy proposes that governance can be improved by tying decisions directly to measurable outcome metrics, using conditional betting markets to reveal which policies are expected to achieve agreed goals. This turns speculation into structured decision advice, offering a way to make institutions more competent and accountable.

Early experiments with futarchy existed decades ago, including Jeffrey Wernick’s 1980s company that made hiring and product decisions using prediction markets, as well as more recent trials in crypto-based DAOs and a quiet adoption by a government agency in India. These examples show that the idea, while radical, is not just theoretical.

A central problem in governance is the tension between elegant ideas and messy implementation. Hanson emphasizes that while the core concept of futarchy is simple, real-world use requires addressing veto powers, executive discretion, and complex outcome metrics. The evolution of institutions involves finding workable compromises without losing the simplicity of the original vision.

The conversation highlights how existing governance in corporations mirrors these challenges. Public firms rely heavily on speculators and short-term stock incentives, while private equity benefits from long-term executive stakes. Futarchy could offer companies a new tool, giving executives market-based feedback on major decisions before they act.

Institutions must be robust not just to human diversity but also to AI participation. Hanson argues that markets, unlike one-person-one-vote systems, can accommodate AI traders without needing proof of human identity. Designing systems to be indifferent to whether participants are human or machine strengthens long-term resilience.

Complexity versus simplicity emerges as a theme, with Hanson noting that democracy and legal systems began with simple structures but accreted layers of rules that now demand lawyers to navigate. Futarchy faces the same trade-off: it starts simple, but real implementation requires added detail, and the balance between elegance and robustness becomes crucial.

Finally, the episode situates futarchy within broader social trends. Hanson connects rising polarization and inequality to times of peace and prosperity, contrasting this with the unifying effect of external threats. He also critiques bureaucratic inertia and “software rot” in government, arguing that without innovation in governance, even advanced societies risk ossification.
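The decision rule at the heart of futarchy is simple enough to sketch in a few lines. This is an illustrative toy, not anything from the episode: the function name, prices, and margin parameter are all hypothetical. It assumes two conditional prediction markets, one pricing the agreed outcome metric if a proposal is adopted and one if it is rejected, with trades in the branch not taken refunded so each price approximates the expected metric under that branch.

```python
def decide(price_if_adopted: float, price_if_rejected: float,
           margin: float = 0.0) -> str:
    """Futarchy-style rule (toy sketch): adopt a proposal only when the
    adopt-conditional market forecasts a higher outcome metric than the
    reject-conditional market, optionally by a safety margin."""
    if price_if_adopted > price_if_rejected + margin:
        return "adopt"
    return "reject"

# Hypothetical example: markets price a welfare index at 102.3 conditional
# on the policy passing and 101.1 conditional on it failing.
print(decide(102.3, 101.1))  # adopt
```

The margin parameter is one way a design might encode the "veto market" idea Hanson mentions: the market must show a clear expected improvement, not a coin flip, before the default (reject) is overridden.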

Sep 1, 2025 • 1h 8min
Episode #485: Bitcoin as Silent Revolution, AI as Accelerated Intelligence
On this episode of Crazy Wisdom, Stewart Alsop sits down with Brad Costanzo, founder and CEO of Accelerated Intelligence, for a wide-ranging conversation that stretches from personal development and the idea that “my mess is my message” to the risks of AI psychosis, the importance of cognitive armor, and Brad’s sovereign mind framework. They talk about education through the lens of the Trivium, the natural pull of elites and hierarchies, and how Bitcoin and stablecoins tie into the future of money, inflation, and technological deflation. Brad also shares his perspective on the synergy between AI and Bitcoin, the dangers of too-big-to-fail banks, and why decentralized banking may be the missing piece. To learn more about Brad’s work, visit acceleratedintelligence.ai or reach out directly at brad@acceleratedintelligence.ai.

Check out this GPT we trained on the conversation

Timestamps

00:00 Brad Costanzo joins Stewart Alsop, opening with “my mess is my message” and Accelerated Intelligence as a way to frame AI as accelerated, not artificial.
05:00 They explore AI as a tool for personal development, therapy versus coaching, and AI’s potential for self-insight and pattern recognition.
10:00 The conversation shifts to AI psychosis, hype cycles, gullibility, and the need for cognitive armor, leading into Brad’s sovereign mind framework of define, collaborate, and refine.
15:00 They discuss education through the Trivium—grammar, logic, rhetoric—contrasted with the Prussian mass education model designed for factory workers.
20:00 The theme turns to elites, natural hierarchies, and the Robbers Cave experiment showing how quickly humans split into tribes.
25:00 Bitcoin enters as a silent, nonviolent revolution against centralized money, with Hayek’s quote on sound money and the Trojan horse of Wall Street adoption.
30:00 Stablecoins, treasuries, and the Treasury vs Fed dynamic highlight how monetary demand is being engineered through crypto markets.
35:00 Inflation, disinflation, and deflation surface, tied to real estate costs, millennials vs boomers, Austrian economics, and Jeff Booth’s “Price of Tomorrow.”
40:00 They connect Bitcoin and AI as deflationary forces, population decline, productivity gains, and the idea of a personal Bitcoin denominator.
45:00 The talk expands into Bitcoin mining, AI data centers, difficulty adjustments, and Richard Werner’s insights on quantitative easing, commercial banks, and speculative vs productive loans.
50:00 Wrapping themes center on decentralized banking, the dangers of too-big-to-fail, assets as protection, Bitcoin’s volatility, and why it remains the strongest play for long-term purchasing power.

Key Insights

One of the strongest insights Brad shares is the shift from artificial intelligence to accelerated intelligence. Instead of framing AI as something fake or external, he sees it as a leverage tool to amplify human intelligence—whether emotional, social, spiritual, or business-related. This reframing positions AI less as a threat to authenticity and more as a partner in unlocking dormant creativity.

Personal development surfaces through the mantra “my mess is my message.” Brad emphasizes that the struggles, mistakes, and rock-bottom moments in life can become the foundation for helping others. AI plays into this by offering low-cost access to self-insight, giving people the equivalent of a reflective mirror that can help them see patterns in their own thinking without immediately needing therapy.

The episode highlights the emerging problem of AI psychosis. People overly immersed in AI conversations, chatbots, or hype cycles can lose perspective. Brad and Stewart argue that cognitive armor—what Brad calls the “sovereign mind” framework of define, collaborate, and refine—is essential to avoid outsourcing one’s thinking entirely to machines.

Education is another theme, with Brad pointing to the classical Trivium—grammar, logic, and rhetoric—as the foundation of real learning. Instead of mass education modeled on the Prussian system for producing factory workers, he argues for rhetoric, debate, and critical thinking as the ultimate tests of knowledge, even in an AI-driven world.

When the discussion turns to elites, Brad acknowledges that hierarchies are natural and unavoidable, citing experiments like Robbers Cave. The real danger lies not in elitism itself, but in concentrated control—particularly financial elites who maintain power through the monetary system.

Bitcoin is framed as a “silent, nonviolent revolution.” Brad describes it as a Trojan horse—appearing as a speculative asset while quietly undermining government monopoly on money. Stablecoins, treasuries, and the Treasury vs Fed conflict further reveal how crypto is becoming a new driver of monetary demand.

Finally, the synergy between AI and Bitcoin offers a hopeful counterbalance to deflation fears and demographic decline. AI boosts productivity while Bitcoin enforces financial discipline. Together, they could stabilize a future where fewer people are needed for the same output, costs of living decrease, and savings in hard money protect purchasing power—even against the inertia of too-big-to-fail banks.

Aug 29, 2025 • 55min
Episode #484: Pirates, Black Swans, and Smart Contracts: Rethinking Insurance in DeFi
Juan Samitier, co-founder of DAMM Capital, specializes in decentralized insurance and on-chain finance. He discusses the risks of smart contracts and the importance of insurance in attracting institutional investors to crypto. Drawing parallels between historical maritime trade and modern finance, he explores black swan events and their implications for economic stability. The conversation also delves into how traditional finance is increasingly merging with DeFi, along with insights on stablecoins and the evolving landscape of asset management.

Aug 25, 2025 • 49min
Episode #483: The Limits of Logic: Probabilistic Minds in a Messy World
In this episode of Crazy Wisdom, Stewart Alsop sits down with Derek Osgood, CEO of DoubleO.ai, to talk about the challenges and opportunities of building with AI agents. The conversation ranges from the shift from deterministic to probabilistic processes, to how humans and LLMs think differently, to why lateral thinking, humor, and creative downtime matter for true intelligence. They also explore the future of knowledge work, the role of context engineering and memory in making agents useful, and the culture of talent, credentials, and hidden gems in Silicon Valley. You can check out Derek’s work at doubleo.ai or connect with him on LinkedIn.

Check out this GPT we trained on the conversation

Timestamps

00:00 Derek Osgood explains what AI agents are, the challenge of reliability and repeatability, and the difference between chat-based and process-based agents.
05:00 Conversation shifts to probabilistic vs deterministic systems, with examples of agents handling messy data like LinkedIn profiles.
10:00 Stewart Alsop and Derek discuss how humans reason compared to LLMs, token vs word prediction, and how language shapes action.
15:00 They question whether chat interfaces are the right UX for AI, weighing structure, consistency, and the persistence of buttons in knowledge work.
20:00 Voice interaction comes up, its sci-fi allure, and why unstructured speech makes it hard without stronger memory and higher-level reasoning.
25:00 Derek unpacks OpenAI’s approach to memory as active context retrieval, context engineering, and why vector databases aren’t the full answer.
30:00 They examine talent wars in AI, credentialism, signaling, and the difference between PhD-level model work and product design for agents.
35:00 Leisure and creativity surface, linking downtime, fantasy, and imagination to better lateral thinking in knowledge work.
40:00 Discussion of asynchronous AI reasoning, longer time horizons, and why extending “thinking time” could change agent behavior.
45:00 Derek shares how Double O orchestrates knowledge work with natural language workflows, making agents act like teammates.
50:00 They close with reflections on re-skilling, learning to work with LLMs, BS detection, and the future of critical thinking with AI.

Key Insights

One of the biggest challenges in building AI agents is not just creating them but ensuring their reliability, accuracy, and repeatability. It’s easy to build a demo, but the “last mile” of making an agent perform consistently in the messy, unstructured real world is where the hard problems live.

The shift from deterministic software to probabilistic agents reflects the complexity of real-world data and processes. Deterministic systems work only when inputs and outputs are cleanly defined, whereas agents can handle ambiguity, search for missing context, and adapt to different forms of information.

Humans and LLMs share similarities in reasoning—both operate like predictive engines—but the difference lies in agency and lateral thinking. Humans can proactively choose what to do without direction and make wild connections across unrelated experiences, something current LLMs still struggle to replicate.

Chat interfaces may not be the long-term solution for interacting with AI. While chat offers flexibility, it is too unstructured for many use cases. Derek argues for a hybrid model where structured UI/UX supports repeatable workflows, while chat remains useful as one tool within a broader system.

Voice interaction carries promise but faces obstacles. The unstructured nature of spoken input makes it difficult for agents to act reliably without stronger memory, better context retrieval, and a more abstract understanding of goals. True voice-first systems may require progress toward AGI.

Much of the magic in AI comes not from the models themselves but from context engineering. Effective systems don’t just rely on vector databases and embeddings—they combine full context, partial context, and memory retrieval to create a more holistic understanding of user goals and history.

Beyond the technical, the episode highlights cultural themes: credentialism, hidden talent, and the role of leisure in creativity. Derek critiques Silicon Valley’s obsession with credentials and signaling, noting that true innovation often comes from hidden gem hires and from giving the brain downtime to make unexpected lateral connections that drive creative breakthroughs.

Aug 22, 2025 • 58min
Episode #482: When Complexity Kills Meaning and Creativity Fights Back
In this episode of Crazy Wisdom, Stewart Alsop speaks with Juan Verhook, founder of Tender Market, about how AI reshapes creativity, work, and society. They explore the risks of AI-generated slop versus authentic expression, the tension between probability and uniqueness, and why the complexity dilemma makes human-in-the-loop design essential. Juan connects bureaucracy to proto-AI, questions the incentives driving black-box models, and considers how scaling laws shape emergent intelligence. The conversation balances skepticism with curiosity, reflecting on authenticity, creativity, and the economic realities of building in an AI-driven world. You can learn more about Juan Verhook’s work or connect with him directly through his LinkedIn or via his website at tendermarket.eu.

Check out this GPT we trained on the conversation

Timestamps

00:00 – Stewart and Juan open by contrasting AI slop with authentic creative work.
05:00 – Discussion of probability versus uniqueness and what makes output meaningful.
10:00 – The complexity dilemma emerges, as systems grow opaque and fragile.
15:00 – Why human-in-the-loop remains central to trustworthy AI.
20:00 – Juan draws parallels between bureaucracy and proto-AI structures.
25:00 – Exploration of black-box models and the limits of explainability.
30:00 – The role of economic incentives in shaping AI development.
35:00 – Reflections on nature versus nurture in intelligence, human and machine.
40:00 – How scaling laws drive emergent behavior, but not always understanding.
45:00 – Weighing authenticity and creativity against automation’s pull.
50:00 – Closing thoughts on optimism versus pessimism in the future of work.

Key Insights

AI slop versus authenticity – Juan emphasizes that much of today’s AI output tends toward “slop,” a kind of lowest-common-denominator content driven by probability. The challenge, he argues, is not just generating more information but protecting uniqueness and cultivating authenticity in an age where machines are optimized for averages.

The complexity dilemma – As AI systems grow in scale, they become harder to understand, explain, and control. Juan frames this as a “complexity dilemma”: every increase in capability carries a parallel increase in opacity, leaving us to navigate trade-offs between power and transparency.

Human-in-the-loop as necessity – Instead of replacing people, AI works best when embedded in systems where humans provide judgment, context, and ethical grounding. Juan sees human-in-the-loop design not as a stopgap, but as the foundation for trustworthy AI use.

Bureaucracy as proto-AI – Juan provocatively links bureaucracy to early forms of artificial intelligence. Both are systems that process information, enforce rules, and reduce individuality into standardized outputs. This analogy helps highlight the social risks of AI if left unexamined: efficiency at the cost of humanity.

Economic incentives drive design – The trajectory of AI is not determined by technical possibility alone but by the economic structures funding it. Black-box models dominate because they are profitable, not because they are inherently better for society. Incentives, not ideals, shape which technologies win.

Nature, nurture, and machine intelligence – Juan extends the age-old debate about human intelligence into the AI domain, asking whether machine learning is more shaped by architecture (nature) or training data (nurture). This reflection surfaces the uncertainty of what “intelligence” even means when applied to artificial systems.

Optimism and pessimism in balance – While AI carries risks of homogenization and loss of meaning, Juan maintains a cautiously optimistic view. By prioritizing creativity, human agency, and economic models aligned with authenticity, he sees pathways where AI amplifies rather than diminishes human potential.

Aug 18, 2025 • 58min
Episode #481: From Rothschilds to Robinhood: Cycles of Finance and Control
In this engaging conversation, Michael Jagdeo, founder of Exponent Labs and The Syndicate, dives into the cyclical nature of finance and power, exploring financial history from the Rothschilds to modern trends. He discusses the impact of AI on both markets and society, the balance of collectivism versus individualism, and the rise of exponential organizations. Jagdeo shares unique recruiting insights and book recommendations that reflect on the interplay between technology and human behavior, all while examining how historical narratives shape our current dynamics.

Aug 15, 2025 • 1h 30min
Episode #480: The Patchwork Age and Why AI Can’t Grasp the Human Story
In this engaging discussion, Paul Spencer, a writer at Zeitville Media, delves into the unique crossroads of AI and astrology. He argues that while AI can process data, it falls short in grasping human narratives shaped by mortality and embodiment. Spencer contrasts the solar punk and cyberpunk visions, emphasizing collaboration amid rapid change. They also discuss the cultural shifts since 2020 and explore America's evolving identity, using raw milk symbolism to reflect deeper ideological divides. It's a thought-provoking journey into technology, culture, and human experience.