

Crazy Wisdom
Stewart Alsop
In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.
Episodes

Sep 8, 2025 • 49min
Episode #487: Stablecoins as Weapons, Bitcoin as Escape: A Conversation on Money and Control
On this episode of Crazy Wisdom, Stewart Alsop sits down with Abhimanyu Dayal, a longtime Bitcoin advocate and AI practitioner, to explore how money, identity, and power are shifting in a world of deepfakes, surveillance, automation, and geopolitical realignment. The conversation ranges from why self-custody of Bitcoin matters more than ETFs, to the dangers of probabilistic biometrics and face-swap apps, to the coming impact of AGI on labor markets and the role of universal basic income. They also touch on India’s refinery economy, its balancing act between Russia, China, and the U.S., and how soft power is eroding in the information age. For more from Abhimanyu, connect with him on LinkedIn.

Check out this GPT we trained on the conversation

Timestamps
00:00 Stewart Alsop opens with Abhimanyu Dayal on crypto, AI, and the risks of probabilistic biometrics like facial recognition and voice spoofing.
05:00 They critique biometric surveillance, face-swap apps, and data exploitation through casual consent.
10:00 The talk shifts to QR code treasure hunts, vibe coding on Replit and Claude, and using quizzes to mint NFTs.
15:00 Abhimanyu shares his finance background, tying it to Bitcoin as people’s money, agent-to-agent payments, and post-AGI labor shifts.
20:00 They discuss universal basic income, libertarian ideals, Hayek’s view of economics as critique, and how AI prediction changes policy.
25:00 Pressure, unpredictability, AR glasses, quantum computing, and the surveillance state future come into focus.
30:00 Open source vs closed apps, China’s DeepSeek models, propaganda through AI, and U.S.–China tensions are explored.
35:00 India’s non-alignment, its Soviet alliance in 1971, its oil refining economy, and U.S.–India friction surface.
40:00 They reflect on colonial history, the East India Company, wealth drain, the opium wars, and America’s rise on Indian capital.
45:00 The conversation closes on Bitcoin’s role as a reserve asset, stablecoins as U.S. leverage, BRICS disunity, and the geopolitics of freedom.

Key Insights
A central theme of the conversation is the contrast between deterministic and probabilistic systems for identity and security. Abhimanyu Dayal stresses that passwords and private keys—things only you can know—are inherently more secure than facial recognition or voice scans, which can be spoofed through deepfakes, 3D prints, or AI reconstructions. In his view, biometric data should never be stored because it represents a permanent risk once leaked.

The rise of face-swap apps and casual facial data sharing illustrates how surveillance and exploitation have crept into everyday life. Abhimanyu points out that companies already use online images to adjust things like insurance premiums, proving how small pieces of biometric consent can spiral into systemic manipulation. This isn’t a hypothetical future—it is already happening in hidden ways.

On the lighter side, they experiment with “vibe coding,” using tools like Replit and Claude to design interactive experiences such as a treasure hunt via QR codes and NFTs. This playful example underscores a broader point: lightweight coding and AI platforms empower individuals to create experiments without relying on centralized or closed systems that might inject malware or capture data.

The discussion expands into automation, multi-agent systems, and the post-AGI economy. Abhimanyu suggests that artificial superintelligence will require machine-to-machine transactions, making Bitcoin an essential tool. But if machines do the bulk of labor, universal basic income may become unavoidable, even if it drifts toward collectivist structures libertarians dislike.

A key shift identified is the transformation of economics itself. Where Hayek once argued economics should critique politicians because of limited data, AI and quantum computing now provide prediction capabilities so granular that human behavior is forecastable at the individual level. This erodes the pseudoscientific nature of past economics and creates a new landscape of policy and control.

Geopolitically, the episode explores India’s rise, its reliance on refining Russian crude into petroleum exports, and its effort to stay unaligned between the U.S., Russia, and China. The conversation recalls India’s Soviet ties during the 1971 war, while noting how today’s energy and trade policies underpin domestic improvements for India’s poor and middle class.

Finally, they critique the co-optation of Bitcoin through ETFs and institutional custody. While investors celebrate, Abhimanyu argues this betrays Satoshi’s vision of money controlled by individuals with private keys. He warns that Bitcoin may be absorbed into central bank reserves, while stablecoins extend U.S. monetary dominance by reinforcing dollar power rather than replacing it.

Sep 5, 2025 • 1h 1min
Episode #486: Sovereignty by Markets: How Futarchy Turns Bets into Decisions
In this episode of Crazy Wisdom, host Stewart Alsop speaks with Robin Hanson, economist and originator of the idea of futarchy, about how conditional betting markets might transform governance by tying decisions to measurable outcomes. Their conversation moves through examples of organizational incentives in business and government, the balance between elegant theories and messy implementation details, the role of AI in robust institutions, and the tension between complexity and simplicity in legal and political systems. Hanson highlights historical experiments with futarchy, reflects on polarization and collective behavior in times of peace versus crisis, and underscores how ossified bureaucracies mirror software rot. To learn more about his work, you can find Robin Hanson online simply by searching his name or his blog overcomingbias.com, where his interviews—including one with Jeffrey Wernick on early applications of futarchy—are available.

Check out this GPT we trained on the conversation

Timestamps
00:05 Hanson explains futarchy as conditional betting markets that tie governance to measurable outcome metrics, contrasting elegant ideas with messy implementation details.
00:10 He describes early experiments, including Jeffrey Wernick’s company in the 1980s, and more recent trials in crypto and an India-based agency.
00:15 The conversation shifts to how companies use stock prices as feedback, comparing public firms tied to speculators with private equity and long-term incentives.
00:20 Alsop connects futarchy to corporate governance and history, while Hanson explains how futarchy can act as a veto system against executive self-interest.
00:25 They discuss conditional political markets in elections, AI participation in institutions, and why proof of human is unnecessary for robust systems.
00:30 Hanson reflects on simplicity versus complexity in democracy and legal systems, noting how futarchy faces similar design trade-offs.
00:35 He introduces veto markets and outcome metrics, adding nuance to how futarchy could constrain executives while allowing discretion.
00:40 The focus turns to implementation in organizations, outcome-based OKRs, and trade-offs between openness, liquidity, and transparency.
00:45 They explore DAOs, crypto governance, and the need for focus, then compare news-driven attention with deeper institutional design.
00:50 Hanson contrasts novelty with timelessness in academia and policy, explaining how futarchy could break the pattern of weak governance.
00:55 The discussion closes on bureaucratic inertia, software rot, and how government ossifies compared to adaptive private organizations.

Key Insights
Futarchy proposes that governance can be improved by tying decisions directly to measurable outcome metrics, using conditional betting markets to reveal which policies are expected to achieve agreed goals. This turns speculation into structured decision advice, offering a way to make institutions more competent and accountable.

Early experiments with futarchy existed decades ago, including Jeffrey Wernick’s 1980s company that made hiring and product decisions using prediction markets, as well as more recent trials in crypto-based DAOs and a quiet adoption by a government agency in India. These examples show that the idea, while radical, is not just theoretical.

A central problem in governance is the tension between elegant ideas and messy implementation. Hanson emphasizes that while the core concept of futarchy is simple, real-world use requires addressing veto powers, executive discretion, and complex outcome metrics. The evolution of institutions involves finding workable compromises without losing the simplicity of the original vision.

The conversation highlights how existing governance in corporations mirrors these challenges. Public firms rely heavily on speculators and short-term stock incentives, while private equity benefits from long-term executive stakes. Futarchy could offer companies a new tool, giving executives market-based feedback on major decisions before they act.

Institutions must be robust not just to human diversity but also to AI participation. Hanson argues that markets, unlike one-person-one-vote systems, can accommodate AI traders without needing proof of human identity. Designing systems to be indifferent to whether participants are human or machine strengthens long-term resilience.

Complexity versus simplicity emerges as a theme, with Hanson noting that democracy and legal systems began with simple structures but accreted layers of rules that now demand lawyers to navigate. Futarchy faces the same trade-off: it starts simple, but real implementation requires added detail, and the balance between elegance and robustness becomes crucial.

Finally, the episode situates futarchy within broader social trends. Hanson connects rising polarization and inequality to times of peace and prosperity, contrasting this with the unifying effect of external threats. He also critiques bureaucratic inertia and “software rot” in government, arguing that without innovation in governance, even advanced societies risk ossification.

Sep 1, 2025 • 1h 8min
Episode #485: Bitcoin as Silent Revolution, AI as Accelerated Intelligence
In a compelling discussion, Brad Costanzo, founder and CEO of Accelerated Intelligence, shares insights on harnessing AI for personal growth and the risks of AI psychosis. He emphasizes the importance of cognitive armor and his sovereign mind framework. The dialogue dives into Bitcoin as a silent revolution against traditional banking and contrasts stablecoins with monetary policy. Costanzo also highlights the synergy between Bitcoin mining and AI infrastructure, proposing decentralized banking as a crucial alternative to 'too-big-to-fail' institutions.

Aug 29, 2025 • 55min
Episode #484: Pirates, Black Swans, and Smart Contracts: Rethinking Insurance in DeFi
Juan Samitier, co-founder of DAMM Capital, specializes in decentralized insurance and on-chain finance. He discusses the risks of smart contracts and the importance of insurance in attracting institutional investors to crypto. Drawing parallels between historical maritime trade and modern finance, he explores black swan events and their implications for economic stability. The conversation also delves into how traditional finance is increasingly merging with DeFi, along with insights on stablecoins and the evolving landscape of asset management.

Aug 25, 2025 • 49min
Episode #483: The Limits of Logic: Probabilistic Minds in a Messy World
In this episode of Crazy Wisdom, Stewart Alsop sits down with Derek Osgood, CEO of DoubleO.ai, to talk about the challenges and opportunities of building with AI agents. The conversation ranges from the shift from deterministic to probabilistic processes, to how humans and LLMs think differently, to why lateral thinking, humor, and creative downtime matter for true intelligence. They also explore the future of knowledge work, the role of context engineering and memory in making agents useful, and the culture of talent, credentials, and hidden gems in Silicon Valley. You can check out Derek’s work at doubleo.ai or connect with him on LinkedIn.

Check out this GPT we trained on the conversation

Timestamps
00:00 Derek Osgood explains what AI agents are, the challenge of reliability and repeatability, and the difference between chat-based and process-based agents.
05:00 Conversation shifts to probabilistic vs deterministic systems, with examples of agents handling messy data like LinkedIn profiles.
10:00 Stewart Alsop and Derek discuss how humans reason compared to LLMs, token vs word prediction, and how language shapes action.
15:00 They question whether chat interfaces are the right UX for AI, weighing structure, consistency, and the persistence of buttons in knowledge work.
20:00 Voice interaction comes up, its sci-fi allure, and why unstructured speech makes it hard without stronger memory and higher-level reasoning.
25:00 Derek unpacks OpenAI’s approach to memory as active context retrieval, context engineering, and why vector databases aren’t the full answer.
30:00 They examine talent wars in AI, credentialism, signaling, and the difference between PhD-level model work and product design for agents.
35:00 Leisure and creativity surface, linking downtime, fantasy, and imagination to better lateral thinking in knowledge work.
40:00 Discussion of asynchronous AI reasoning, longer time horizons, and why extending “thinking time” could change agent behavior.
45:00 Derek shares how DoubleO orchestrates knowledge work with natural language workflows, making agents act like teammates.
50:00 They close with reflections on re-skilling, learning to work with LLMs, BS detection, and the future of critical thinking with AI.

Key Insights
One of the biggest challenges in building AI agents is not just creating them but ensuring their reliability, accuracy, and repeatability. It’s easy to build a demo, but the “last mile” of making an agent perform consistently in the messy, unstructured real world is where the hard problems live.

The shift from deterministic software to probabilistic agents reflects the complexity of real-world data and processes. Deterministic systems work only when inputs and outputs are cleanly defined, whereas agents can handle ambiguity, search for missing context, and adapt to different forms of information.

Humans and LLMs share similarities in reasoning—both operate like predictive engines—but the difference lies in agency and lateral thinking. Humans can proactively choose what to do without direction and make wild connections across unrelated experiences, something current LLMs still struggle to replicate.

Chat interfaces may not be the long-term solution for interacting with AI. While chat offers flexibility, it is too unstructured for many use cases. Derek argues for a hybrid model where structured UI/UX supports repeatable workflows, while chat remains useful as one tool within a broader system.

Voice interaction carries promise but faces obstacles. The unstructured nature of spoken input makes it difficult for agents to act reliably without stronger memory, better context retrieval, and a more abstract understanding of goals. True voice-first systems may require progress toward AGI.

Much of the magic in AI comes not from the models themselves but from context engineering. Effective systems don’t just rely on vector databases and embeddings—they combine full context, partial context, and memory retrieval to create a more holistic understanding of user goals and history.

Beyond the technical, the episode highlights cultural themes: credentialism, hidden talent, and the role of leisure in creativity. Derek critiques Silicon Valley’s obsession with credentials and signaling, noting that true innovation often comes from hidden-gem hires and from giving the brain downtime to make unexpected lateral connections that drive creative breakthroughs.

Aug 22, 2025 • 58min
Episode #482: When Complexity Kills Meaning and Creativity Fights Back
In this episode of Crazy Wisdom, Stewart Alsop speaks with Juan Verhook, founder of Tender Market, about how AI reshapes creativity, work, and society. They explore the risks of AI-generated slop versus authentic expression, the tension between probability and uniqueness, and why the complexity dilemma makes human-in-the-loop design essential. Juan connects bureaucracy to proto-AI, questions the incentives driving black-box models, and considers how scaling laws shape emergent intelligence. The conversation balances skepticism with curiosity, reflecting on authenticity, creativity, and the economic realities of building in an AI-driven world. You can learn more about Juan Verhook’s work or connect with him directly through his LinkedIn or via his website at tendermarket.eu.

Check out this GPT we trained on the conversation

Timestamps
00:00 – Stewart and Juan open by contrasting AI slop with authentic creative work.
05:00 – Discussion of probability versus uniqueness and what makes output meaningful.
10:00 – The complexity dilemma emerges, as systems grow opaque and fragile.
15:00 – Why human-in-the-loop remains central to trustworthy AI.
20:00 – Juan draws parallels between bureaucracy and proto-AI structures.
25:00 – Exploration of black-box models and the limits of explainability.
30:00 – The role of economic incentives in shaping AI development.
35:00 – Reflections on nature versus nurture in intelligence, human and machine.
40:00 – How scaling laws drive emergent behavior, but not always understanding.
45:00 – Weighing authenticity and creativity against automation’s pull.
50:00 – Closing thoughts on optimism versus pessimism in the future of work.

Key Insights
AI slop versus authenticity – Juan emphasizes that much of today’s AI output tends toward “slop,” a kind of lowest-common-denominator content driven by probability. The challenge, he argues, is not just generating more information but protecting uniqueness and cultivating authenticity in an age where machines are optimized for averages.

The complexity dilemma – As AI systems grow in scale, they become harder to understand, explain, and control. Juan frames this as a “complexity dilemma”: every increase in capability carries a parallel increase in opacity, leaving us to navigate trade-offs between power and transparency.

Human-in-the-loop as necessity – Instead of replacing people, AI works best when embedded in systems where humans provide judgment, context, and ethical grounding. Juan sees human-in-the-loop design not as a stopgap, but as the foundation for trustworthy AI use.

Bureaucracy as proto-AI – Juan provocatively links bureaucracy to early forms of artificial intelligence. Both are systems that process information, enforce rules, and reduce individuality into standardized outputs. This analogy helps highlight the social risks of AI if left unexamined: efficiency at the cost of humanity.

Economic incentives drive design – The trajectory of AI is not determined by technical possibility alone but by the economic structures funding it. Black-box models dominate because they are profitable, not because they are inherently better for society. Incentives, not ideals, shape which technologies win.

Nature, nurture, and machine intelligence – Juan extends the age-old debate about human intelligence into the AI domain, asking whether machine learning is shaped more by architecture (nature) or training data (nurture). This reflection surfaces the uncertainty of what “intelligence” even means when applied to artificial systems.

Optimism and pessimism in balance – While AI carries risks of homogenization and loss of meaning, Juan maintains a cautiously optimistic view. By prioritizing creativity, human agency, and economic models aligned with authenticity, he sees pathways where AI amplifies rather than diminishes human potential.

Aug 18, 2025 • 58min
Episode #481: From Rothschilds to Robinhood: Cycles of Finance and Control
In this engaging conversation, Michael Jagdeo, founder of Exponent Labs and The Syndicate, dives into the cyclical nature of finance and power, exploring financial history from the Rothschilds to modern trends. He discusses the impact of AI on both markets and society, the balance of collectivism versus individualism, and the rise of exponential organizations. Jagdeo shares unique recruiting insights and book recommendations that reflect on the interplay between technology and human behavior, all while examining how historical narratives shape our current dynamics.

Aug 15, 2025 • 1h 30min
Episode #480: The Patchwork Age and Why AI Can’t Grasp the Human Story
In this engaging discussion, Paul Spencer, a writer at Zeitville Media, delves into the unique crossroads of AI and astrology. He argues that while AI can process data, it falls short in grasping human narratives shaped by mortality and embodiment. Spencer contrasts the solar punk and cyberpunk visions, emphasizing collaboration amid rapid change. They also discuss the cultural shifts since 2020 and explore America's evolving identity, using raw milk symbolism to reflect deeper ideological divides. It's a thought-provoking journey into technology, culture, and human experience.

Aug 11, 2025 • 1h 16min
Episode #479: From Bitcoin to Birdsong: Building Trust in a World of Fakes
Discover cutting-edge technologies aimed at ensuring authenticity in our digital age, tackling deepfakes with blockchain and proof of liveness systems. Explore the fascinating interplay between advanced cryptography and AI, as well as its implications for reality and misinformation. Delve into conservation initiatives that use AI to analyze birdsong for wildlife monitoring. Reflect on the future of technology, ecology, and the role of robotics in sustainable agriculture, all while navigating the complexities of trust in an increasingly digital world.

Aug 8, 2025 • 50min
Episode #478: Beyond Encyclopedias: Teaching History for the AI Era
Zachary Cote, Executive Director of Thinking Nation, champions critical thinking in history education. He elaborates on how memory shapes understanding and the ethics of curating historical narratives in a world of 'alternative facts.' The conversation highlights the importance of intellectual humility, advocating for a shift from memorization to inquiry. Cote warns about the misuse of AI in education, discussing its potential to diminish students' questioning skills. He encourages embracing diverse perspectives for richer historical understanding.


