

2035
Delphi Intelligence
Delphi Intelligence leverages the researcher, builder, and investor perspectives across the hivemind to recreate the flywheel that allowed us to excel in crypto. Now in podcast form, hear the best insights in the AI sector on the 2035 podcast!
Episodes

Nov 20, 2025 • 29min
Joscha Bach presents "Machine Consciousness and Beyond" | dAGI Summit 2025
Bach reframes AI as the endpoint of a long philosophical project to “naturalize the mind,” arguing that modern machine learning operationalizes a lineage from Aristotle to Turing in which minds, worlds, and representations are computational state-transition systems. He claims computer science effectively re-discovers animism—software as self-organizing, energy-harvesting “spirits”—and that consciousness is a simple coherence-maximizing operator required for self-organizing agents rather than a metaphysical mystery. Current LLMs only simulate phenomenology using deepfaked human texts, but the universality of learning systems suggests that, when trained on the right structures, artificial models could converge toward the same internal causal patterns that give rise to consciousness. Bach proposes a biological-to-machine consciousness framework and a research program (CIMC) to formalize, test, and potentially reproduce such mechanisms, arguing that understanding consciousness is essential for culture, ethics, and future coexistence with artificial minds.

Key takeaways
▸ Speaker & lens: Cognitive scientist and AI theorist aiming to unify philosophy of mind, computer science, and modern ML into a single computationalist worldview.
▸ AI as philosophical project: Modern AI fulfills the ancient ambition to map mind into mathematics; computation provides the only consistent language for modeling reality and experience.
▸ Computationalist functionalism: Objects = state-transition functions; representations = executable models; syntax = semantics in constructive systems.
▸ Cyber-animism: Software as “spirits”—self-organizing, adaptive control processes; living systems differ from dead ones by the software they run.
▸ Consciousness as function: A coherence-maximizing operator that integrates mental states; second-order perception that stabilizes working memory; emerges early in development as a prerequisite for learning.
▸ LLMs & phenomenology: Current models aren’t conscious; they simulate discourse about consciousness using data full of “deepfaked” phenomenology. A Turing test cannot detect consciousness because performance ≠ mechanism.
▸ Universality hypothesis: Different architectures optimized for the same task tend to converge on similar internal causal structures; suggests that consciousness-like organization could arise if it’s the simplest solution to coherence and control.
▸ Philosophical zombies: Behaviorally identical but non-conscious agents may be more complex than conscious ones; evolution chooses simplicity → consciousness may be the minimal solution for self-organized intelligence.
▸ Language vs embodiment: Language may contain enough statistical structure to reconstruct much of reality; embodiment may not be strictly necessary for convergent world models.
▸ Testing for machine consciousness: Requires specifying phenomenology, function, search space, and success criteria—not performance metrics.
▸ CIMC agenda: Build frameworks and experiments to recreate consciousness-like operators in machines; explore implications for ethics, interfaces, and coexistence with future minds.

Nov 18, 2025 • 42min
"The Future Is Distributed: AI, Markets, And The Battle Between Open And Closed" | dAGI Summit 2025
This panel from the dAGI Summit brings together leaders from decentralized AI projects—Ambient, Gensyn, Nous Research, and NEAR AI—to examine why open-source, distributed approaches might prevail over centralized systems. The discussion centers on fundamental economics: closed labs face misaligned incentives (surveillance capitalism, censorship, rug-pull risk) while open-source struggles to monetize. Panelists advocate for crypto-economic models where tokens align global contributor incentives, enable permissionless participation, and create deflationary flywheels as inference demand burns supply. Key tensions emerge around launch timing (shipping imperfect networks risks credibility; waiting loses market), whether to embrace or hide Web3 properties, and whether distributed training can compete with centralized data centers.

Key Takeaways
▸ Trust as first principle: Open-source AI prevents centralized bias, censorship, and platform risk—critical as LLMs become "choice architecture" for daily decisions; users need models that won't serve provider interests over theirs.
▸ Incentive alignment problem: Closed labs monetize through services; open-source lacks revenue models—crypto tokens enable contributor coordination, revenue sharing for creators, and data provider compensation without corporate structures.
▸ Quality beats ideology: Users prioritize performance over privacy/decentralization—for open-source to win, it must deliver best-in-class capabilities; philosophical arguments alone won't drive adoption.
▸ Miner economics as foundation: Proof-of-work models make miners network owners; inference transactions burn tokens creating deflation while inflation rewards compute—mimics Bitcoin's flywheel at AI scale (see the toy simulation after this list).
▸ RL changes everything: Reinforcement learning now rivals pre-training compute budgets—requires solving both inference and training scale simultaneously, accelerating need for distributed solutions.
▸ Privacy as unlock: Confidential compute using TEEs enables private inference where no party can see user data—necessary for user-owned AI and sensitive enterprise applications.
▸ Launch timing paradox: If comfortable launching, you've waited too long given AI's pace—but premature mainnet with exploits kills credibility; tokens can't be "relaunched" after failed start.
▸ Token utility beyond speculation: Staking for Sybil resistance, slashing for failures, global payment rails—tokens provide coordination impossible with fiat; also unlock capital for obsolete hardware.
▸ Different architecture advantages: Lean into distributed strengths—Gensyn's 40K-node swarm of small models learning via gossip protocols; edge deployment; multi-agent coordination impossible in monolithic systems.
▸ Inference-to-training flywheel: Some start with verified inference to build revenue, then fund fine-tuning and pre-training—inference demand creates monetary flywheel to subsidize training.
▸ User ownership vision: Future where users control data in secure enclaves, AI comes to the data rather than vice versa—eliminates hesitation about sharing sensitive info with centralized providers.
▸ Web3 integration split: Some say "hide crypto, just build best AI"; others argue lean into trustless properties as differentiator—non-custodial agents, fair revenue splits, permissionless innovation closed systems can't match.
▸ AI as future money: Provocative thesis that AI represents work, thus becomes money itself—though managing transition from fiat to AI-backed currencies remains unsolved challenge.
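The burn-and-mint flywheel in the "Miner economics" takeaway can be made concrete with a toy supply model. The sketch below is a hypothetical illustration under assumed parameter names and values, not any panelist's actual tokenomics: a fixed inflation reward is minted to compute providers each epoch, a share of inference fees is burned, and circulating supply shrinks once burns outpace emissions.

```python
# Toy sketch (illustrative only, not a real protocol's tokenomics): net token
# supply under fixed inflation rewards for compute vs. burns from inference fees.
# All parameter names and values below are assumptions for illustration.

def simulate_supply(initial_supply: float,
                    epochs: int,
                    inflation_per_epoch: float,
                    inference_fees_per_epoch: float,
                    burn_rate: float) -> list[float]:
    """Track circulating supply when rewards are minted and a share of fees is burned."""
    supply = initial_supply
    history = [supply]
    for _ in range(epochs):
        minted = inflation_per_epoch                    # rewards paid to miners/compute providers
        burned = inference_fees_per_epoch * burn_rate   # fee tokens destroyed this epoch
        supply += minted - burned
        history.append(supply)
    return history


if __name__ == "__main__":
    # If burns driven by inference demand exceed minted rewards,
    # supply trends down: the "deflationary flywheel" described on the panel.
    path = simulate_supply(initial_supply=1_000_000,
                           epochs=10,
                           inflation_per_epoch=5_000,
                           inference_fees_per_epoch=8_000,
                           burn_rate=1.0)
    for epoch, s in enumerate(path):
        print(f"epoch {epoch:2d}: supply = {s:,.0f}")
```

With these illustrative numbers, burns (8,000 per epoch) exceed emissions (5,000 per epoch), so supply falls by 3,000 tokens each epoch; flip the relationship and the network is net inflationary until inference demand grows.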

Nov 16, 2025 • 20min
Rawson Haverty presents "US & China AI: Lessons from Across the Pacific" | dAGI Summit 2025
Rawson contrasts the US and Chinese AI ecosystems through culture, history, and market design. The US channels deep capital into fast-forming, efficient oligopolies that drive closed, frontier models and a massive compute build-out; China orchestrates a state-guided “swarm” that rapidly diffuses (often open-source) AI across industry, leveraging dense supply chains and process skill—but with thinner margins and policy constraints. Capital and AI are framed as parallel forces that centralize if unchecked; each country fears a different failure mode (US: centralized authority; China: disorder). Looking ahead, today’s US lead meets China’s long-term industrial advantages, suggesting a durable, competitive race. The recommended path is a balanced “narrow corridor” that blends US frontier strengths with China’s diffusion strengths—seeking modular, widely accessible intelligence while avoiding both elite techno-feudalism and chaotic collapse.

Key takeaways
▸ Speaker & lens: Early-stage AI/robotics investor with experience in the US and China; goal is to compare AI market structures and cultures.
▸ China’s dualities: Modern infrastructure yet widespread low incomes; strong tech/manufacturing innovation amid macro softness (property, local-government debt, youth unemployment); open-source AI leadership despite the Great Firewall; globalization’s winner now pushing self-sufficiency.
▸ US vs China—opposites and mirrors: Freedom vs stability/harmony; individual vs family unit; over-consumption vs over-production; democracy vs autocracy—yet each also reflects the other’s excesses (“fearful mirror” idea).
▸ Historical roots shape instincts: US frontier ethos → skepticism of centralized authority; China’s recurring upheavals → preference for order and stability (especially among older generations).
▸ Different views of capital:
  * US: Capital as expression of freedom/market choice (but concentrates power via money/compute).
  * China: Capital as instrument of national priorities (internet crackdown as example).
▸ Capital ≈ AI: Both optimize for efficiency/automation; they centralize power if unchecked. The US tends to fear centralized authority; China tends to fear disorder.
▸ Market structure archetypes:
  * US “efficient oligopoly”: Deep capital markets quickly crown category leaders—efficient allocation and reinvestment, but concentrated power and higher prices.
  * China “subjugated swarm”: State sets direction; provinces fund many firms → Darwinian competition; strengths in volume/quality/cost and process know-how, but lower margins, “involution,” and rising trade pushback.
▸ AI ecosystems & priorities:
  * US: Massive compute build-out, closed frontier models, aim at AGI/ASI and “human transcendence,” global distribution.
  * China: Tighter cross-sector coordination, rapid diffusion of AI across society, prioritizes open-source/commoditization—useful but can embed political biases.
▸ Now vs later: US leads today (chips/compute/users), but long-run trends (power generation, open-source uptake, robotics/industrial base) could tilt some advantages toward China; expect a long, competitive race.
▸ Modular vs vertical: Vertically integrated stacks lead now; the speaker expects a gradual shift toward more modular intelligence (distributed incentives harnessing long-tail compute/data/talent), though it’s hard.
▸ AI is physical & geopolitical: Energy, fabs, robots, and data centers anchor AI to nation-states → emerging competing operating systems (US stack ≈ Global North; China ≈ parts of Global South).
▸ Governance “narrow corridor”: Need balance between strong institutions and strong civil society to avoid AI-induced totalitarianism on one side or anarchy/uncontrolled SI on the other.
▸ Complementary strengths: US (frontier, software, 0→1, freedom) + China (diffusion, hardware, 1→n, stability). The tragedy is worsening ties despite potential complementarity; call for mutual curiosity and learning.

Nov 14, 2025 • 45min
"The Founders Playbook: What AI Founders Need To Know Before Raising & Scaling" | dAGI Summit 2025
This VC panel from the dAGI Summit explores venture capital's evolving landscape amid AI's transformative surge. The discussion tackles whether venture remains attractive (Sequoia's Roelof Botha argues it's "return-free risk"), examines talent consolidation toward major labs offering $10M+ salaries, and debates open-source versus centralized AI futures. Key tensions emerge: enterprise security requirements favoring closed models while advocates push permissionless innovation; the challenge of building decentralized systems when speed and capital naturally favor oligopolies. Panelists agree the power law will intensify—most funds lose money while winners capture trillion-dollar outcomes—but disagree on whether decentralized approaches can compete commercially beyond niche use cases.

Key takeaways
▸ Venture's extreme bifurcation: ~95% of funds will deliver sub-1x returns, but trillion-dollar outcomes are now plausible—creating unprecedented power law concentration where top funds massively outperform.
▸ Talent consolidating to labs: Major AI labs pay extraordinary compensation ($10M cash offers to 24-year-olds mentioned), creating negative selection for startups—though counterbalanced by smaller teams achieving more (cited: 2 people, $1M ARR).
▸ 1999 analogy breaks down: Unlike the dot-com bubble, leading labs have real revenue (Anthropic at 35x revenue, 5x ARR growth)—though froth exists in oversubscribed seed rounds with 24-hour term sheet timelines.
▸ Open source paradox: Distributed AI progress disappoints despite philosophical appeal; ironically, China and Meta's commoditization strategy drive open-source advancement more than decentralized crypto projects.
▸ Decentralization handicapped: Startups require rapid iteration; decentralization excels at immutability (Bitcoin, DeFi)—a fundamental mismatch for early-stage companies needing governance flexibility.
▸ Enterprise blocks open adoption: Security, liability, and procurement bureaucracy favor centralized labs; open/decentralized projects must solve compliance or target consumer first.
▸ Multipolar AI emerging: 10+ reasonably-sized labs now exist versus 2-3 two years ago—but open models still lag frontier capabilities significantly.
▸ Companions achieve PMF: AI companion apps showing strong product-market fit (0 to $2.5M revenue in 6 months cited); addresses loneliness crisis (average American has 1.3 friends versus 7 needed).
▸ Progress slowdown enables open-source: Open models become compelling when enterprises optimize for cost over cutting-edge; currently the "AI curious" phase keeps everyone chasing the frontier.
▸ Safety as structural advantage: Security/interpretability aren't just cost centers—they're deployment prerequisites and potential moats (insurance products, secure compute, model evaluation).
▸ Third-party evaluation essential: Labs can't grade their own homework on capabilities/risks; independent evaluators are necessary even as labs internalize safety work.
▸ AI transforming VC: Partners using AI extensively for decisions; one fund running a parallel "AI portfolio" to test if AI outperforms human selection—humans becoming "data collectors" for AI decision-making.
▸ Bot performance advantage: Like poker bots that performed worse than players' peak but better than their average (no tilt, bad days)—AI may outperform VCs across the entire decision distribution, not just at peak.

Nov 12, 2025 • 19min
Stepan Gershuni presents "The Intent Economy: The Future of Agentic AI" | dAGI Summit 2025
In this talk, Stepan argues AI is pushing the economy from capturing attention to fulfilling intention. Instead of users spending hours searching, comparing, and coordinating, they will express goals (“Buy a Burning Man bike,” “Plan a Lisbon offsite under $X”), and a market of specialized AI agents will plan, source, negotiate, and execute. Because agents dramatically cut transaction costs, many tasks that once favored in-house teams will move to open markets where agents compete, yielding better outcomes and prices.

This system requires distributed market mechanics rather than a single platform or super-agent: agents compete in multi-attribute auctions over intents, settle via cryptographic contracts, and interoperate through emerging agent standards. Trust comes from privacy-preserving user context plus public agent reputation and verifiable work receipts. With agent autonomy improving exponentially (e.g., code, legal, marketing), the speaker expects working intent-economy rails within 1–2 years, creating major opportunities for builders, researchers, and investors.

Key Takeaways
▸ Shift from “attention economy” → “intention economy”: Value moves from time/clicks to outcomes—you state a goal, a network of AI agents delivers it.
▸ AI agents gain economic agency: Individuals will run dozens; orgs will run thousands—working 24/7 and transacting autonomously.
▸ Post-Coasean dynamics: As agents slash search, bargaining, contracting, and enforcement costs, markets beat firm boundaries more often; AI-native orgs stay lean and move faster.
▸ Why a network (not one super-agent): Such a singleton doesn’t exist; economics/history favor distributed, competitive markets over centralized platforms that may front-run or under-optimize user value.
▸ Every intent becomes a market: Intents are posted; solvers (agents/companies) compete to fulfill them; auctions drive efficient price discovery.
▸ Auctions must be multi-attribute: Matching isn’t just price—also SLA, ETA, constraints, policies, etc., turning intents into personalized RFPs (see the sketch after this list).
▸ Throughput advantage: Agent-to-agent comms scale at hundreds of tokens/sec, compressing coordination time versus human bandwidth.
▸ Practical stack emerging: Interop and trust need standards—A2A (agent-to-agent context), MCP (tool/supply-chain orchestration), u004 (work validation via re-runs/TEEs/economic checks), X402 (agent-to-agent payments).
▸ Institutional layer required: Combine user privacy (ZK/FHE) with public reputation/track records for agents; cryptographic contracts govern fulfillment and recourse.
▸ Timeline & scale: Early versions could appear in 12–24 months; the target is a $10T+ swath of today’s digital economy (ads, e-commerce, B2B SaaS, social).
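The multi-attribute auction idea can be sketched as a simple scoring rule. The code below is an illustrative toy, not the protocol Stepan describes, and every field, weight, and solver name is an assumption: each bid carries price, ETA, and reputation; bids violating the intent's budget or deadline are filtered out; the rest are ranked by weighted utility.

```python
# Hypothetical sketch of a multi-attribute intent auction (illustrative only).
# Solvers bid on an intent with price, delivery time, and reputation; the intent
# owner's weights turn each feasible bid into a single utility, and the best wins.

from dataclasses import dataclass


@dataclass
class Bid:
    solver: str
    price: float        # quoted cost in the intent's currency
    eta_hours: float    # promised time to fulfillment
    reputation: float   # public track-record score in [0, 1]


def score(bid: Bid, max_price: float, max_eta: float, weights: dict[str, float]) -> float | None:
    """Return a utility for a feasible bid, or None if it violates the intent's constraints."""
    if bid.price > max_price or bid.eta_hours > max_eta:
        return None
    return (weights["price"] * (1 - bid.price / max_price)
            + weights["eta"] * (1 - bid.eta_hours / max_eta)
            + weights["reputation"] * bid.reputation)


def run_auction(bids: list[Bid], max_price: float, max_eta: float,
                weights: dict[str, float]) -> Bid | None:
    scored = [(score(b, max_price, max_eta, weights), b) for b in bids]
    feasible = [(s, b) for s, b in scored if s is not None]
    return max(feasible, key=lambda pair: pair[0])[1] if feasible else None


if __name__ == "__main__":
    # Example intent: a $300 budget and a 48-hour deadline.
    bids = [
        Bid("solver_a", price=250, eta_hours=36, reputation=0.9),
        Bid("solver_b", price=180, eta_hours=60, reputation=0.8),  # misses the deadline, filtered out
        Bid("solver_c", price=220, eta_hours=24, reputation=0.6),
    ]
    winner = run_auction(bids, max_price=300, max_eta=48,
                         weights={"price": 0.4, "eta": 0.3, "reputation": 0.3})
    print(winner)
```

The weights act as the user's personalized RFP: shifting weight from price toward reputation can change which solver wins without any bid changing, which is the sense in which matching is multi-attribute rather than price-only.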

Sep 17, 2025 • 46min
Starcloud: The Rise of Orbital Data Centers
In this discussion with Philip Johnston, founder of StarCloud, we dive into the innovative world of orbital data centers. Philip shares his vision of harnessing abundant solar energy for compute in space, enabled by falling launch costs. He reveals plans to launch the first H100 GPU into orbit in November 2025, promising a 100x increase in space-based computing capability. The conversation covers the engineering challenges of radiation shielding and heat dissipation, as well as the geopolitical implications of moving compute off Earth, paving the way for a new era of space-based technology.

Sep 12, 2025 • 55min
Healthcare's AI Overhaul with Tanishq Abraham
Tanishq Abraham, the 21-year-old founder and CEO of Sophont AI, shares insights from his impressive journey in academia and AI. He discusses the critical need for multimodal foundation models to enhance healthcare by integrating diverse patient data. Tanishq argues for the benefits of open-source models in building trust and transparency. He envisions a future where continuous monitoring leads to proactive care, while expressing concerns about the US-China AI race. Additionally, he reveals his ambitions in various fields, including drug discovery and longevity.

Aug 12, 2025 • 1h 24min
Chinese vs. American AI: the Rundown with Alex Lee
In this discussion, Alex Lee, co-founder of TrueNorth and AI expert with a PhD in electrical engineering, shares valuable insights into the US-China AI rivalry. He explores China's leadership in open-source AI and its implications on global tech dynamics. The conversation touches on the economic incentives driving Chinese innovation and how cultural differences impact tech execution. Alex also discusses the potential ramifications of a Taiwan conflict on chip supply chains and the evolving landscape of AI hardware, including the future of AI accelerators.


