
"The Future Is Distributed: AI, Markets, And The Battle Between Open And Closed" | dAGI Summit 2025
This panel from the dAGI Summit brings together leaders from decentralized AI projects—Ambient, Gensyn, Nous Research, and NEAR AI—to examine why open-source, distributed approaches might prevail over centralized systems. The discussion centers on fundamental economics: closed labs face misaligned incentives (surveillance capitalism, censorship, rug-pull risk) while open-source struggles to monetize. Panelists advocate for crypto-economic models where tokens align global contributor incentives, enable permissionless participation, and create deflationary flywheels as inference demand burns supply. Key tensions emerge around launch timing (shipping imperfect networks risks credibility; waiting loses market), whether to embrace or hide Web3 properties, and whether distributed training can compete with centralized data centers.
Key Takeaways
▸ Trust as first principle: Open-source AI prevents centralized bias, censorship, and platform risk—critical as LLMs become "choice architecture" for daily decisions; users need models that won't serve provider interests over theirs.
▸ Incentive alignment problem: Closed labs monetize through services; open-source lacks revenue models—crypto tokens enable contributor coordination, revenue sharing for creators, and data provider compensation without corporate structures.
▸ Quality beats ideology: Users prioritize performance over privacy/decentralization—for open-source to win, it must deliver best-in-class capabilities; philosophical arguments alone won't drive adoption.
▸ Miner economics as foundation: Proof-of-work-style models make miners the network's owners; inference transactions burn tokens, creating deflation, while inflation rewards compute providers, mimicking Bitcoin's flywheel at AI scale.
▸ RL changes everything: Reinforcement learning compute budgets now rival those of pre-training; this requires solving inference scale and training scale simultaneously, accelerating the need for distributed solutions.
▸ Privacy as unlock: Confidential compute using TEEs enables private inference where no party can see user data—necessary for user-owned AI and sensitive enterprise applications.
▸ Launch timing paradox: If comfortable launching, you've waited too long given AI's pace—but premature mainnet with exploits kills credibility; tokens can't be "relaunched" after failed start.
▸ Token utility beyond speculation: Staking for Sybil resistance, slashing for failures, global payment rails—tokens provide coordination impossible with fiat; also unlock capital for obsolete hardware.
▸ Distinct architectural advantages: Lean into distributed strengths such as Gensyn's 40K-node swarm of small models learning via gossip protocols; edge deployment; and multi-agent coordination impossible in monolithic systems.
▸ Inference-to-training flywheel: Some projects start with verified inference to build revenue, then use it to fund fine-tuning and pre-training; inference demand creates a monetary flywheel that subsidizes training.
▸ User ownership vision: Future where users control data in secure enclaves, AI comes to the data rather than vice versa—eliminates hesitation about sharing sensitive info with centralized providers.
▸ Web3 integration split: Some say "hide crypto, just build best AI"; others argue lean into trustless properties as differentiator—non-custodial agents, fair revenue splits, permissionless innovation closed systems can't match.
▸ AI as future money: Provocative thesis that AI represents work and thus becomes money itself, though managing the transition from fiat to AI-backed currencies remains an unsolved challenge.
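
The burn/mint dynamic behind the miner-economics and flywheel takeaways can be sketched numerically. This is a toy model, not any panelist's actual tokenomics: every parameter (supply, mint rate, fee schedule) is hypothetical, and it exists only to show how net supply turns deflationary once inference burn outpaces inflation.

```python
# Minimal sketch of the burn/mint "flywheel" described above.
# All parameters are hypothetical, not taken from any real network.

def simulate_supply(initial_supply: float,
                    mint_per_epoch: float,
                    inference_fees: list[float],
                    burn_rate: float = 1.0) -> list[float]:
    """Track token supply: inflation pays miners, inference fees are burned."""
    supply = initial_supply
    history = []
    for fees in inference_fees:
        supply += mint_per_epoch          # inflation rewards compute providers
        supply -= burn_rate * fees        # inference demand burns supply
        history.append(supply)
    return history

# Growing inference demand: once fees exceed minting, net supply shrinks.
demand = [50, 100, 150, 200, 250]
print(simulate_supply(10_000, 120, demand))
# → [10070, 10090, 10060, 9980, 9850]
```

With these illustrative numbers, supply grows while demand is low and flips deflationary once per-epoch fees exceed the mint rate, which is the crossover the flywheel argument depends on.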
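
The staking-and-slashing mechanism in the token-utility takeaway can likewise be sketched as a tiny ledger. The threshold, penalty fraction, and function names here are all illustrative assumptions, not any project's actual contract logic.

```python
# Hedged sketch of staking for Sybil resistance with slashing on failure.
# MIN_STAKE and SLASH_FRACTION are hypothetical, purely for illustration.

MIN_STAKE = 100          # assumed deposit threshold to register a worker
SLASH_FRACTION = 0.5     # assumed penalty for a failed or invalid job

stakes: dict[str, float] = {}

def register(worker: str, deposit: float) -> bool:
    """Admit a worker only if it locks enough stake (Sybil resistance)."""
    if deposit < MIN_STAKE:
        return False
    stakes[worker] = stakes.get(worker, 0.0) + deposit
    return True

def slash(worker: str) -> float:
    """Burn a fraction of a worker's stake when it fails verification."""
    penalty = stakes.get(worker, 0.0) * SLASH_FRACTION
    stakes[worker] = stakes.get(worker, 0.0) - penalty
    return penalty

register("node-a", 150)
print(register("sybil-1", 10))   # → False: cheap identities are rejected
print(slash("node-a"))           # → 75.0: half the stake is burned
```

The point of the sketch: identities are expensive to mint (deterring Sybils) and misbehavior destroys locked value, a coordination pattern that is hard to replicate with fiat payment rails.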
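
The swarm-learning takeaway mentions small models coordinating via gossip protocols. A toy version of gossip averaging is sketched below; the random pairwise-averaging scheme is a generic illustration of how gossip drives nodes toward consensus, not Gensyn's actual protocol, and scalar "parameters" stand in for model weights.

```python
# Toy sketch of gossip-style coordination among many small nodes.
# Illustrative only: real systems exchange model updates, not scalars.
import random

def gossip_round(params: list[float], rng: random.Random) -> None:
    """One round: random pairs of nodes average their (scalar) parameters."""
    order = list(range(len(params)))
    rng.shuffle(order)
    for a, b in zip(order[::2], order[1::2]):
        avg = (params[a] + params[b]) / 2
        params[a] = params[b] = avg

rng = random.Random(0)
nodes = [float(i) for i in range(8)]   # each node starts with its own value
target = sum(nodes) / len(nodes)       # 3.5, the global consensus value
for _ in range(20):
    gossip_round(nodes, rng)
print(max(abs(v - target) for v in nodes))  # spread shrinks toward 0
```

No node ever sees the global state, yet repeated local exchanges pull every node toward the swarm-wide average; that peer-to-peer convergence property is what makes gossip attractive for coordination without a central parameter server.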
