
AI + a16z: How Should AI Be Regulated? Use vs. Development
Jan 20, 2026
Martin Casado, a general partner at a16z and former CTO of Nicira, shares his views on AI regulation. He emphasizes regulating how AI is used rather than how it is developed, drawing parallels with past technology shifts. Casado argues that existing laws should apply when AI is used for wrongdoing, and he warns of the risks of hasty regulation, calling instead for evidence-based policy. The discussion also covers the importance of open-source AI for U.S. competitiveness and the value of learning from the history of tech governance to craft effective rules.
AI Snips
Focus Regulation on Uses, Not Development
- Regulate uses, not core development, because emergent risks cannot be foreseen at invention time.
- Targeting development creates loopholes and hampers innovation without effectively stopping bad actors.
Marginal Risk Determines Smart Regulation
- Study the marginal risk a technology adds to determine what new regulations, if any, are warranted.
- Without evidence of distinct marginal risks, regulatory change is likely to be misdirected or counterproductive.
Strengthen Enforcement Tools, Not Blanket Bans
- Build enforcement and investigative capabilities alongside legal frameworks to address misuse.
- Expand forensic and cyber expertise in law enforcement as technologies evolve, so it keeps pace with new harms.

