Humans On The Loop

Design for Provably Safe AI with Evan Miyazono

Jun 24, 2025
Evan Miyazono, CEO of Atlas Computing and former Protocol Labs researcher, dives into the fascinating world of provable AI safety. He discusses the importance of transparency in AI decisions and outlines the challenges in regulating these systems. By advocating for interdisciplinary collaboration, he reveals how combining insights from humanities and sciences can enhance AI oversight. The conversation also touches on trust, ethics, and the need for clear frameworks to align AI with human values, ultimately championing a future where technology promotes human flourishing.
AI Snips
INSIGHT

AI Review Bottleneck Risk

  • The proliferation of AI systems creates a review bottleneck that overwhelms human oversight.
  • Without scaling accountable review, holding AI systems accountable becomes difficult and potentially dangerous.
INSIGHT

Differentiated Laws for Humans and AI

  • Humans should be governed by flexible, subjective laws that reflect the complexity of human behavior.
  • AI systems require stricter, more objective rules, and should be required to prove compliance with them to be considered safe.
ADVICE

Use Specifications for AI Safety

  • Define precise, human-understandable specifications for AI outputs to ensure safety.
  • Use formal proofs or certificates to guarantee AI compliance with these specs in critical applications.
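The "specifications plus certificates" advice above follows a verify-don't-trust pattern: an untrusted generator proposes an answer, and a small, auditable checker validates it against a precise specification before it is accepted. Below is a minimal illustrative sketch of that pattern in Python; the function names and the sorting spec are invented for illustration and are not from Atlas Computing's actual tooling.

```python
# Verify-don't-trust sketch: the trusted base is only the small
# checker, not the (possibly opaque) generator. All names here
# are illustrative, not from any real tool.

from collections import Counter

def untrusted_sort(xs):
    """Stand-in for an opaque AI component; its output is NOT trusted."""
    return sorted(xs)  # could be arbitrarily wrong in a real system

def meets_spec(inp, out):
    """Precise, human-readable spec: `out` is `inp` in ascending order.
    Checking is far simpler than generating, keeping the trusted base small."""
    is_ordered = all(a <= b for a, b in zip(out, out[1:]))
    same_elements = Counter(inp) == Counter(out)
    return is_ordered and same_elements

def safe_sort(xs):
    """Accept the untrusted output only if it provably meets the spec."""
    candidate = untrusted_sort(xs)
    if not meets_spec(xs, candidate):
        raise ValueError("AI output rejected: specification violated")
    return candidate

print(safe_sort([3, 1, 2]))  # [1, 2, 3]
```

For critical applications, the episode suggests going further: replacing a runtime checker like this with a formal proof or machine-checkable certificate that the output satisfies the specification.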