
The AI Longread: "Keeping AI agents under control doesn't seem very hard" by Timothy B. Lee
Aug 13, 2025
Timothy B. Lee, author of the Understanding AI newsletter, offers a refreshing perspective on AI management. He argues that fears about losing control over AI are exaggerated. Instead of viewing AI as a threat, Lee suggests practical management strategies focused on human oversight, emphasizing robust review processes and the principle of least privilege as ways to integrate AI safely and effectively into organizations. His insights offer a balanced approach to harnessing AI's potential without succumbing to panic.
 AI Snips 
Keep Humans In Strategic Control
- Timothy B. Lee argues we shouldn't cede excessive power to AI agents and can keep humans in strategic roles.
- He claims existing human supervision techniques can be adapted to manage AI delegation safely.
 
Testing Claude Code Revealed Supervision Friction
- Lee recounts testing Claude Code, an agent that executes commands on his local machine and asks for permission before risky actions.
- He quickly found stepwise approvals annoying and granted blanket permissions, illustrating the supervision dilemma (sketched below).
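
The friction Lee describes is easy to reproduce in a toy agent loop. The sketch below is hypothetical (the risk heuristic, the command list, and the prompt wording are placeholders, not anything from the episode); it simply shows how a per-action approval gate tends to grow an "always allow" shortcut that quietly removes the human from the loop.

```python
# Toy agent loop with a human approval gate and a blanket-approval escape hatch.
# The risk check and commands are illustrative assumptions, not Claude Code's behavior.

RISKY_PREFIXES = ("rm ", "git push", "curl ", "pip install")  # assumed heuristic

def looks_risky(command: str) -> bool:
    """Crude stand-in for whatever risk classification a real agent would use."""
    return command.startswith(RISKY_PREFIXES)

def run_with_oversight(commands: list[str]) -> None:
    blanket_approval = False  # the tempting shortcut Lee ended up taking
    for cmd in commands:
        if looks_risky(cmd) and not blanket_approval:
            answer = input(f"Agent wants to run {cmd!r} [y]es / [a]lways / [n]o: ").strip().lower()
            if answer == "a":
                blanket_approval = True   # one keystroke removes the human from the loop
            elif answer != "y":
                print(f"Skipped: {cmd}")
                continue
        print(f"Executing: {cmd}")        # a real agent would shell out here

if __name__ == "__main__":
    run_with_oversight(["ls -la", "pip install requests", "rm -rf build/"])
```

The point of the sketch is that each extra prompt nudges the user toward "always", which is exactly the supervision dilemma the episode describes.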
 
Use Sandboxes And Least Privilege
- Lee highlights the principle of least privilege as a better oversight mechanism than constant approvals.
- He notes AI agents can be sandboxed per task, allowing automatic approvals without broad system access (see the sketch below).
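
The sandboxing idea can be made concrete. Below is a minimal sketch, assuming Docker is available; the container image, timeout, and task command are placeholders rather than anything Lee specifies. Each task runs with no network, a read-only root filesystem, and a throwaway workspace, so approving it automatically carries little risk.

```python
# Per-task sandboxing under least privilege: every task gets a fresh temp
# workspace and a locked-down container, so "auto-approve" stays low-stakes.
# Image name and task command are hypothetical examples.

import subprocess
import tempfile

def run_task_sandboxed(task_cmd: str, image: str = "python:3.12-slim") -> int:
    """Run one agent task inside an isolated container scoped to a temp workspace."""
    with tempfile.TemporaryDirectory() as workspace:
        docker_cmd = [
            "docker", "run", "--rm",
            "--network", "none",              # no outbound access by default
            "--read-only",                    # immutable root filesystem
            "-v", f"{workspace}:/workspace",  # only this scratch dir is writable
            "-w", "/workspace",
            image,
            "sh", "-c", task_cmd,
        ]
        return subprocess.run(docker_cmd, timeout=300).returncode

if __name__ == "__main__":
    # Because the blast radius is a temp directory, approval can be automatic.
    run_task_sandboxed("echo 'hello from the sandbox' > result.txt && cat result.txt")
```

This trades per-action approvals for per-task boundaries: the agent acts freely inside the sandbox, and only results that leave it need human review.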
 

