

We’re Not Ready for Agentic AI
May 22, 2025
In this discussion, Avijit Ghosh, an Applied Policy Researcher at Hugging Face focused on AI safety, examines the perils of deploying agentic AI without proper safeguards. He highlights gaps in current AI ethics practices and the challenges of managing autonomy within these systems, and emphasizes the importance of human oversight in communication protocols between AI agents. The conversation also covers the need for robust cybersecurity measures and the ethical implications of AI in critical fields like healthcare.
Risks of Autonomous AI Agents
- Fully autonomous AI agents pose significant risks due to their unpredictability and potential to misuse access.
- Without safeguards, autonomous agents might replicate uncontrollably and cause harm.
History of Self-Replicating Worms
- Computer worms self-replicate and can overload systems unintentionally or maliciously.
- One of the first internet worms, the 1988 Morris worm, was written by a student reportedly trying to count computers on the internet; a bug caused it to replicate out of control and overwhelm systems.
Sandbox AI Agents for Safety
- Limit AI agents' access to the open internet and use sandboxed environments for safety.
- Include manual kill switches to stop agents if they perform unintended actions (a minimal sketch of both ideas follows below).
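As a loose illustration of these two ideas, here is a minimal Python sketch of an agent loop with an explicit tool allowlist (no open-internet tools) and a file-based manual kill switch. The names ALLOWED_TOOLS, KILL_SWITCH_FILE, run_tool, and agent_step are hypothetical, not from the episode or any specific agent framework.

```python
import os
import sys

# Hypothetical sketch: tool names, the kill-switch path, and run_tool are
# illustrative assumptions, not part of any real framework discussed here.

ALLOWED_TOOLS = {"read_local_file", "run_sandboxed_python"}  # no open-internet tools
KILL_SWITCH_FILE = "/tmp/agent_stop"  # operator creates this file to halt the agent


def run_tool(name: str, argument: str) -> str:
    """Stand-in for the actual sandboxed tool implementations."""
    return f"[{name}] processed: {argument}"


def agent_step(tool_name: str, argument: str) -> str:
    # Refuse any tool that is not on the explicit allowlist.
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not allowed in this sandbox")
    return run_tool(tool_name, argument)


def main() -> None:
    # Planned actions would normally come from the model; hard-coded here.
    planned_actions = [
        ("read_local_file", "notes.txt"),
        ("browse_web", "http://example.com"),  # blocked by the allowlist
    ]
    for tool_name, argument in planned_actions:
        # Manual kill switch: an operator can stop the loop at any time.
        if os.path.exists(KILL_SWITCH_FILE):
            print("Kill switch detected; stopping agent.")
            sys.exit(0)
        try:
            print(agent_step(tool_name, argument))
        except PermissionError as err:
            print(f"Blocked: {err}")


if __name__ == "__main__":
    main()
```

The design choice is that safety checks live outside the model's control: the allowlist and the kill-switch check run in plain host code on every step, so a misbehaving agent cannot reason its way around them.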