
Tool Use - AI Conversations Practical AI Safety (ft Kyle Clark)
Nov 11, 2025
In this insightful conversation, AI engineering consultant Kyle Clark shares his expertise on AI safety and risk mitigation. He discusses the dangers of prompt injection and AI web browsers, highlighting how they can be hijacked. Kyle emphasizes the importance of human-in-the-loop systems and the nuances of implementing AI safely in organizations. He also explores the build vs. buy debate for AI models and warns about context rot affecting AI performance. His advice? Keep humans involved and continuously educate yourself to navigate the AI landscape responsibly.
AI Snips
Chapters
Books
Transcript
Episode notes
Unexpected New Plane Of AI Risk
- New AI risks arrived faster than most organizations anticipated and people struggle to grasp novel paradigms.
- Kyle Clark stresses that humans don't naturally understand the jagged intelligence of these systems without experience.
Outlook Prompt Injection Story
- Kyle recounts discovering prompt injection and ASCII smuggling attacks via Outlook emails during Copilot rollout.
- He warned Microsoft engineers that the system could pull malicious email content into context and exfiltrate credentials.
Browser Agents Amplify Attack Surface
- Web-browser agents expand attack surface because they access arbitrary webpages and comments that may include prompt injections.
- Kyle warns providers understate how easily these systems can be hijacked and users over-trust them.



