
Into AI Safety: Getting Agentic w/ Alistair Lowe-Norris
Oct 20, 2025
Alistair Lowe-Norris, Chief Responsible AI Officer at Iridius, dives into the practical side of building safe AI systems. He addresses the crucial need for compliance standards and the potential of procurement practices to drive responsible AI adoption. Alistair highlights gaps between company promises and actual safety measures, discussing examples such as robot avatars and the risks that come with AI's expansion. He also emphasizes transparency and continuous oversight as essential to keeping AI practices safe.
Core Definition Of Trustworthy AI
- Trustworthy AI must be ethical, safe, and beneficial to people and the planet.
- Alistair Lowe-Norris frames trust as AI acting for humanity's benefit rather than against it.
iWow Project Taught Human-Centered Change
- Alistair described the UK Ministry of Defence's iWow project, where users resisted new workflows.
- That experience taught him that change succeeds only when the human side is addressed, not just the technical rollout.
AI's Unusually Fast Pace Of Change
- AI is another tool, but it differs in the speed and scale of change it produces.
- Alistair warns that foundation models are advancing rapidly and will produce revolutionary shifts faster than many expect.

