

đĄď¸The Future of AI Safety Testing with Bret Kinsella, GM of Fuel iX⢠at TELUS Digital
Aug 25, 2025
Bret Kinsella, GM of Fuel iX⢠at TELUS Digital, dives into the crucial realm of AI safety testing. He discusses his journey from Voicebot.ai to leading innovative AI safety research. Key topics include a new method for red teaming called Optimization by PROmpting (OPRO) that allows AI to assess its own vulnerabilities. Kinsella highlights the implications for industries like finance and healthcare, emphasizing accountability and compliance. He also addresses the evolving landscape of AI regulations and the necessity for proactive risk management to safeguard future advancements.
AI Snips
Chapters
Transcript
Episode notes
From VoiceBot.ai To TELUS Digital
- Bret Kinsella described his 30-year journey from early internet projects to founding VoiceBot.ai and joining TELUS Digital to productize internal AI tools.
- He explained how that work led to Fuel iXâ˘, TELUS Digital's generative AI platform used across tens of thousands of internal users.
Unbounded Inputs And Probabilistic Outputs
- Generative AI differs from older systems because both inputs and outputs are unbounded and probabilistic, making traditional programmatic security tools inadequate.
- This unbounded nature increases variability and introduces risks that guardrails alone cannot fully prevent.
Attacks Have A Random Success Distribution
- The same prompt sent multiple times can produce different outcomes because LLM-based systems are probabilistic and system-level behavior varies over repeats.
- Therefore single-shot red teaming yields a false binary view of vulnerability and misses the distribution of risk.