
DataFramed #332 How to Build AI Your Users Can Trust with David Colwell, VP of AI & ML at Tricentis
Nov 17, 2025. David Colwell, VP of AI & ML at Tricentis, brings over 15 years of experience in AI and software testing. He dives into the critical relationship between data governance and AI quality, emphasizing the risks of unregulated AI that can lead to serious errors in legal contexts. David discusses the importance of meaningful metrics over mere quantity in AI outputs and highlights innovative strategies like using critic agents to review AI-generated content. He also stresses that data governance must evolve and that a human touch is still needed to keep AI systems accurate and compliant.
AI Snips
Volume ≠ Value With AI
- AI increases output volume but can make organizations mistake quantity for real value.
- Measure meaningful outcomes like velocity and impact rather than raw pages or generated content.
Hallucinated Legal Brief Sank A Lawyer
- David shares a legal disaster where a lawyer submitted a ChatGPT-generated brief full of fabricated citations and was publicly sanctioned.
- The lawyer defended themselves by claiming they treated the model like a search engine and trusted the output without verification.
Validate AI Outputs With Tests
- Put validation tests and benchmarks around AI-generated outputs so you can detect regressions quickly.
- Feed test results back to the AI to enable self-correction and tighter integration into the workflow; a rough sketch of this loop follows below.
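
David describes this validate-and-feed-back pattern only in conversation, so here is a minimal Python sketch of one way it could look. The function names (`run_validation_checks`, `generate_with_self_correction`), the specific checks, and the stand-in model are illustrative assumptions, not Tricentis tooling or any particular model API.

```python
# Minimal sketch of wrapping AI-generated output in validation checks and
# feeding failures back for self-correction. All names here are hypothetical.
from typing import Callable


def run_validation_checks(output: str) -> list[str]:
    """Run simple benchmark-style checks and return a list of failure messages."""
    failures = []
    if not output.strip():
        failures.append("Output is empty.")
    if "TODO" in output:
        failures.append("Output contains an unresolved TODO placeholder.")
    if len(output) > 2000:
        failures.append("Output exceeds the 2000-character budget.")
    return failures


def generate_with_self_correction(
    ai_generate: Callable[[str], str],  # any text-generation function: prompt -> output
    prompt: str,
    max_rounds: int = 3,
) -> str:
    """Generate output, validate it, and feed concrete failures back to the model."""
    output = ai_generate(prompt)
    for _ in range(max_rounds):
        failures = run_validation_checks(output)
        if not failures:
            return output  # all checks passed; accept the output
        # Feed the test failures back so the model can attempt a correction.
        feedback = "Revise the previous answer. It failed these checks:\n- " + "\n- ".join(failures)
        output = ai_generate(f"{prompt}\n\n{feedback}")
    return output  # return the last attempt even if some checks still fail


if __name__ == "__main__":
    # Stand-in generator so the sketch runs without any external AI service.
    def fake_model(prompt: str) -> str:
        return "TODO: draft" if "Revise" not in prompt else "Final, checked draft."

    print(generate_with_self_correction(fake_model, "Summarize the release notes."))
```

The design choice worth noting is that failures come back as concrete, human-readable messages, so the same signal serves two purposes: a regression alarm for the team and corrective feedback for the model.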
