
Andrew Kilbride and Matthew Lewis: TestMachine's AI Is Attacking Crypto Code To Surface and Fix Vulnerabilities

The Delphi Podcast


How to Prevent Hallucinations With AI Models

Seawise: Hallucinations are a common issue with AI models, right? The idea that these models just make things up. How do you prevent looking for things that aren't there?

Seawise: TestMachine's technology is very different from what a large language model is doing. There's external fact checking that goes on.
