
Artificial Intelligence & Large Language Models: Oxford Lecture — #35

Manifold


The Hallucination of the Models

This is an example of the model generating text that is plausible but not actually factually accurate. That phenomenon is what people in this field now call hallucination of the models. The earlier models were not as good, and even GPT-4, which is the latest and greatest, still has a hallucination rate on the order of 20 to 30%. So if you ask it a detailed technical question, say, tell me the five greatest papers that Freeman Dyson wrote, it could easily make up three out of five papers. But they will look very plausible, maybe to a non-expert.
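
One way to make the fabricated-citation problem concrete is to check each model-generated title against a bibliographic index. Below is a minimal sketch, assuming Python with the `requests` library and the public Crossref API; the candidate titles and the crude title-matching heuristic are illustrative, not the speaker's method.

```python
import requests

def title_exists(title: str) -> bool:
    """Rough plausibility check: does Crossref index a paper with a similar title?"""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.title": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    found_title = (items[0].get("title") or [""])[0].lower()
    # Crude match on the top hit; a real check would also compare authors and year.
    return title.lower() in found_title or found_title in title.lower()

# Hypothetical model output: titles the model claims Freeman Dyson wrote.
candidate_titles = [
    "Divergence of Perturbation Theory in Quantum Electrodynamics",  # real Dyson paper
    "Some Invented Paper Title That Does Not Exist",                 # fabricated example
]

checked = [(t, title_exists(t)) for t in candidate_titles]
fabricated = sum(1 for _, ok in checked if not ok)
print(f"Fabricated {fabricated} of {len(checked)} titles "
      f"({100 * fabricated / len(checked):.0f}% hallucination rate)")
```

A check like this only flags titles that do not match any indexed work; it cannot tell whether a real title was attributed to the wrong author, so it gives a lower bound on the kind of error rate described above.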
