
Artificial Intelligence & Large Language Models: Oxford Lecture — #35

Manifold


The Problem With Human Natural Language Models

The objective function in their training is to predict the (n+1)th word given the preceding n words. But this means that the model can hallucinate. In other words, it can generate plausible text which is not factual text. Nevertheless, if it can answer your query plausibly, it will be happy, in a sense. It's not been trained to do anything more than that.
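
A minimal sketch of that next-token prediction objective, assuming PyTorch; here `model` stands for any causal language model that returns logits of shape (batch, seq_len, vocab_size), and the names are illustrative rather than taken from the episode. The loss only rewards assigning high probability to the next word, which is why plausibility, not factuality, is what gets optimized.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    """Cross-entropy for predicting token n+1 from tokens 1..n.

    token_ids: LongTensor of shape (batch, seq_len).
    """
    logits = model(token_ids)          # (batch, seq_len, vocab_size)
    preds = logits[:, :-1, :]          # predictions made at positions 1..n-1
    targets = token_ids[:, 1:]         # the "n plus one" words to be predicted
    return F.cross_entropy(
        preds.reshape(-1, preds.size(-1)),
        targets.reshape(-1),
    )
```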
