5min chapter

Yann LeCun: Filling the Gap in Large Language Models

Eye On A.I.

CHAPTER

Is There a Limit to What We Can Do With LLMs?

Can we usefully transform existing language models, whose purpose is only to produce text, in such a way that they can plan and pursue objectives? The answer is yes, that's probably fairly simple to do. But if you want systems that are robust and work, we need them to be grounded in reality. And my guess is that we can't do it with generative models, so we'll have to do joint embedding. How does a computer recognize an image without tokenization? Convolutional nets, for example, don't tokenize. They take an image as pixels, and they extract local features on different windows of the image that overlap. That's the right approach long term, but
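The overlapping-window idea mentioned here can be sketched in a few lines. Below is a minimal, illustrative 2D convolution in plain Python (not any specific library's implementation): a kernel slides over raw pixel values with stride 1, computing one local feature per overlapping window, with no tokenization step anywhere.

```python
def conv2d(image, kernel, stride=1):
    """Slide a kernel over overlapping windows of a raw pixel grid.

    image and kernel are lists of lists of floats; the input is just
    an H x W grid of pixel values -- no tokenization step.
    """
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - kh + 1, stride):
        row = []
        for j in range(0, w - kw + 1, stride):
            # One local feature: dot product of the kernel
            # with the window anchored at (i, j).
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy 5x5 "image" and a horizontal-gradient kernel (hypothetical example data).
image = [[float(r * 5 + c) for c in range(5)] for r in range(5)]
kernel = [[1.0, 0.0, -1.0]] * 3
features = conv2d(image, kernel)
print(len(features), len(features[0]))  # 3 3: overlapping 3x3 windows, stride 1
```

With stride 1, neighbouring windows share most of their pixels, which is what "windows on the image that overlap" refers to; real convolutional nets do the same thing with many learned kernels in parallel.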
