
Yann LeCun: Filling the Gap in Large Language Models
Eye On A.I.
Is There a Limit to What We Can Do With LLMs?
Can we usefully transform existing language models, whose purpose is only to produce text, in such a way that they can plan and pursue objectives? The answer is yes; that's probably fairly simple to do. But if you want systems that are robust and actually work, we need them to be grounded in reality, and my guess is that we can't do it with generative models, so we'll have to do joint embedding.

How does a computer recognize an image without tokenization? Convolutional nets, for example, don't tokenize. They take an image as pixels and extract local features on different windows of the image that overlap. That's the right approach long term, but
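To make the point about convolutional nets concrete, here is a minimal PyTorch sketch (the channel counts, kernel size, and image size are illustrative assumptions, not a model discussed in the episode). A Conv2d layer consumes raw pixel values directly, with no tokenizer, and because the 7x7 kernel slides with a stride of 2, neighboring windows overlap, exactly as described.

```python
import torch
import torch.nn as nn

# A convolutional layer takes the image as raw pixels -- no tokenization.
# Each 7x7 kernel slides across the image with stride 2, so the windows
# it reads from overlap, extracting local features at every position.
conv = nn.Conv2d(in_channels=3, out_channels=64,
                 kernel_size=7, stride=2, padding=3)

image = torch.rand(1, 3, 224, 224)   # one RGB image as raw pixel values
features = conv(image)               # local features from overlapping windows
print(features.shape)                # torch.Size([1, 64, 112, 112])
```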
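The "joint embedding" alternative LeCun mentions can be sketched in the same spirit. This is a toy sketch under assumed choices (a flatten-plus-linear encoder, an MSE loss, a stop-gradient on the target branch); real joint-embedding architectures such as LeCun's JEPA are considerably more elaborate. The point it illustrates is that the prediction loss lives in embedding space rather than pixel space, which is what separates this approach from a generative model.

```python
import torch
import torch.nn as nn

# Toy joint-embedding setup: encode two views of the same input and
# predict one view's embedding from the other's. The model is never
# asked to reconstruct pixels -- only representations.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 256))
predictor = nn.Linear(256, 256)

view_a = torch.rand(8, 3, 224, 224)   # e.g. one crop of each image
view_b = torch.rand(8, 3, 224, 224)   # another crop of the same images

z_a = encoder(view_a)
z_b = encoder(view_b).detach()        # stop-gradient on the target branch
loss = nn.functional.mse_loss(predictor(z_a), z_b)
loss.backward()
```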