Gradient Dissent: Conversations on AI

Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere

The Future of Large Language Models

I do believe that there are a lot of possible architectures that would be fast, efficient, and deliver the performance we're seeing from the current large language models. There are some things you can't do: you can't literally just scale up an MLP with ReLUs, because that would be applied point-wise, right? You wouldn't be able to learn relationships between words. But so long as you're not breaking that, or severely compromising it, I think there's a huge swath of models that would perform equivalently well and would scale equivalently well.
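The point about point-wise computation can be made concrete. A minimal NumPy sketch (toy shapes and random weights chosen for illustration, not from the episode): a per-token MLP's output at one position depends only on the input at that position, so changing one word leaves every other position's output untouched, whereas a self-attention layer mixes information across positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sentence": 4 token embeddings of dimension 3 (hypothetical sizes).
x = rng.normal(size=(4, 3))

# Point-wise MLP: the same weights applied independently at each position.
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 3))

def pointwise_mlp(x):
    # ReLU MLP acting row-by-row; no cross-token interaction.
    return np.maximum(x @ W1, 0.0) @ W2

# Simplified (single-head, no projections) self-attention for contrast.
def self_attention(x):
    scores = x @ x.T / np.sqrt(x.shape[1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ x  # each output is a mixture over all positions

# Perturb only token 0.
x2 = x.copy()
x2[0] += 1.0

y, y2 = pointwise_mlp(x), pointwise_mlp(x2)
print(np.allclose(y[1:], y2[1:]))   # True: positions 1..3 never see token 0

a, a2 = self_attention(x), self_attention(x2)
print(np.allclose(a[1:], a2[1:]))   # False: every position reacts to the change
```

This is the structural constraint the quote refers to: any replacement architecture needs *some* mechanism that mixes information across token positions, whether that is attention, convolution, or recurrence.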
