
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI

The Inside View


How to Train a Language Model With a Limited Context Window

I learned quite a lot about everything, from how the GPU operates at a low level to the importance of memory bandwidth and vectorization.

And now you're working with EleutherAI on alignment projects, right? Like head of alignment for all the different projects that are going on?

Yeah, so I can talk a little bit about a few of them. The first one is looking at language models as Markov chains. It's mainly that it's a largely unexplored domain and seems highly relevant to just understanding language models as we currently use them. There's reason to believe that when you run a language model for an extended period of time…
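The "language models as Markov chains" framing can be made concrete: with a fixed context window of size k, the next-token distribution depends only on the last k tokens, so repeated sampling induces a Markov chain over window states. The sketch below is purely illustrative and not code from EleutherAI; the toy vocabulary, window size, and the stand-in next_token_distribution function are assumptions chosen to keep the example self-contained.

```python
import random
from collections import deque

VOCAB = ["a", "b", "c"]   # toy vocabulary (assumption for illustration)
K = 3                     # context window size (assumption)

def next_token_distribution(window):
    """Stand-in for a language model: any fixed function of the current window.
    The output distribution depends only on the window state, which is exactly
    the Markov property the framing relies on."""
    rng = random.Random(hash(window))      # deterministic per state within a run
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def step(window):
    """One Markov transition: sample a token given the window, then slide the window."""
    probs = next_token_distribution(window)
    token = random.choices(VOCAB, weights=probs, k=1)[0]
    new_window = deque(window, maxlen=K)
    new_window.append(token)               # oldest token falls off the left
    return tuple(new_window), token

if __name__ == "__main__":
    state = ("a",) * K                     # arbitrary initial window
    generated = []
    for _ in range(20):
        state, token = step(state)
        generated.append(token)
    print("".join(generated))
```

Under this view the state space is finite (at most |VOCAB|^K windows), so questions about what happens when a model is run for a long time become questions about the long-run behavior of this chain, which seems to be the kind of property the project is getting at.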

