2min chapter

BI 163 Ellie Pavlick: The Mind of a Language Model

Brain Inspired

CHAPTER

Do You Think It's Possible for a Text Only Trained Large Language Model to Learn Backwards Relative to Humans?

I think it might be possible for a text-only trained large language model that's completely ungrounded to sort of learn that and get the grounding later, in other words, kind of learn backwards relative to humans. Yeah, I think it's possible. Well, okay, so first of all, we almost named my daughter Cora, but it became Nora instead. Secondly, yeah, pointing at your belly button, right, it's an action. And I don't know if we need to jump right into this, but some of our avenues of research deal with grounding language in the world and how we learn as humans. So is this a fair statement to make based on what you're thinking? Am I interpreting your thinking correctly?
