BI 169 Andrea Martin: Neural Dynamics and Language

Brain Inspired

The Negative Effects of Large Language Models

The curiosity is just how well these deep learning models are performing without having been built with those constraints in general. But they were specifically built to do this task that they're doing, right? And they are exquisitely tuned now through feedback. They're given basically the prerogative: impress us, and give a response that can't be the training data but that is a really good approximation of it. So there are so many other things about the models that violate our beliefs about what a tenable system is. It's ethically very fraught. I feel like my own work, in my own case, is not often served by being negative.

