Alan Chan And Max Kauffman on Model Evaluations, Coordination and AI Safety

The Inside View

The Future of Deep Learning

Alignment is really hard, and we don't have much time: less than 10 years to solve it. I definitely think we should still be doing alignment of deep learning, but it's a bet, and it might not work out. So yeah, maybe that's your first question: what am I optimistic about? Not a lot right now. It seems we're sort of poking around in the dark with deep learning and RLHF.

Transcript