
AI Alignment as a Solvable Problem | Leopold Aschenbrenner & Richard Hanania

CSPI Podcast


The Black Box of Neural Networks

The way neural networks are trained is you just kind of turn the knobs on the 175 billion parameters, again and again and again. You see what's lighting up, or whether there's something that's not lighting up. And then afterwards you adjust the knobs a little bit more so that it gets a thumbs up rather than a thumbs down from humans. But this, I mean, this is part of the black box thing, right? There's no sort of computer code you're writing. You're specifying this almost kind of evolutionary process.
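
The "knob turning" picture can be made concrete with a toy sketch. The Python below is not RLHF or gradient descent as actually used to train large models; it is a minimal hill-climbing loop, under illustrative assumptions, where random nudges to a handful of parameters are kept only when they earn a thumbs up from a stand-in rater. Every name and number here is hypothetical.

```python
import random

NUM_PARAMS = 10                # stand-in for the 175 billion parameters
TARGET = [0.5] * NUM_PARAMS    # pretend "good behavior" means knobs near 0.5


def human_feedback(params):
    """Stand-in for a human rater: higher score means closer to 'good' behavior."""
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))


def train(steps=10_000, step_size=0.05):
    # Start from random knob settings.
    knobs = [random.uniform(-1.0, 1.0) for _ in range(NUM_PARAMS)]
    score = human_feedback(knobs)
    for _ in range(steps):
        # Turn the knobs a little, again and again and again.
        candidate = [k + random.gauss(0.0, step_size) for k in knobs]
        candidate_score = human_feedback(candidate)
        # Keep the adjustment only if it earns a thumbs up (a better score).
        if candidate_score > score:
            knobs, score = candidate, candidate_score
    return knobs, score


if __name__ == "__main__":
    final_knobs, final_score = train()
    print(f"final score: {final_score:.4f}")
```

The point of the sketch is the black-box quality the speakers describe: nothing in the loop says what any individual knob means, only whether the overall behavior earned a thumbs up.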

