2min chapter


AI Alignment as a Solvable Problem | Leopold Aschenbrenner & Richard Hanania

CSPI Podcast

CHAPTER

The Black Box of Neural Networks

The way neural networks are trained is you just kind of turn the knobs on the 175 billion parameters again and again and again. You don't know what the lighting up means, or whether there's something meaningful that's lighting up. And then you adjust the knobs a little bit more so that it gets good, gets thumbs up rather than thumbs down from humans. But this, I mean, this is part of the black box thing, right? There's no sort of computer code you're writing. You're specifying this almost kind of evolutionary process.
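The "knob-turning" loop the speakers describe can be sketched in miniature. This is only an illustration: it uses a single knob instead of 175 billion parameters, a toy `thumbs_up_score` function standing in for human feedback, and random hill climbing to capture the "evolutionary process" flavor, whereas real training uses gradient descent. All names here are hypothetical.

```python
import random

def thumbs_up_score(knob: float) -> float:
    # Stand-in for human feedback: higher is better, best at knob == 3.0.
    return -(knob - 3.0) ** 2

def train(steps: int = 1000, seed: int = 0) -> float:
    """Nudge the knob at random; keep changes that earn more thumbs up."""
    rng = random.Random(seed)
    knob = 0.0
    best = thumbs_up_score(knob)
    for _ in range(steps):
        candidate = knob + rng.gauss(0.0, 0.1)  # turn the knob a little
        score = thumbs_up_score(candidate)
        if score > best:  # keep the change only if feedback improves
            knob, best = candidate, score
    return knob
```

Calling `train()` drives the knob toward the setting that maximizes the feedback score, without the loop ever containing code that says what the "right" answer is — which is the black-box point being made.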

