
Highlights: #157 – Ezra Klein on existential risk from AI and what DC could do about it

80k After Hours


Slowing down AI development through interpretability, liability, and licensing

Exploring strategies to slow down AI development, such as requiring interpretability, placing liability on designers, and introducing government licensing for training AI models.

