22 - Shard Theory with Quintin Pope

AXRP - the AI X-risk Research Podcast

The Importance of Consequences in Deep Learning

Current practice is actually very similar to the human brain already. Some of the ways that the brain works, I think, are kind of bad from an alignment perspective. The key controversial and alignment-relevant claim of shard theory is that pretty simple things from deep learning are the way you want to go in order to get value alignment to work in a human-like manner.

