22 - Shard Theory with Quintin Pope

AXRP - the AI X-risk Research Podcast

CHAPTER

The Importance of Consequences in Deep Learning

Current practice is actually already very similar to the human brain. Some of the ways the brain works are, I think, kind of bad from an alignment perspective. The key controversial and alignment-relevant claim of shard theory is that pretty simple things from deep learning are the way you want to go in order to get value alignment to work in a human-like manner.

