
22 - Shard Theory with Quintin Pope

AXRP - the AI X-risk Research Podcast

CHAPTER

The Parameter Function Map in Deep Learning Systems

The more possible internal configurations a system has that produce functionally equivalent behavior, the more likely you are to hit those sorts of internal configurations, if that makes sense. So it seems like there should be a similar sort of dynamic here: your brain's synapses are constantly changing a little bit over time in a stochastic manner, so you should end up in regions of brain synapse configuration space that have a large volume of configurations implementing very similar behavior, since your values and behaviors don't change wildly over your lifetime, even while you're interacting with only minimal amounts of data.
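A minimal sketch of the many-to-one parameter-function map the speaker is gesturing at (this example is not from the episode): permuting the hidden units of a small ReLU network changes the parameters but not the function it computes, so many distinct parameter settings occupy a region implementing identical behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 3 -> 4 -> 2 network with a ReLU hidden layer.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer with ReLU
    return W2 @ h + b2

# Permute the hidden units: reorder the rows of W1 and b1, and the matching
# columns of W2. The parameters differ, but the input-output behavior is
# exactly the same, illustrating functionally equivalent configurations.
perm = rng.permutation(4)
W1_p, b1_p, W2_p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=3)
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1_p, b1_p, W2_p, b2))
print("Permuted parameters compute the same function.")
```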
