Episode 2: Rachel St. Clair On AGI Hardware and Resource Management

The Mindplex Podcast

CHAPTER

Kanerva's Sparse Distributed Memory

The hypervector chips are very good for other things like large-scale computing, just because of the nature of the compression. So we've tried to take this resource limitation very seriously, all the way down to the hardware level. And what ends up happening is that when you're using hypervectors to compute, you're actually computing more information per transistor rather than per unit of space. I think getting this one new learning algorithm will allow us to learn new things without forgetting old things as quickly as we would in, say, something like backpropagation, which fundamentally rewrites old knowledge for the sake of new knowledge. That's the number one biggest problem with a neural net being an AGI, in my opinion.
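
To make the hypervector idea concrete, here is a minimal sketch of a Kanerva-style Sparse Distributed Memory in Python. Everything in it is an illustrative assumption rather than a detail from the episode: the dimensionality, location count, and activation radius (DIM, NUM_LOCATIONS, RADIUS) are toy values, and the write/read scheme is the textbook counter-based variant, not any particular hardware implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 1000            # hypervector dimensionality (illustrative)
NUM_LOCATIONS = 500   # number of hard storage locations (illustrative)
RADIUS = 480          # Hamming-distance activation radius (illustrative)

# Each hard location has a fixed random binary address.
addresses = rng.integers(0, 2, size=(NUM_LOCATIONS, DIM), dtype=np.int8)
# Each location holds signed counters that accumulate written patterns.
counters = np.zeros((NUM_LOCATIONS, DIM), dtype=np.int32)

def active(address):
    """Boolean mask of locations within RADIUS Hamming bits of the query."""
    dists = np.count_nonzero(addresses != address, axis=1)
    return dists <= RADIUS

def write(address, data):
    """Add the bipolar (+1/-1) form of `data` to every active location."""
    counters[active(address)] += np.where(data == 1, 1, -1).astype(np.int32)

def read(address):
    """Sum counters over active locations and threshold back to binary."""
    summed = counters[active(address)].sum(axis=0)
    return (summed > 0).astype(np.int8)

# Autoassociative demo: store a pattern, then recall it from a noisy cue.
pattern = rng.integers(0, 2, size=DIM, dtype=np.int8)
write(pattern, pattern)

noisy = pattern.copy()
flipped = rng.choice(DIM, size=100, replace=False)  # corrupt 10% of bits
noisy[flipped] ^= 1

recalled = read(noisy)
print("bits recovered:", int(np.count_nonzero(recalled == pattern)), "of", DIM)
```

The relevant property for the quote above: each stored pattern is smeared across many locations and superimposed in the counters, so writing a new pattern nudges existing memories instead of overwriting them, in contrast to backpropagation rewriting old weights to accommodate new knowledge.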
