2-minute chapter

Episode 2: Rachel St. Clair On AGI Hardware and Resource Management

The Mindplex Podcast

CHAPTER

Kanerva's Sparse Distributed Memory

The hypervector chips are very good for other things like large-scale computing, just because of the nature of compression. So we've tried to take this resource limitation very seriously, all the way down to the hardware level. And what ends up happening is that when you're using hypervectors to compute, you're actually computing more information per transistor rather than per space. I think getting this one new learning algorithm will allow us to learn new things without forgetting old things as quickly as we would in something like backpropagation, which fundamentally rewrites old knowledge for the sake of new knowledge. That's the number one biggest problem with a neural net being an AGI, in my opinion.
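
For context, the chapter title refers to Pentti Kanerva's sparse distributed memory (SDM), the model that also underlies hypervector (hyperdimensional) computing. As a rough illustration of the additive, distributed writes St. Clair contrasts with backpropagation's overwriting, here is a minimal NumPy sketch of a classic Kanerva-style SDM. The dimensionality, location count, and activation radius below are illustrative assumptions, not details of the hardware discussed in the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 1_000        # hypervector dimensionality (illustrative; real systems often use ~10,000)
N_LOCATIONS = 500  # number of fixed "hard" storage locations (illustrative)
RADIUS = 480       # Hamming activation radius; roughly 10% of locations fire per access here

# Hard locations: fixed random binary addresses, each paired with a signed counter vector.
addresses = rng.integers(0, 2, size=(N_LOCATIONS, DIM), dtype=np.int8)
counters = np.zeros((N_LOCATIONS, DIM), dtype=np.int32)

def active(addr):
    """Indices of hard locations whose address lies within RADIUS of `addr`."""
    dists = np.count_nonzero(addresses != addr, axis=1)
    return np.nonzero(dists <= RADIUS)[0]

def write(addr, data):
    """Spread a 0/1 data vector across every activated counter.
    Writes are additive, so a new pattern nudges shared counters
    rather than overwriting earlier patterns outright."""
    counters[active(addr)] += np.where(data == 1, 1, -1).astype(np.int32)

def read(addr):
    """Pool the activated counters and threshold back to a 0/1 vector."""
    return (counters[active(addr)].sum(axis=0) > 0).astype(np.int8)

# Autoassociative demo: store a pattern at its own address, then recall it
# from a cue with 10% of its bits flipped.
pattern = rng.integers(0, 2, size=DIM, dtype=np.int8)
write(pattern, pattern)

cue = pattern.copy()
cue[rng.choice(DIM, size=100, replace=False)] ^= 1

recalled = read(cue)
print("bits recovered:", int(np.count_nonzero(recalled == pattern)), "/", DIM)
```

Because a read pools counters from many activated locations, storing new patterns shifts shared counters gradually instead of erasing old ones, which is the graceful-degradation property the quote contrasts with backpropagation's wholesale rewriting.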
