
Episode 26: Sugandha Sharma, MIT, on biologically inspired neural architectures, how memories can be implemented, and control theory


The Upper Limits in Information Theory That Stop Applying

The basic idea is that if you separate the memory from the features, then the memory part doesn't need to carry any explicit, arbitrary information content. You can use a predefined set of states as attractors that form the fixed points of a dynamical system. Because these states are pre-chosen, they don't really have any information content in them, and that's what lets you store an exponential number of them. But you still can't store the full information content of arbitrary features, so you have to lose some information somewhere.
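
To make the attractor picture concrete, here is a minimal sketch (not from the episode) of a Hopfield-style network whose fixed points are a predefined set of states. The projection learning rule, the pattern count, and the noise level are illustrative assumptions, not details Sharma gives.

```python
# Minimal sketch: a Hopfield-style attractor network whose fixed points are
# a predefined set of states. Patterns here are random for illustration; in
# the scaffold idea they would be pre-chosen, information-free states.
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 10                        # neurons, number of predefined attractors

# Predefined +/-1 states to be stored as fixed points.
P = rng.choice([-1.0, 1.0], size=(N, K))

# Projection (pseudo-inverse) rule: W @ P == P, so every column of P is
# exactly a fixed point of the update s -> sign(W @ s).
W = P @ np.linalg.pinv(P)

def recall(cue, steps=20):
    """Iterate s <- sign(W s) until the state settles into an attractor."""
    s = np.sign(cue)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0       # break ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Corrupt one stored state by flipping 15% of its entries, then recall it.
target = P[:, 3].copy()
cue = target.copy()
flip = rng.choice(N, size=15, replace=False)
cue[flip] *= -1

recovered = recall(cue)
print("overlap with target:", float(recovered @ target) / N)  # ~1.0 on success
```

This toy network only stores a handful of random patterns; the exponential scaling discussed in the episode comes from using structured, pre-chosen scaffold states rather than arbitrary ones like these.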

