
Episode 26: Sugandha Sharma, MIT, on biologically inspired neural architectures, how memories can be implemented, and control theory
Generally Intelligent
Is Scalability a Blocker to Scalability?
With the theory that we've done in MESH, it's very clear what you need to do in terms of scaling, and so scalability is not an issue at all. You can make the model bigger or smaller depending on how many memories you want reconstructed perfectly. The learning rules that we are using have their own limitations unless you build in some architectural bias. I think it's also the concept of modularity that we need to think about, right? Because it does seem like multiple parts of the brain are involved in many different things, so there's a lot of reuse going on among the motifs that we have.
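To make the scaling point concrete, here is a minimal sketch of how capacity grows with network size in a classical Hopfield-style associative memory. This is an illustrative stand-in, not the MESH model discussed in the episode (whose construction isn't covered in this clip); the 0.1*N load and 10% corruption level are arbitrary choices for the demo.

```python
import numpy as np

# Illustration only (not MESH): a Hopfield-style associative memory, where the
# number of random +/-1 patterns that can be stored and recalled grows with
# network size N. The point is just that "making the model bigger" buys more
# recoverable memories.

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian outer-product learning of +/-1 patterns; returns weight matrix."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W

def recall(W, probe, steps=20):
    """Synchronous sign updates until convergence (or step limit)."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

for n in (100, 400, 1600):            # bigger network -> more storable memories
    k = int(0.1 * n)                  # store ~0.1*N random patterns
    patterns = rng.choice([-1.0, 1.0], size=(k, n))
    W = store(patterns)
    ok = 0
    for p in patterns:
        noisy = p.copy()
        flip = rng.choice(n, size=n // 10, replace=False)
        noisy[flip] *= -1              # corrupt 10% of bits, then try to recover
        ok += np.array_equal(recall(W, noisy), p)
    print(f"N={n:5d}  stored={k:4d}  perfectly recalled={ok}")
```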