The Learning of Networks
Hopfield network models were the first formalization of how a neural circuit can have two types of memory. They are called content-addressable memories because you reinstantiate, or recall, the memory by giving just a fragment of the memory as the input. And so I think they've been beautiful: how circuits can maintain states, like persistent activity for short-term memory and things like that. The beauty of these networks is just the elegance of the theory. Very analogously to these discrete attractor states, there are ways to construct networks that are able to store and retain variable values all at once.
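The content-addressable recall described above can be sketched in a few lines. This is a minimal, illustrative Hopfield network (not from the transcript): patterns are stored with a Hebbian outer-product rule, and a corrupted fragment is driven back to the stored memory by repeated sign updates. The function names and the toy 8-unit pattern are my own choices for illustration.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: W accumulates outer products of stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Iterate synchronous updates; the state falls into a stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties consistently
    return state

# Store one binary (+1/-1) pattern, then recall it from a corrupted cue.
pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
cue = pattern.copy()
cue[:2] = -cue[:2]  # flip two bits: a damaged "fragment" of the memory
print(recall(W, cue))  # recovers the original pattern
```

Giving the network a partial or noisy version of the memory is enough: the dynamics descend an energy landscape whose minima are the stored patterns, which is exactly what makes the memory content-addressable rather than address-based.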