How to Add a Memory to a Large Language Model
We have different types of memory in machine learning, and the memory itself also has to be designed. One option is so-called modern Hopfield networks, where the stored entries are, think of it this way, text documents. For example, when you interact with ChatGPT today and you ask, "Can you write an article about, I don't know, the French Revolution?", it gives you something, but it doesn't give you references, because all the text is encoded in the parameters. With modern Hopfield networks, the model would generate the text while retrieving from the stored documents, so it can point back to where the content came from. This is a different way to approach the problem.
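The retrieval mechanism described above can be sketched in a few lines. This is a minimal illustration of the modern Hopfield update rule (a softmax-weighted readout over stored patterns), not the speaker's actual system; all names and the embedding setup are illustrative assumptions.

```python
import numpy as np

def hopfield_retrieve(X, query, beta=8.0):
    """Retrieve the stored pattern closest to `query`.

    X     : (n_patterns, dim) matrix whose rows are stored memories
            (e.g. document embeddings)
    query : (dim,) partial or noisy pattern
    beta  : inverse temperature; larger beta snaps harder to one memory
    """
    # Similarity of the query to every stored pattern
    scores = beta * (X @ query)
    # Softmax turns similarities into retrieval weights
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # The retrieved memory is a weighted mix of stored patterns;
    # with large beta it is dominated by the single closest one
    return weights @ X

# Store three "documents" as random unit embedding vectors
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 16))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Query with a noisy version of document 1 and recover it
noisy = X[1] + 0.1 * rng.normal(size=16)
retrieved = hopfield_retrieve(X, noisy)
best = int(np.argmax(X @ retrieved))
```

Because retrieval returns an index into the stored documents, the system knows *which* memory it used, which is exactly what makes citing references possible, unlike purely parametric generation.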