The understanding piece, I think, is still something it can simulate reasonably well, a feeling of understanding. That said, because these large language models have this massive matrix of associations between different concepts, they're actually really helpful when I try to understand the world: what are the features that are actually salient? Too often, we humans have so many biases that we forget all the different things we should be considering.
A conversation between Alice Albrecht, founder of re:collect, and Sam Arbesman, Scientist in Residence at Lux Capital.
You can sign up for re:collect early access here: https://www.re-collect.ai/
Jerry's Mindmap of this episode: https://bra.in/8jgDK2