Paul Thagard on cognition, consciousness, misinformation, balance

Thing in itself

CHAPTER

How to Explain Cognition Using Coherence Models

Coherence models are still important, but I've done lots of neural network models since then that have been more sophisticated in different areas. One of the big advances in cognitive science over the last 20 years is that neural network models have become much more neural. My coherence models used very abstract neurons, not much like the neurons in the brain. But since then, really in the last 10 or 20 years, the field of theoretical neuroscience has taken off and developed more and more realistic neural models. They're actually modeling real neurons, and they do all sorts of explanatory tasks that my own coherence models weren't capable of. For example, emotions are conscious largely…
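
To make the contrast concrete: the "abstract neurons" in Thagard's coherence models are simple units that settle into a stable activation pattern under excitatory (coherence) and inhibitory (incoherence) constraints. Below is a minimal sketch of that kind of constraint-satisfaction network in Python; the update rule, parameter values, and the toy hypotheses-vs-evidence example are illustrative assumptions in the spirit of models like ECHO, not Thagard's actual code.

```python
# Minimal sketch of a connectionist coherence network. All names and
# parameter values are illustrative assumptions.

DECAY = 0.05              # activation drifts back toward zero each step
MIN_A, MAX_A = -1.0, 1.0  # activation bounds for every unit

def settle(units, links, clamped=("EVIDENCE",), steps=200):
    """Update unit activations until the network settles.

    units:   dict of unit name -> activation in [MIN_A, MAX_A]
    links:   dict of (u, v) -> weight; positive weights encode coherence,
             negative weights encode incoherence; links act symmetrically
    clamped: units held at MAX_A (here, a special evidence unit)
    """
    for _ in range(steps):
        new = {}
        for u, a in units.items():
            if u in clamped:
                new[u] = MAX_A
                continue
            # Net input: weighted activations of every unit linked to u.
            net = sum(w * units[v] for (x, v), w in links.items() if x == u)
            net += sum(w * units[x] for (x, v), w in links.items() if v == u)
            # Push toward MAX_A on positive net input, MIN_A on negative.
            delta = net * (MAX_A - a) if net > 0 else net * (a - MIN_A)
            new[u] = min(MAX_A, max(MIN_A, a * (1 - DECAY) + delta))
        units = new
    return units

# Toy example: two rival hypotheses, H1 explaining both pieces of
# evidence, H2 explaining only one; rivals inhibit each other.
units = {"EVIDENCE": 1.0, "E1": 0.01, "E2": 0.01, "H1": 0.01, "H2": 0.01}
links = {
    ("EVIDENCE", "E1"): 0.5, ("EVIDENCE", "E2"): 0.5,
    ("H1", "E1"): 0.3, ("H1", "E2"): 0.3,
    ("H2", "E1"): 0.3,
    ("H1", "H2"): -0.2,
}
final = settle(units, links)
print(final)  # H1 settles to a higher activation than H2
```

The abstractness is the point: units have a single real-valued activation, constraints are just symmetric weights, and there is no spiking or biophysical detail. That is exactly the gap Thagard says modern theoretical neuroscience has closed with models of real neurons.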
