
Nancy Kanwisher: Your Brain is a Swiss Army Knife
Clear+Vivid with Alan Alda
Theory of Mind and False Belief Task
This chapter discusses the concept of theory of mind and the false belief task, which test the ability to understand others' thoughts and beliefs. It explores research conducted using functional MRI and highlights the development of this cognitive ability in children.
Transcript
Speaker 2
He mentioned Rebecca Saxe, who has done work along with you, I believe, on that part of the brain that thinks about what other people are thinking. That really interests me a lot, because to me that's part of the essence of communication: paying really good attention to what's going on in the mind of the person you're trying to communicate with.
Speaker 1
Absolutely. Absolutely. That is the essence of being able to communicate with somebody: thinking, what do they already know? What do I need to tell them? How will they react if I say this next thing? And it's really the essence of being a human being, spending a lot of your waking minutes thinking about what other people are thinking.
Speaker 2
How is it possible to figure out what part of the brain is doing that? What condition do you put the person in so that it takes place and you can look at the activated part of the brain?
Speaker 1
Right. This was Rebecca Saxe's idea when she started graduate school. I believe she might have just been 20 years old. She was very young, but brilliant even then. And she said she wanted to study theory of mind, thinking about how we think about other people's thoughts, with functional MRI. And I said, well, that is a really charming idea, but there's no way that is going to work. You're a nice, smart kid. You can try two subjects, and it's not going to work. And then you're going to get serious and study vision, where we can make actual progress. And she kept finding intriguing results. And I kept saying, I don't believe it, do more controls. I don't believe it, do more controls. After three years of this, it was like, okay, you're right. So she used many different strategies, but the basic one was taken from some beautiful work in developmental psychology, where it's been known for a long time that kids fail what's known as the false belief task until surprisingly late. A three year old is a really smart individual. But if you ask them a question about what someone knows, and what that person knows is different from reality, three year olds are very confused by that and don't get it right. Whereas four year olds, boom, no problem. So we used similar questions. They're really not even very interesting questions, things like: Joe baked lasagna and put it in the blue dish in his refrigerator. Later that night, his roommate ate the lasagna and put some bread in the blue dish. When Joe opens the refrigerator in the morning, what does he expect to see in the blue dish, lasagna or bread? It's not even hard. Three year olds fail that task. Four year olds and up do fine with it.
Speaker 2
Just to be clear, to fail that test, you would fail it if you thought that the dish the man put in the refrigerator had what he put in it, but you wouldn't make the connection to the bread that was put in by somebody else?
Speaker 1
No, the correct answer: Joe puts lasagna in the fridge and goes away. He doesn't know his roommate puts bread in there. So if you ask, what does Joe expect? Joe expects lasagna. Right.
Speaker 2
But the kid, the kid who's too young to...
Speaker 1
Yeah, they say bread. Bread because they know where the bread is. Yeah. They don't understand that people can have a belief that differs from reality. Yes.
Speaker 2
Yeah. Which is around the time that they learn it's possible to lie.
Speaker 1
Exactly. Exactly.
Speaker 2
You mentioned earlier that one of your graduate students was able to scan infants in an MRI. How is that possible? Aren't they too squirmy to stay still long enough? I remember when you put me in a scanner once, you told me I had to keep perfectly still.
Speaker 1
The work with infants, which is all in collaboration with Rebecca Saxe, she's the one who really figured out how to scan infants. It took many, many years. She had to produce her own subjects to learn how to scan infants: her son Arthur was scanned many, many times in his first year of life, and between Rebecca and Arthur and some intrepid graduate students, they figured out how to get usable data from infants. It's very, very difficult, because infants tend to move in the scanner, and when they move they blur all the brain imaging data and we get a mess. What you have to do is throw away almost all of your data except for the few little moments when the infants aren't moving. Then you have to figure out how to piece together those little moments of non-moving data to try to eke out a signal.
Speaker 2
What are some examples of times when it was really worth all that trouble?
Speaker 1
Rebecca and Heather, generously including me along with them, though it was really the hard work of Rebecca and Heather, showed that the face-selective and place-selective and body-selective regions we had been studying in the visual cortex of adults are all present by six months of age in infants. So all of those functionally specific responses are present very early in development, which I think is super exciting.
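The strategy Kanwisher describes for infant data, discarding every moment when the head was moving and piecing together the still moments, is often called motion scrubbing or censoring in the fMRI literature. A minimal sketch of the idea, with illustrative array shapes and an illustrative motion threshold (none of these numbers come from the episode):

```python
import numpy as np

def scrub_motion(bold, framewise_displacement, fd_threshold=0.5):
    """Keep only the fMRI volumes acquired while the head was still.

    bold: array of shape (n_timepoints, n_voxels), the imaging data
    framewise_displacement: per-timepoint head-motion estimate (mm)
    fd_threshold: motion cutoff in mm (an illustrative value)
    """
    still = framewise_displacement < fd_threshold
    # Discard every volume acquired during movement; what survives is the
    # small fraction of "quiet" data that gets pieced together for analysis.
    return bold[still], still

# Toy example: 10 time points, 4 voxels, with motion on half the frames.
rng = np.random.default_rng(0)
bold = rng.normal(size=(10, 4))
fd = np.array([0.1, 2.0, 1.5, 0.2, 3.0, 0.1, 1.8, 0.3, 2.5, 0.1])
clean, kept = scrub_motion(bold, fd)
print(clean.shape)  # only the 5 still volumes survive
```

With a squirmy infant, the surviving fraction is far smaller than in this toy example, which is why it took years to accumulate usable data.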
Speaker 2
I thought it was interesting, in one of your talks that I saw, that you said recognition of, I think, words or letters was pronounced in people who had learned to read and not in those who hadn't. So it did grow.
Speaker 1
Yeah. So the visual word form area is a little region in high-level visual cortex that responds selectively when you look at words and letters. And the thing that's so interesting about that region is that we know it gets its selectivity from that individual's experience. And we know that for a whole bunch of reasons. One, people have only been reading for a few thousand years, and that's not enough for evolution to have crafted a special machine just for reading. Two, as you say, it shows up in kids at age eight, after they learn to read, and it's not there in kids at age five, before they learn to read. And third, in people who read only one orthography, like me, I'm lame, I only read English, if I'm scanned looking at Arabic or Hebrew or Chinese script, that region does not respond. But in people who are bilingual with, say, English and Hebrew, it responds to both. So that tells us that it's that individual's experience that has trained up that little patch of cortex to respond to that specific kind of stimulus.
Speaker 2
I know this goes back in our conversation, but let me ask you one more thing about face recognition. I remember you saying in one talk that there are three areas related to face recognition. Have you figured out yet why there are three? Why are they separate?
Speaker 1
Oh, I feel busted. You're so right. There's now more than three. No, I haven't figured it out, but I'll tell you a few little clues. We know that the fusiform face area is critical for recognizing faces: if you have damage to that region, you will not be able to recognize faces. And we know that there's another face-selective region in a quite different part of the brain. It's around the corner, up in the top of the temporal lobe, and it responds much more to videos of faces than to stills. The fusiform face area doesn't care if it's looking at a movie of a face or a still picture of a face, but this other brain region responds three times more to movies of faces than to stills. So it's something about the way the face changes over time that that region is interested in. But what we know now is that that region also responds to voices. So it is part of a whole set of nearby regions that are processing complex, high-level social information from people, in that case somehow putting together their face and their voice. There are many mysteries in that part of the brain about how all of that perceptual information about people gets integrated together.
Speaker 2
How many different specialized areas have been found so far?
Speaker 1
Let's see. I'd say there are about a dozen that I would take to the bank, that I would absolutely bet are not going to be overturned by future data. And then there's another four or five that we're working on; they look interesting, and I wouldn't totally bet on them yet, but I'm hopeful. And there are probably many more that we haven't even thought to look for.
Speaker 2
Do you have any evidence that they communicate with one another?
Speaker 1
Well, they have to. They have to, right? When we go around and do stuff in the world, we don't just look at a person and say, oh, that's Alan's face, that's it, end of story. We say, okay, there's Alan's face. What should I say to him? What is he thinking now? What am I going to say next? Where did I see him last? All of those things require other brain regions. So all of these regions need to be interacting, talking to each other, sharing information. And this is hard to study, because how information moves around in the brain from one region to the next is something I am deeply interested in, but it moves around really fast, and most of our tools aren't good at giving us both the spatial and the temporal information we need to see that information moving around.
Speaker 2
So I would imagine that artificial intelligence could be helpful, if you figure out the right formula.
Speaker 1
Well, there's actually a huge revolution going on in my field now with the use of artificial neural networks, which have proven to be enormously helpful in understanding the brain. You know, we read about ChatGPT and all these other things in the news, and your cell phone can suddenly recognize your friends' faces; their names pop up on your photographs even when you don't ask for it. All of this has been brought about by this massive revolution in artificial neural networks just over the last decade or so. And the interesting thing is, those same artificial neural networks that are so good at object recognition and producing language were not designed to model the brain, and yet they capture a lot of the things the brain does. To me, this is completely non-obvious and fascinating. Why should an artificial neural network that was just trained to classify what object is present in an image work at all like the way the brain does? But it turns out that, to a first approximation, there are many, many similarities between those artificial neural networks and the brain. And that is so surprising to me, and so cool, that we have now built models of the fusiform face area based on artificial neural networks, where we can feed those models a completely new image and predict extremely accurately how strongly the fusiform face area will respond to that image. So you do the experiment on the model, you get a prediction from the model, and then you run it in the brain and ask, how good is that prediction? And we find that the correlation between the predicted response and the observed response in the brain is 0.9. That's a really high correlation.
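The evaluation Kanwisher describes, predicting the fusiform face area's response to each new image and then correlating predictions with measurements, boils down to a Pearson correlation across images. A toy sketch of that comparison; the per-image numbers here are made up, and only the 0.9 figure comes from the conversation:

```python
import numpy as np

# Hypothetical per-image responses: one number per stimulus image.
predicted = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.3])  # model's prediction
observed = np.array([1.0, 0.3, 0.6, 0.2, 0.9, 0.2])   # measured fMRI response

# Pearson correlation between predicted and observed responses across images.
r = np.corrcoef(predicted, observed)[0, 1]
print(round(r, 2))  # → 0.96 for these made-up numbers
```

A correlation near 1 means the model ranks and scales the images' responses almost exactly as the brain does, which is what makes the model usable as a stand-in for the scanner.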
Speaker 2
So that sounds like it can speed up your work.
Speaker 1
Well, exactly. I'll tell you one way it speeds up our work. I've wondered for years: I say the fusiform face area responds more to faces than anything else, but what do I know? I've only tested a few dozen stimuli, because how long can I keep people in the scanner? I can't test them on thousands of images. But I can build the model of the network and test it on an entire machine learning database of 3 million images, run it over the weekend, and look at all the top images in the model of the fusiform face area. We did that hoping to falsify our hypothesis, hoping that some of the top images the model predicts the strongest response to might not be faces, and then we could test those in the brain. This would be a kind of turbocharged way to show that we were wrong. Scientists like that: powerful ways to falsify your hypothesis.
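The screening procedure she outlines, scoring millions of images with the model and inspecting the top-ranked ones, is straightforward to express in code. A hedged sketch, where `model_response` is a hypothetical stand-in for their trained fusiform face area model, not their actual code:

```python
import numpy as np

def top_k_images(images, model_response, k=5):
    """Score every image with the model and return the k predicted strongest.

    images: a list of images (or image IDs) to screen
    model_response: function mapping one image to a predicted FFA response
    k: how many top-scoring images to keep for follow-up scanning
    """
    scores = np.array([model_response(img) for img in images])
    order = np.argsort(scores)[::-1]  # highest predicted response first
    return [images[i] for i in order[:k]], scores[order[:k]]

# Toy stand-in: "images" are just numbers, and the model prefers larger ones.
images = [3, 7, 1, 9, 4, 8, 2]
top, top_scores = top_k_images(images, model_response=float, k=3)
print(top)  # the three images the model predicts the strongest response to
```

The falsification logic lives outside the code: if any of the top-ranked stimuli had not been faces, those would be exactly the images worth taking back into the scanner.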
Speaker 2
Yeah, that's great.
Speaker 1
So we tried that, but all 200,000 top images were faces.
Speaker 2
So, so...
Speaker 1
It's just true, apparently.
Speaker 2
Yeah, it was true. You raised another question in my mind. Is it just human faces we're good at recognizing, or does that include animal faces?
Speaker 1
Well, I don't think we're as good at recognizing other animals unless we care a lot about those animals. Like, I can recognize my dog's face because I love him dearly, and, you know, other dogs that are related. And sheep farmers are very good at distinguishing one sheep from another, whereas you and I probably couldn't. But that's a behavioral question about the ability. In fact, the fusiform face area is going to respond strongly to all of those faces. It's kind of indiscriminate: anything that's got basically the basic structure of a face will give you a very strong response.
Speaker 2
I wish we could talk longer, but our time is running out. However, we end every show with seven quick questions. Okay. Of all the things you could understand, what do you wish you really understood?
Speaker 1
I would like to understand how information travels from one brain region to another, how it knows where to go, what shunts it in one direction rather than another. I think that would be really cool.
Speaker 2
How do you tell someone they have their facts wrong?
Speaker 1
Oof, with great difficulty. With great delicacy, maybe I should say.
Speaker 2
What's the strangest question anyone has ever asked you?
Speaker 1
Oh my God. I get asked weird questions all the time. I'm sorry, I'm drawing a blank on that. I'm not coming up with anything very good.
Speaker 2
How do you stop a compulsive talker?
Speaker 1
By looking away, and then interrupting if needed.
Nancy Kanwisher has discovered many areas of the brain that are specialized for one particular purpose, like recognizing faces, which is interesting to Alan because of his inability to remember the faces of people he meets. Other specialized areas include one for identifying food, which Alan so far has no trouble with.