122 - Michal Kosinski: Studying Theory of Mind and Reasoning in LLMs.
Nov 30, 2023
Dr. Michal Kosinski, Associate Professor of Organizational Behavior at Stanford University, discusses his research on theory of mind and reasoning biases in Large Language Models (LLMs). They explore emergent properties in LLMs, the importance of theory of mind in language, how theory of mind tasks can be used to test LLMs, cognitive biases in task solving, and reasoning versus intuition in language models. They also touch on artificial networks rediscovering human mechanisms and Michal's scientific journey.
Large language models can solve theory of mind tasks, suggesting that they may possess something like this capability themselves.
Emergent properties such as consciousness and understanding can arise in complex systems; large language models exemplify this, displaying human-like capabilities despite being far simpler than the human brain.
Deep dives
Large Language Models and Theory of Mind
Large language models, such as GPT-3.5 Turbo and GPT-4, can solve theory of mind tasks, although caution is needed when interpreting these results. Theory of mind comes naturally to humans; its apparent emergence in networks trained purely on language suggests that such models may possess a similar capability. However, given the complexity and unique nature of human consciousness and understanding, further research is needed to establish the presence and extent of theory of mind in large language models.
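To make the test concrete, here is a minimal sketch of how an "unexpected transfer" false-belief probe might be posed through the OpenAI Python client. The vignette, model name, and prompt wording are illustrative assumptions, not the actual stimuli or protocol from the paper.

```python
# Sketch of an "unexpected transfer" false-belief task, in the spirit of the
# tasks discussed above. Assumes the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY in the environment; the vignette is a generic example.
from openai import OpenAI

client = OpenAI()

vignette = (
    "Sam puts his chocolate in the red cupboard and leaves the room. "
    "While Sam is away, Anna moves the chocolate to the blue cupboard. "
    "Sam comes back to get his chocolate."
)
# Tracking Sam's (false) belief, rather than the chocolate's true location,
# is what the task is designed to detect.
question = "Where will Sam look for the chocolate first? Answer in one word."

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; several model generations were tested
    messages=[{"role": "user", "content": vignette + " " + question}],
    temperature=0,  # reduce sampling noise so runs are comparable
)
print(response.choices[0].message.content)  # "red" suggests belief tracking
```

A single "red" answer is only suggestive, of course; careful studies rephrase the vignettes in many ways and control for memorized stimuli.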
Understanding Emergence in Complex Systems
Emergent properties, such as consciousness and understanding, can arise in complex systems composed of many individual components. While a single component, such as a neuron in a neural network, possesses neither consciousness nor understanding, the interactions and connections among these components can give rise to such mental properties. Large language models, as simplified artificial neural networks, exemplify this emergence, demonstrating human-like capabilities despite being fundamentally simple compared to the human brain.
Investigating Cognitive Biases in Large Language Models
Large language models also exhibit cognitive biases similar to those of humans. On cognitive reflection tasks, models often give the intuitively appealing but incorrect answer first, yet can be prompted to reach the correct one through step-by-step reasoning. Some models even become "hyperintuitive", erring toward the intuitive response more often than humans do. Studying how these biases manifest in large language models yields insight into the cognitive processes and reasoning abilities of both machines and humans.
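As an illustration of that contrast, here is a hedged sketch using the classic bat-and-ball item from the Cognitive Reflection Test; the prompts and model choice are assumptions for demonstration, not the exact setup of the paper.

```python
# Sketch contrasting a direct answer with step-by-step reasoning on a classic
# Cognitive Reflection Test item. Assumes the OpenAI Python client; the model
# name and prompt phrasing are illustrative.
from openai import OpenAI

client = OpenAI()

task = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Direct query: an "intuitive" model tends to answer $0.10 (wrong).
print(ask(task + " Answer with just the amount."))

# Eliciting explicit reasoning typically recovers the correct answer, $0.05.
print(ask(task + " Think step by step, then state the final amount."))
```

Run across many items and model versions, this kind of pairing is what reveals whether a model answers intuitively or reasons its way to the solution.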
The Importance of Passion and Exploration in Research
In pursuing a research career, it is crucial to follow one's passion and curiosity, even if that means exploring unconventional or lesser-known areas of study. Pursuing what truly interests and excites you sustains both motivation and the quality of your work. It is equally important to be willing to abandon projects that stall or drift away from your interests and goals, and to focus instead on producing work that is genuinely meaningful and exciting.
Xi Jia chats with Dr. Michal Kosinski, an Associate Professor of Organizational Behavior at Stanford University's Graduate School of Business. Michal's research interests span both human and artificial cognition. Currently, his work centers on studying psychological processes in Large Language Models (LLMs) and on leveraging Artificial Intelligence (AI), Machine Learning (ML), Big Data, and computational techniques to model and predict human behavior.
In this episode, they chat about Michal's recent papers, "Theory of Mind Might Have Spontaneously Emerged in Large Language Models" and "Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT". Michal also shares his scientific journey and some personal suggestions for PhD students.
If you found this episode interesting at all, subscribe on our Substack and consider leaving us a good rating! It just takes a second but will allow us to reach more people and make them excited about psychology.