Higher-order evidence introduces uncertainty about uncertainty in belief formation. Individuals may struggle to estimate their own confidence in ambiguous situations, such as predicting whether an event will occur or evaluating their opinion on a particular matter. While people often attach error bars or rough estimates to their probabilities, explicitly modeling higher-order uncertainty, where uncertainty extends to one's confidence in one's own beliefs, remains rare yet important to consider.
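The contrast between a point credence and higher-order uncertainty can be made concrete with a toy sketch. This is an illustration with assumed numbers, not a model from the episode: a point credence is a single number, while higher-order uncertainty is a distribution over candidate credences, whose spread a point estimate throws away.

```python
# Toy sketch (illustrative numbers, not from the episode): a point
# credence vs. a distribution over candidate credences.

# Point credence: "I'm 70% confident it will rain."
point_credence = 0.7

# Higher-order uncertainty: I'm unsure what my credence *should* be,
# so I hold a distribution over candidate credences.
candidate_credences = [0.5, 0.6, 0.7, 0.8, 0.9]
weights = [0.1, 0.2, 0.4, 0.2, 0.1]  # confidence in each candidate

# The expected credence collapses to a single number...
expected = sum(c * w for c, w in zip(candidate_credences, weights))

# ...but the spread (variance) captures the higher-order uncertainty
# that the point credence discards.
variance = sum(w * (c - expected) ** 2
               for c, w in zip(candidate_credences, weights))

print(f"expected credence: {expected:.2f}")    # same as the point credence
print(f"higher-order spread: {variance:.4f}")  # extra information
```

Two agents can share the same expected credence of 0.70 while differing sharply in how uncertain they are about that credence, which is exactly the information a single probability cannot express.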
Hindsight bias, the tendency to believe past events were more predictable than they actually were, can appear rational from a third-person perspective. Observing someone predict an outcome and then learning the actual result can rationally increase confidence in their initial estimate, a logical adjustment to new information.
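The third-person version of this update can be written as a one-line application of Bayes' rule. The numbers below are assumptions chosen for illustration, not figures from the episode: seeing a prediction come true raises the probability that the predictor was reliable, and hence that the outcome was predictable all along.

```python
# Toy Bayes sketch (assumed likelihoods, not from the episode):
# observing a correct prediction rationally raises confidence
# that the forecaster was reliable.

p_reliable = 0.5               # prior: 50% chance the forecaster is reliable
p_true_given_reliable = 0.8    # reliable forecasters are right 80% of the time
p_true_given_unreliable = 0.5  # unreliable ones only 50% of the time

# Total probability the prediction comes true.
p_true = (p_true_given_reliable * p_reliable
          + p_true_given_unreliable * (1 - p_reliable))

# Bayes' rule: P(reliable | prediction came true).
posterior = p_true_given_reliable * p_reliable / p_true

print(f"P(reliable | correct prediction) = {posterior:.3f}")  # rises above 0.5
```

The posterior exceeds the prior, so judging the event "more predictable" after the fact is, in this third-person setting, just ordinary conditionalization rather than a bias.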
Distinguishing between risk and ambiguity in decision-making reveals different levels of uncertainty handling. Under risk, individuals can assign probabilities despite unknown outcomes. In ambiguous situations, however, such as estimating how many spoons someone owns, uncertainty persists not just in the belief itself but in the distribution over possible beliefs, making it difficult to assess whether one's confidence is warranted.
Navigating higher-order uncertainty beyond a certain point may exceed individuals' cognitive capabilities, given the difficulty of processing many layers of confidence about beliefs. While higher-order uncertainties can in theory extend indefinitely, practical constraints and cognitive load suggest a limit to how many levels can effectively be monitored and managed.
Discussing infinite hierarchies of belief and certainty, the podcast explores how our beliefs about beliefs can extend indefinitely. It highlights the layered nature of confidence in what we believe, ranging from basic beliefs to more meta-level reflections, and touches on our implicit attitudes toward these hierarchical beliefs and their implications for decision-making.
Exploring higher-order evidence and rationality, the podcast examines situations where higher-level evidence bears on our rational decision-making. It considers cases where conflicting directives from higher-order evidence challenge our beliefs, and how to prioritize those directives in belief revision when reliability and rationality clash.
Focusing on the principle of deference in rational belief formation, the podcast emphasizes how our beliefs are shaped by deference to others' rational standards. It raises questions about interpersonal deference principles and their dependence on the debate between uniqueness and permissivism in epistemology, highlighting the nuanced role of deference in shaping belief systems within societal contexts.
Addressing ambiguity in evidence and rational polarization, the podcast examines how unclear evidence can lead to divergent beliefs. It contrasts clear evidence that grounds confident belief with ambiguous evidence that clouds rational decision-making, linking ambiguity to the challenge of forming consistent, coherent beliefs and to rational belief divergence.
Reflecting on the concepts discussed, the podcast underscores the significance of deference, higher-order evidence, and ambiguity in shaping rational decision-making in hierarchical and uncertain contexts. The episode leaves listeners pondering the interplay between belief structures, evidence interpretation, and the rationality of our cognitive processes.
When faced with ambiguous evidence, individuals may struggle to maintain high confidence in their decisions. Ambiguity introduces uncertainty that challenges even ideally rational agents, producing deviations from expected levels of confidence. Individual circumstances, such as political leanings, can influence these deviations by shaping how evidence is interpreted. The discussion highlights the contrast between ideal rationality and human-level rationality, emphasizing the intricate relationship between ambiguity and rational decision-making.
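One toy mechanism for this kind of divergence can be simulated in a few lines. This is a simplified sketch loosely inspired by the discussion, not Professor Dorst's formal model, and every parameter is an illustrative assumption: two agents see the same balanced stream of evidence, but each finds signals against their side more ambiguous and so updates less on them.

```python
import random

random.seed(0)

def update(credence, signal_up, strength):
    """Nudge a credence up or down; strength < 1 mutes ambiguous signals."""
    shift = 0.02 * strength
    new = credence + (shift if signal_up else -shift)
    return min(max(new, 0.01), 0.99)  # keep credence in (0, 1)

# Both agents start undecided on the same question.
a, b = 0.5, 0.5

for _ in range(1000):
    signal_up = random.random() < 0.5  # evidence is balanced overall
    # Agent A treats 'down' signals as ambiguous and half-discounts them;
    # agent B does the same for 'up' signals.
    a = update(a, signal_up, 1.0 if signal_up else 0.5)
    b = update(b, signal_up, 0.5 if signal_up else 1.0)

print(f"agent A: {a:.2f}, agent B: {b:.2f}")  # the two credences drift apart
```

Even though the evidence stream is symmetric, the asymmetric handling of ambiguous signals pushes the two agents toward opposite extremes, a crude picture of how polarization can arise without either agent deliberately ignoring evidence.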
Artificial intelligence, specifically ChatGPT-style models, plays a significant role in shaping how evidence is presented and perceived, affecting belief formation and rational decision-making. Through feedback mechanisms like instruction tuning, AI systems refine their responses based on human preferences, potentially replicating biases. The conversation explores the interplay between AI-generated evidence, bias replication, and the difficulty of distinguishing rational from irrational responses, fueling ongoing debates about rationality and technology's impact on decision-making.
Episode 131
I spoke with Professor Kevin Dorst about:
* Subjective Bayesianism and epistemology foundations
* What happens when you’re uncertain about your evidence
* Why it’s rational for people to polarize on political matters
Enjoy—and let me know what you think!
Kevin is an Associate Professor in the Department of Linguistics and Philosophy at MIT. He works at the border between philosophy and social science, focusing on rationality.
Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (01:15) When do Bayesians need theorems?
* (05:52) Foundations of epistemology, metaethics, formal models, error theory
* (09:35) Extreme views and error theory, arguing for/against opposing positions
* (13:35) Changing focuses in philosophy — pragmatic pressures
* (19:00) Kevin’s goals through his research and work
* (25:10) Structural factors in coming to certain (political) beliefs
* (30:30) Acknowledging limited resources, heuristics, imperfect rationality
* (32:51) Hindsight Bias is Not a Bias
* (33:30) The argument
* (35:15) On eating cereal and symmetric properties of evidence
* (39:45) Colloquial notions of hindsight bias, time and evidential support
* (42:45) An example
* (48:02) Higher-order uncertainty
* (48:30) Explicitly modeling higher-order uncertainty
* (52:50) Another example (spoons)
* (54:55) Game theory, iterated knowledge, even higher order uncertainty
* (58:00) Uncertainty and philosophy of mind
* (1:01:20) Higher-order evidence about reliability and rationality
* (1:06:45) Being Rational and Being Wrong
* (1:09:00) Setup on calibration and overconfidence
* (1:12:30) The need for average rational credence — normative judgments about confidence and realism/anti-realism
* (1:15:25) Quasi-realism about average rational credence?
* (1:19:00) Classic epistemological paradoxes/problems — lottery paradox, epistemic luck
* (1:25:05) Deference in rational belief formation, uniqueness and permissivism
* (1:37:05) Epistemic nihilism, expanded confidence akrasia
* (1:39:50) Rational Polarization
* (1:40:00) Setup
* (1:40:55) Ambiguous evidence and confidence akrasia
* (1:46:25) Ambiguity in understanding and notions of rational belief
* (1:50:00) Claims about rational sensitivity — what stories we can tell given evidence
* (1:54:00) Evidence vs presentation of evidence
* (2:01:20) ChatGPT and the case for human irrationality
* (2:02:00) Is ChatGPT replicating human biases?
* (2:05:15) Simple instruction tuning and an alternate story
* (2:10:22) Kevin’s aspirations with his work
* (2:15:13) Outro
Links:
* Professor Dorst’s homepage and Twitter
* Papers
* Hedden: Hindsight bias is not a bias
* Higher-order evidence + (Almost) all evidence is higher-order evidence
* Being Rational and Being Wrong
* ChatGPT and human irrationality