Introduction to Logical Decision Theory for Computer Scientists
May 13, 2023
The podcast discusses the foundational differences in decision theories and how they affect real-life scenarios like voting and negotiation. It introduces logical decision theories and Newcomblike decision problems. The chapters explore logical decision theory and its academic status, the concept of rationality in the prisoner's dilemma game, fixing the infinite loop problem, and different calculations of expected utility.
14:28
Podcast summary created with Snipd AI
Quick takeaways
Decision theories differ in calculating expectations, leading to debates about real-life scenarios like voting and negotiating.
Logical decision theories propose controlling the logical output of decision algorithms in situations involving similar agents or small correlations.
Deep dives
Expected Utility and Decision Theories
Decision theories agree on using expected utility as the foundation of agent definitions and rational choice. However, they differ in how to calculate the expectation, which creates debates about real-life scenarios such as voting in elections or accepting a negotiated offer.
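As a concrete sketch (with made-up numbers, not taken from the episode), expected utility is a probability-weighted sum of outcome utilities; the contested part is which conditional distribution P(outcome | action) to plug in:

```python
# Expected utility: sum over outcomes of P(outcome | action) * utility(outcome).
# Decision theories agree on this formula; they disagree about how the
# conditional probability P(outcome | action) should be computed.

def expected_utility(outcome_probs, utilities):
    """outcome_probs: {outcome: P(outcome | action)}, utilities: {outcome: u}."""
    return sum(p * utilities[o] for o, p in outcome_probs.items())

# Hypothetical utilities for the outcomes of a negotiation.
utilities = {"deal": 100, "no_deal": 0}

# Two theories can assign different conditional probabilities to the same action.
theory_a = {"deal": 0.9, "no_deal": 0.1}  # e.g. treating the choice as correlated evidence
theory_b = {"deal": 0.5, "no_deal": 0.5}  # e.g. counting only the causal effect

print(expected_utility(theory_a, utilities))  # 90.0
print(expected_utility(theory_b, utilities))  # 50.0
```

The same action gets a different expected utility under each theory, which is exactly where the debates below begin.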
Prisoner's Dilemma and Rational Choice
The classical prisoner's dilemma highlights a problem for rational choice: two rational agents playing the game would both defect, producing an outcome worse for each than mutual cooperation. Philosophers and scientists have debated whether rational agents should defect or cooperate in such situations, leading to discussions of superrationality and of resolving the infinite regress of reasoning about each other's reasoning.
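The dilemma can be made concrete with a standard payoff matrix (illustrative numbers; the episode does not specify payoffs). Defection strictly dominates for each player, yet mutual defection leaves both worse off than mutual cooperation:

```python
# Standard prisoner's dilemma payoffs (row player's utility, column player's utility),
# using illustrative numbers with temptation 5 > reward 3 > punishment 1 > sucker 0.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_move):
    """Return the move maximizing the row player's payoff against a fixed opponent."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Defection dominates: it is the best response whatever the other player does...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet two defectors each get 1, while two cooperators would each get 3.
print(PAYOFFS[("D", "D")], PAYOFFS[("C", "C")])  # (1, 1) (3, 3)
```

The tension between dominance reasoning and the better mutual outcome is what drives the superrationality debate described above.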
Logical Decision Theories and Newcomblike Decision Problems
Logical decision theories propose choosing as if controlling the logical output of our decision algorithm. This approach becomes crucial in situations involving agents with similar decision algorithms, large groups of similar agents, or problems influenced by even small correlations. The debate over Newcomblike decision problems revolves around how to define the probability of an outcome conditioned on a choice within calculations of expected utility.
Decision theories differ on exactly how to calculate the expectation--the probability of an outcome, conditional on an action. This foundational difference bubbles up to real-life questions about whether to vote in elections, or accept a lowball offer at the negotiating table. When you're thinking about what happens if you don't vote in an election, should you calculate the expected outcome as if only your vote changes, or as if all the people sufficiently similar to you would also decide not to vote?

Questions like these belong to a larger class of problems, Newcomblike decision problems, in which some other agent is similar to us or reasoning about what we will do in the future. The central principle of 'logical decision theories', several families of which will be introduced, is that we ought to choose as if we are controlling the logical output of our abstract decision algorithm.

Newcomblike considerations--which might initially seem like unusual special cases--become more prominent as agents can get higher-quality information about what algorithms or policies other agents use: Public commitments, machine agents with known code, smart contracts running on Ethereum. Newcomblike considerations also become more important as we deal with agents that are very similar to one another; or with large groups of agents that are likely to contain high-similarity subgroups; or with problems where even small correlations are enough to swing the decision.

In philosophy, the debate over decision theories is seen as a debate over the principle of rational choice. Do 'rational' agents refrain from voting in elections, because their one vote is very unlikely to change anything? Do we need to go beyond 'rationality', into 'social rationality' or 'superrationality' or something along those lines, in order to describe agents that could possibly make up a functional society?
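The voting question can be sketched numerically (every number here is a hypothetical assumption, not from the episode notes). If you treat your choice as controlling only your own vote, its expected effect is tiny; if you treat it as the shared logical output of every sufficiently similar voter's decision algorithm, the expected effect scales with the size of that group:

```python
# Hypothetical election numbers: whether voting is worth it depends on whether
# your decision controls one vote or the output of a whole correlated group.

P_PIVOTAL_SINGLE = 1e-7    # assumed chance that one extra vote swings the election
VALUE_OF_WINNING = 10_000  # assumed utility to you of your side winning
COST_OF_VOTING = 1         # assumed personal cost of going to the polls

# "Only my vote changes": expected benefit is far below the cost of voting.
solo_benefit = P_PIVOTAL_SINGLE * VALUE_OF_WINNING
print(solo_benefit)                   # 0.001

# "All similar agents decide alike": a large correlated bloc is pivotal far more
# often -- say in 1% of elections (another assumption) -- so the shared decision
# comfortably covers the cost.
P_PIVOTAL_BLOC = 0.01
bloc_benefit = P_PIVOTAL_BLOC * VALUE_OF_WINNING
print(bloc_benefit > COST_OF_VOTING)  # True
```

The same act of voting looks irrational under the first calculation and clearly worthwhile under the second, which is the split the philosophical debate turns on.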