

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes

Jul 10, 2024 • 2min
LW - Robin Hanson & Liron Shapira Debate AI X-Risk by Liron
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Robin Hanson & Liron Shapira Debate AI X-Risk, published by Liron on July 10, 2024 on LessWrong.
Robin and I just had an interesting 2-hour AI doom debate. We picked up where the Hanson-Yudkowsky Foom Debate left off in 2008, revisiting key arguments in the light of recent AI advances.
My position is similar to Eliezer's: P(doom) on the order of 50%.
Robin's position remains shockingly different: P(doom) < 1%.
I think we managed to illuminate some of our cruxes of disagreement, though by no means all. Let us know your thoughts and feedback!
Topics
AI timelines
The "outside view" of economic growth trends
Future economic doubling times
The role of culture in human intelligence
Lessons from human evolution and brain size
Intelligence increase gradient near human level
Bostrom's Vulnerable World hypothesis
The optimization-power view
Feasibility of AI alignment
Will AI be "above the law" relative to humans?
Where To Watch/Listen/Read
YouTube video
Podcast audio
Transcript
About Doom Debates
My podcast, Doom Debates, hosts high-quality debates between people who don't see eye-to-eye on the urgent issue of AI extinction risk.
All kinds of guests are welcome, from luminaries to curious randos. If you're interested in being part of an episode, DM me here or contact me via Twitter or email.
If you're interested in the content, please subscribe and share it to help grow its reach.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 10, 2024 • 21min
LW - Causal Graphs of GPT-2-Small's Residual Stream by David Udell
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Causal Graphs of GPT-2-Small's Residual Stream, published by David Udell on July 10, 2024 on LessWrong.
Thanks to the many people I've chatted with this about over the past many months. And special thanks to Cunningham et al., Marks et al., Joseph Bloom, Trenton Bricken, Adrià Garriga-Alonso, and Johnny Lin, for crucial research artefacts and/or feedback.
Codebase: sparse_circuit_discovery
TL;DR: The residual stream in GPT-2-small, expanded with sparse autoencoders and systematically ablated, looks like the working memory of a forward pass. A few high-magnitude features causally propagate themselves through the model during inference, and these features are interpretable. We can see where in the forward pass, due to which transformer layer, those propagating features are written in and/or scrubbed out.
Introduction
What is GPT-2-small thinking about during an arbitrary forward pass?
I've been trying to isolate legible model circuits using sparse autoencoders. I was inspired by the following example, from the end of Cunningham et al. (2023):
I wanted to see whether naturalistic transformers[1] are generally this interpretable as circuits under sparse autoencoding. If this level of interpretability just abounds, then high-quality LLM mindreading & mindcontrol is in hand! If not, could I show how far we are from that kind of mindreading technology?
Related Work
As mentioned, I was led into this project by Cunningham et al. (2023), which established key early results about sparse autoencoding for LLM interpretability.
While I was working on this, Marks et al. (2024) developed an algorithm approximating the same causal graphs in constant time. Their result is what would make this scalable and squelch down the iteration loop on interpreting forward passes.
Methodology
A sparse autoencoder is a linear map whose shape is (autoencoder_dim, model_dim). I install sparse autoencoders at all of GPT-2-small's residual streams (one per model layer, 12 in total). Each sits at a pre_resid bottleneck that all prior information in that forward pass routes through.[2]
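For concreteness, here is a minimal sketch of the kind of sparse autoencoder described here, written against plain PyTorch. The class name, dimensions, and expansion factor are illustrative assumptions, not the actual sparse_circuit_discovery implementation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal ReLU sparse autoencoder over one residual-stream activation.

    The encoder maps the model_dim residual vector into a much wider
    autoencoder_dim feature space; the decoder maps it back.
    """

    def __init__(self, model_dim: int = 768, autoencoder_dim: int = 24576):
        super().__init__()
        self.encoder = nn.Linear(model_dim, autoencoder_dim)  # resid -> sparse features
        self.decoder = nn.Linear(autoencoder_dim, model_dim)  # sparse features -> resid

    def encode(self, resid: torch.Tensor) -> torch.Tensor:
        # Non-negative, mostly-zero feature activations.
        return torch.relu(self.encoder(resid))

    def forward(self, resid: torch.Tensor) -> torch.Tensor:
        # Reconstruct the residual stream from its feature decomposition.
        return self.decoder(self.encode(resid))
```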
I fix a context, and choose one forward pass of interest in that context. In every autoencoder, I go through and independently ablate out each of the dimensions in autoencoder_dim, one at a time, during a "corrupted" forward pass. For every corrupted forward pass with a layer N sparse autoencoder dimension ablated, I cache effects at the layer N+1 autoencoder. Every vector of cached effects can then be reduced to a set of edges in a causal graph. Each edge has a signed scalar weight and connects a node in the layer N autoencoder to a node in the layer N+1 autoencoder.
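A rough sketch of that ablation loop as I read it, using TransformerLens-style hook names. The function, its hook names, and the decision to ignore the SAE reconstruction error term are my reconstruction, not the actual codebase.

```python
import torch

@torch.no_grad()
def ablation_effects(model, saes, tokens, layer: int, position: int):
    """Zero out each layer-N SAE feature in turn during a corrupted forward
    pass and cache the change this causes in the layer-(N+1) SAE features.

    Assumes `model` is a TransformerLens-style HookedTransformer and `saes[n]`
    is the autoencoder installed at layer n's pre-residual bottleneck.
    """
    sae_n, sae_next = saes[layer], saes[layer + 1]
    hook_n = f"blocks.{layer}.hook_resid_pre"
    hook_next = f"blocks.{layer + 1}.hook_resid_pre"

    # Clean baseline: layer N+1 features with nothing ablated.
    _, clean_cache = model.run_with_cache(tokens)
    clean_next = sae_next.encode(clean_cache[hook_next][0, position])

    effects = {}
    for feature in range(sae_n.encoder.out_features):

        def ablate_hook(resid, hook, feature=feature):
            feats = sae_n.encode(resid[0, position])
            feats[feature] = 0.0                       # ablate one dimension
            resid[0, position] = sae_n.decoder(feats)  # write the edited resid back
            return resid

        with model.hooks(fwd_hooks=[(hook_n, ablate_hook)]):
            _, corrupted_cache = model.run_with_cache(tokens)
        corrupted_next = sae_next.encode(corrupted_cache[hook_next][0, position])

        # Signed effect of ablating `feature` on every layer N+1 feature.
        # (This brute-force loop is the cost Marks et al. (2024) approximate away.)
        effects[feature] = clean_next - corrupted_next
    return effects
```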
I keep only the top-k magnitude edges from each set of effects N→N+1, where k is the number of edges kept. Then, I keep only the set of edges that form paths with lengths >1.[3]
The output of that is a top-k causal graph, showing largest-magnitude internal causal structure in GPT-2-small's residual stream during the forward pass you fixed.
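And a sketch of assembling those cached effects into a top-k causal graph: keep the largest-magnitude edges between each pair of adjacent layers, then prune edges that do not sit on any path of length greater than one. Again, this is a reconstruction from the description above, not the actual implementation.

```python
import heapq

def top_k_causal_graph(effects_by_layer, k: int = 10):
    """effects_by_layer[n][i] is the sequence of signed effects that ablating
    feature i of the layer-n SAE has on each feature of the layer-(n+1) SAE.

    Returns edges ((n, i), (n + 1, j), weight): the k largest-magnitude edges
    per adjacent-layer pair, pruned to edges that lie on paths of length > 1.
    """
    edges = []
    for n, effects in effects_by_layer.items():
        layer_edges = [
            ((n, i), (n + 1, j), float(w))
            for i, row in effects.items()
            for j, w in enumerate(row)
        ]
        edges.extend(heapq.nlargest(k, layer_edges, key=lambda e: abs(e[2])))

    # Keep an edge only if it extends a longer path: something feeds into its
    # source, or its target feeds something else. (One pruning pass; iterate
    # to a fixed point for a stricter graph.)
    sources = {src for src, _, _ in edges}
    targets = {dst for _, dst, _ in edges}
    return [(src, dst, w) for src, dst, w in edges if src in targets or dst in sources]
```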
Causal Graphs Key
Consider the causal graph below:
Each box with a bolded label like 5.10603 is a dimension in a sparse autoencoder: 5 is the layer number, while 10603 is its column index in that autoencoder. You can always cross-reference more comprehensive interpretability data for any given dimension on Neuronpedia using those two indices.
Below the dimension indices, the blue-to-white highlighted contexts show how strongly a dimension activated following each of the tokens in that context (bluer means stronger).
At the bottom of the box, blue or red token boxes show the tokens most promoted (blue) and most suppressed (red) by that dimension.
Arrows between boxes plot the causal effects of an ablation on dimensions of the next layer's autoencoder. A red arrow means ablating dimension 1.x will also suppress downstream dimension 2.y. A blue arrow means ...

Jul 9, 2024 • 29min
LW - Medical Roundup #3 by Zvi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Medical Roundup #3, published by Zvi on July 9, 2024 on LessWrong.
This time around, we cover the Hanson/Alexander debates on the value of medicine, and otherwise we mostly have good news.
Technology Advances
Regeneron administers a single shot in a genetically deaf child's ear, and they can hear after a few months, n=2 so far.
Great news: An mRNA vaccine in early human clinical trials reprograms the immune system to attack glioblastoma, the most aggressive and lethal brain tumor. It will now proceed to Phase I. In a saner world, people would be able to try this now.
More great news, we have a cancer vaccine trial in the UK.
And we're testing personalized mRNA BioNTech cancer vaccines too.
US paying Moderna $176 million to develop a pandemic vaccine against bird flu.
We also have this claim that Lorlatinib jumps cancer PFS rates from 8% to 60%.
The GLP-1 Revolution
Early results from a study show the GLP-1 drug liraglutide could reduce cravings in people with opioid use disorder by 40% compared with a placebo. This seems like a clear case where no reasonable person would wait for more than we already have? If there was someone I cared about who had an opioid problem I would do what it took to get them on a GLP-1 drug.
Rumblings that GLP-1 drugs might improve fertility?
Rumblings that GLP-1 drugs could reduce heart attack, stroke and death even if you don't lose weight, according to a new analysis? Survey says 6% of Americans might already be on them. Weight loss in studies continues for more than a year in a majority of patients, sustained up to four years, which is what they studied so far.
The case that GLP-1s can be used against all addictions at scale. It gives users a sense of control which reduces addictive behaviors across the board, including acting as a 'vaccine' against developing new addictions. It can be additive to existing treatments. More alcoholics (as an example) already take GLP-1s than existing indicated anti-addiction medications, and a study showed 50%-56% reduction in risk of new or recurring alcohol addictions, another showed 30%-50% reduction for cannabis.
How to cover this? Sigh. I do appreciate the especially clean example below.
Matthew Yglesias: Conservatives more than liberals will see the systematic negativity bias at work in coverage of GLP-agonists.
Less likely to admit that this same dynamic colors everything including coverage of crime and the economy.
The situation is that there is a new drug that is helping people without hurting anyone, so they write an article about how it is increasing 'health disparities.'
The point is that they are writing similar things for everything else, too.
The Free Press's Bari Weiss and Johann Hari do a second round of 'Ozempic good or bad.' It takes a while for Hari to get to actual potential downsides.
The first is a claimed (but highly disputed) 50%-75% increased risk of thyroid cancer. That's not great, but clearly overwhelmed by reduced risks elsewhere.
The second is the worry of what else it is doing to your brain. Others have noticed it might be actively great here, giving people more impulse control, helping with things like smoking or gambling. Hari worries it might hurt his motivation for writing or sex. That seems like the kind of thing one can measure, both in general and in yourself. If people were losing motivation to do work, and this hurt productivity, we would know.
The main objection seems to be that obesity is a moral failure of our civilization and ourselves, so it would be wrong to fix it with a pill rather than correct the underlying issues like processed foods and lack of exercise. Why not be like Japan?
To which the most obvious response is that it is way too late for America to take that path. That does not mean that people should suffer. And if we find a way to fix the issues ...

Jul 9, 2024 • 6min
LW - Paper Summary: The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers by AI Impacts
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Summary: The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers, published by AI Impacts on July 9, 2024 on LessWrong.
by Anne Marthe van der Bles, Sander van der Linden, Alexandra L. J. Freeman, and David J. Spiegelhalter. (2020)
https://www.pnas.org/doi/pdf/10.1073/pnas.1913678117.
Summary: Numerically expressing uncertainty when talking to the public is fine. It causes people to be less confident in the number itself (as it should), but does not cause people to lose trust in the source of that number.
Uncertainty is inherent to our knowledge about the state of the world yet often not communicated alongside scientific facts and numbers. In the "posttruth" era where facts are increasingly contested, a common assumption is that communicating uncertainty will reduce public trust. However, a lack of systematic research makes it difficult to evaluate such claims.
Within many specialized communities, there are norms which encourage people to state numerical uncertainty when reporting a number. This is not often done when speaking to the public. The public might not understand what the uncertainty means, or they might treat it as an admission of failure. Journalistic norms typically leave the uncertainty out.
But are these concerns actually justified? This can be checked empirically. Just because a potential bias is conceivable does not imply that it is a significant problem for many people. This paper does the work of actually checking if these concerns are valid.
Van der Bles et al. ran five surveys in the UK with a total n = 5,780. A brief description of their methods can be found in the appendix below.
Respondents' trust in the numbers varied with political ideology, but how they reacted to the uncertainty did not.
People were told the number either without mentioning uncertainty (as a control), with a numerical range, or with a verbal statement that uncertainty exists for these numbers. The study did not investigate stating p-values for beliefs. Exact statements used in the survey can be seen in Table 1, in the appendix.
The best summary of their data is in their Figure 5, which presents results from surveys 1-4. The fifth survey had smaller effect sizes, so none of the shifts in trust were significant.
Expressing uncertainty made it more likely that people perceived uncertainty in the number (A). This is good. When the numbers are uncertain, science communicators should want people to believe that they are uncertain. Interestingly, verbally reminding people of uncertainty resulted in higher perceived uncertainty than stating a numerical range, which could mean that people overestimate the uncertainty when verbally reminded of it.
The surveys distinguished between trust in the number itself (B) and trust in the source (C). Numerically expressing uncertainty resulted in a small decrease in the trust of that number. Verbally expressing uncertainty resulted in a larger decrease in the trust of that number. Numerically expressing uncertainty resulted in no significant change in the trust of the source. Verbally expressing uncertainty resulted in a small decrease in the trust of the source.
The consequences of expressing numerical uncertainty are what I would have hoped: people trust the number a bit less than if they hadn't thought about uncertainty at all, but don't think that this reflects badly on the source of the information.
Centuries of human thinking about uncertainty among many leaders, journalists, scientists, and policymakers boil down to a simple and powerful intuition: "No one likes uncertainty." It is therefore often assumed that communicating uncertainty transparently will decrease public trust in science. In this program of research, we set out to investigate whether such claims have any empirical ...

Jul 9, 2024 • 11min
LW - What and Why: Developmental Interpretability of Reinforcement Learning by Garrett Baker
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What and Why: Developmental Interpretability of Reinforcement Learning, published by Garrett Baker on July 9, 2024 on LessWrong.
Introduction
I happen to be in that happy stage in the research cycle where I ask for money so I can continue to work on things I think are important. Part of that means justifying what I want to work on to the satisfaction of the people who provide that money.
This presents a good opportunity to say what I plan to work on in a more layman-friendly way, for the benefit of LessWrong, potential collaborators, interested researchers, and funders who want to read the fun version of my project proposal.
It also provides the opportunity for people who are very pessimistic about the chances I end up doing anything useful by pursuing this to have their say. So if you read this (or skim it), and have critiques (or just recommendations), I'd love to hear them! Publicly or privately.
So without further ado, in this post I will be discussing & justifying three aspects of what I'm working on, and my reasons for believing there are gaps in the literature in the intersection of these subjects that are relevant for AI alignment. These are:
1. Reinforcement learning
2. Developmental Interpretability
3. Values
Culminating in: Developmental interpretability of values in reinforcement learning.
Here are brief summaries of each of the sections:
1. Why study reinforcement learning?
1. Imposed-from-without or in-context reinforcement learning seems a likely path toward agentic AIs
2. The "data wall" means active-learning or self-training will get more important over time
3. The usual AI risk arguments have fewer ways to fail under RL with mostly outcome-based rewards than under supervised learning plus RL with mostly process-based rewards (RLHF).
2. Why study developmental interpretability?
1. Causal understanding of the training process allows us to produce reward structure or environmental distribution interventions
2. Alternative & complementary tools to mechanistic interpretability
3. Connections with singular learning theory
3. Why study values?
1. The ultimate question of alignment is how can we make AI values compatible with human values, yet this is relatively understudied.
4. Where are the gaps?
1. Many experiments
2. Many theories
3. Few experiments testing theories or theories explaining experiments
Reinforcement learning
Agentic AIs vs Tool AIs
All generally capable adaptive systems are ruled by a general, ground-truth, but slow outer optimization process which reduces incoherency and continuously selects for systems which achieve outcomes in the world. Examples include evolution, business, cultural selection, and to a great extent human brains.
That is, except for LLMs. Most of the feedback LLMs receive is supervised, unaffected by the particular actions the LLM takes, and process-based (RLHF-like): we reward the LLM according to how useful an action looks, rather than against a ground truth for how well that action (or sequence of actions) actually achieved its goal.
Now I don't want to make the claim that this aspect of how we train LLMs is clearly a fault of theirs, or that it in some way limits the problem-solving abilities they can have. And I do think it possible we see in-context ground-truth optimization processes instantiated as a result of increased scaling, in the same way we see in-context learning.
I do however want to make the claim that this current paradigm of mostly process-based supervision, if it continues and doesn't itself produce ground-truth-based optimization, makes me optimistic about AI going well.
That is, if this lack of general ground-truth optimization continues, we end up with a cached bundle of not very agentic (compared to AIXI) tool AIs with limited search or bootstrapping capabilities.
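To make the process-based vs. outcome-based distinction concrete, here is a toy framing of the two reward signals; this is my sketch, not the author's formalism.

```python
from typing import Callable

def process_based_reward(action: str, judge: Callable[[str], float]) -> float:
    """RLHF-like: reward tracks how useful the action *looks* to a rater or
    reward model, with no feedback from what the action actually causes."""
    return judge(action)

def outcome_based_reward(action: str, run_in_world: Callable[[str], bool]) -> float:
    """Ground-truth-like: execute the action and reward whether the goal was
    actually achieved, however good or bad the action looked along the way."""
    return 1.0 if run_in_world(action) else 0.0
```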
Of course,...

Jul 9, 2024 • 52sec
AF - UC Berkeley course on LLMs and ML Safety by Dan H
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: UC Berkeley course on LLMs and ML Safety, published by Dan H on July 9, 2024 on The AI Alignment Forum.
The UC Berkeley course I co-taught now has lecture videos available: https://www.youtube.com/playlist?list=PLJ66BAXN6D8H_gRQJGjmbnS5qCWoxJNfe
Course site: Understanding LLMs: Foundations and Safety
Unrelatedly, a more conceptual AI safety course has its content available at https://www.aisafetybook.com/
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Jul 9, 2024 • 7min
EA - Resilience to Nuclear & Volcanic Winter by Stan Pinsent
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Resilience to Nuclear & Volcanic Winter, published by Stan Pinsent on July 9, 2024 on The Effective Altruism Forum.
This is a summary. The full CEARCH report can be found here.
Key points
Policy advocacy, targeted at a few key countries, is the most promising way to increase resilience to global agricultural crises. Advocacy should focus on increasing the degree to which governments respond with effective food distribution measures, continued trade, and adaptations to the agricultural sector.
We estimate that an advocacy campaign costing ~$1 million would avert 6,000 deaths in expectation. Incorporating the full mortality, morbidity and economic effects, the intervention would provide a marginal expected value of 24,000 DALYs per $100,000. This is around 30x the cost-effectiveness of a typical GiveWell-recommended charity.
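Unpacking those figures with only the numbers quoted above (my arithmetic, not an additional claim from the report):

```latex
\frac{\$100{,}000}{24{,}000\ \text{DALYs}} \approx \$4.2\ \text{per DALY averted},
\qquad
\frac{\$1{,}000{,}000}{6{,}000\ \text{deaths}} \approx \$167\ \text{per expected death averted}.
```

At the stated 30x multiplier, the implied GiveWell-style benchmark is roughly 30 × $4.2 ≈ $125 per DALY.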
Compared to other interventions addressing Global Catastrophic Risk (GCR), the evidence is unusually robust:
We know that the threat is real: volcanic cooling is confirmed by the historical and geological record.
We know this is neglected: current food resilience policy focuses on protecting farmers and consumers from price changes, regional agricultural shortfalls or from small global shocks. There is very little being done to prepare for a significant global agricultural shortfall.
We are uncertain about the effect size: we have significant uncertainty about the extent to which governments and the international community would step up to the challenge of a global agricultural shortfall. There is little evidence on the scale of the effect that a policy breakthrough would have on the human response.
GCR policy experts were broadly optimistic about the value of further work in this area. On average, they estimated that a two-person, five-year advocacy effort would have a 25% probability of triggering a significant policy breakthrough in one country. Experts emphasized the importance of multi-year funding to enable policy advocates to build strategic relationships.
Some experts suggested that food resilience is a better framing than ASRS (cooling catastrophe) resilience for policies that protect against global agricultural shortfall.
We identify two main sources of downside risk. (1) Increasing resilience to nuclear winter could reduce countries' reluctance to use nuclear weapons. (2) Nuclear winter resilience efforts could be seen by other nuclear-armed states as preparation for war, thereby increasing tensions. However, these risks are unlikely to apply to broader food resilience efforts.
Executive Summary
This report addresses Abrupt Sunlight Reduction Scenarios (ASRSs) - catastrophic global cooling events triggered by large volcanic eruptions or nuclear conflicts - and interventions that may increase global resilience to such catastrophes. Cooling catastrophes can severely disrupt agricultural production worldwide, potentially leading to devastating famines.
We evaluate the probability of such events, model their expected impacts under various response scenarios, and identify the most promising interventions to increase global resilience.
Volcanoes are the main source of risk according to our model, although we expect nuclear cooling events to be more damaging.
We estimate that the annual probability of an ASRS causing at least 1°C of average cooling over land is around 1 in 400, or a 20% per-century risk. Most of the threat comes from large volcanic eruptions injecting sun-blocking particles into the upper atmosphere. While the probability of a severe "nuclear winter" scenario is lower, such an event could potentially comprise a substantial portion of the expected overall burden.
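As a quick check of the annual-to-per-century conversion (my arithmetic, assuming independent years):

```latex
1 - \left(1 - \tfrac{1}{400}\right)^{100} \approx 1 - 0.9975^{100} \approx 0.22,
```

which is consistent with the roughly 20% per-century figure quoted.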
This is due to the compounding effects of nuclear conflict undermining the international cooperation and social stability required for an effective humanitarian response...

Jul 9, 2024 • 1min
EA - AMA: Beast Philanthropy's Darren Margolias by Beast Philanthropy
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA: Beast Philanthropy's Darren Margolias, published by Beast Philanthropy on July 9, 2024 on The Effective Altruism Forum.
From Darren Margolias: I'm the Executive Director of Beast Philanthropy, the charity founded by the world's most popular YouTuber MrBeast.
We recently collaborated with GiveDirectly on the video below. You can read background on the project from our LinkedIn here and here (plus GiveDirectly's blog).
On Thursday, July 18th I'll be recording a video AMA with CEA's Emma Richter. Her questions will come from you, and we'll post the video and transcript here afterwards.
Please post your questions as comments to this post and upvote the questions you'd like me to answer most. Emma and I will do our best to get to as many as we can.
Feel free to ask anything you'd like to know about Beast Philanthropy's process, projects, and goals!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Jul 9, 2024 • 4min
AF - Consent across power differentials by Ramana Kumar
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consent across power differentials, published by Ramana Kumar on July 9, 2024 on The AI Alignment Forum.
I'd like to put forward another description of a basic issue that's been around for a while. I don't know if there's been significant progress on a solution, and would be happy to be pointed to any such progress. I've opted to go for a relatively rough and quick post that doesn't dive too hard into the details, to avoid losing the thought at all. I may be up for exploring details further in comments or follow-ups.
The Question: How do you respect the wishes (or preferences) of a subject over whom you have a lot of control?
The core problem: any indicator/requirement/metric about respecting their wishes is one you can manipulate (even inadvertently).
For example, think about trying to respect the preferences of the child you're babysitting when you simply know from experience what they will notice, how they will feel, what they will say they want, and what they will do, when you put them in one environment versus another (where the environment could be as small as what you present to them in your behaviour). Is there any way to provide them a way to meaningfully choose what happens?
We could think about this in a one-shot case where there's a round of information gathering and coming to agreement on terms, and then an action is taken. But I think this is a simplification too far, since a lot of what goes into respecting the subject/beneficiary is giving them space for recourse, space to change their mind, space to realise things that were not apparent with the resources for anticipation they had available during the first phase.
So let's focus more on the case where there's an ongoing situation where one entity has a lot of power over another but nevertheless wants to secure their consent for whatever actually happens, in a meaningful sense.
Lots of cases where this happens in real life, mostly where the powerful entity has a lot of their own agenda and doesn't care a huge amount about the subject (they may care a lot, but maybe not as much as they do about their other goals):
rape (the perhaps central example invoked by "consent")
advertising
representative democracy
colonisation ("civilising" as doing what's good for them)
Our intuitions may be mostly shaped by that kind of situation, where there's a strong need to defend against self-interest, corruption, or intention to gain and abuse power.
But I think there's a hard core of a problem left even if we remove the malicious or somewhat ill-intentioned features from the powerful entity. So let's focus: what does it mean to fully commit to respecting someone's autonomy, as a matter of genuine love or a strong sense of morality or something along those lines, even when you have a huge amount of power over them?
What forms power can take:
brute force, resources that give you physical power
support from others (that make you - your interests - a larger entity)
intelligence: the ability to predict and strategise in more detail, over longer time horizons, and faster, than the subject you are trying to engage with
speed - kinda the same as intelligence, but maybe worth pulling out as its own thing
knowledge, experience - similar to intelligence, but maybe in this case emphasising access to private relevant information. Think also of information asymmetry in negotiation.
Examples where this shows up in real life already (and where people seem to mostly suck at it, maybe due to not even trying, but there are some attempts to take it seriously: see work by Donaldson and Kymlicka):
adaptive preferences
children
animals (pets, domesticated, and otherwise)
disabled people, esp. with cognitive disabilities
oppressed/minoritised people and peoples
future generations and other non-existent peoples
It may be that the only true soluti...

Jul 9, 2024 • 16min
LW - Poker is a bad game for teaching epistemics. Figgie is a better one. by rossry
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Poker is a bad game for teaching epistemics. Figgie is a better one., published by rossry on July 9, 2024 on LessWrong.
Editor's note: Somewhat after I posted this on my own blog, Max Chiswick cornered me at LessOnline / Manifest and gave me a whole new perspective on this topic. I now believe that there is a way to use poker to sharpen epistemics that works dramatically better than anything I had been considering. I hope to write it up - together with Max - when I have time.
Anyway, I'm still happy to keep this post around as a record of my first thoughts on the matter, and because it's better than nothing in the time before Max and I get around to writing up our joint second thoughts.
As an epilogue to this story, Max and I are now running a beta test for a course on making AIs to play poker and other games. The course will be a synthesis of our respective theories of pedagogy re: games, and you can read more here or in the comments. The beta will run July 15-August 15, in-person in SF, and will be free but with limited signups.
Some trading firms are driven by good decisions made by humans. (Some aren't, but we can set those aside. This post is about the ones that are.) Humans don't make better-than-average-quality decisions by default, so the better class of intellectually-driven quantitative trading firm realizes that they are in the business of training humans to make better decisions.
(The second-best class of firm contents themselves with merely selecting talent.) Some firms, famously, use poker to teach traders about decision making under uncertainty.
First, the case for poker-as-educational-tool: You have to make decisions. (Goodbye, Candy Land.) You have to make them under uncertainty. (Goodbye, chess.) If you want to win against smart competition, you have to reverse-engineer the state of your competitors' uncertainty from their decisions, in order to make better decisions yourself. (Goodbye, blackjack.)
It's the last of these that is the rarest among games. In Camel Up - which is a great game for sharpening certain skills - you place bets and make trades on the outcome of a Candy Land-style camel race. Whether you should take one coin for sure or risk one to win five if the red camel holds the lead for another round... Turn after turn, you have to make these calculations and decisions under uncertainty.
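One plausible reading of that calculation, with illustrative payoffs (the bet pays 5 if the red camel holds the lead, costs 1 if it doesn't, versus a guaranteed 1 coin):

```latex
\mathbb{E}[\text{bet}] = 5p - (1 - p) = 6p - 1 \;>\; 1 = \mathbb{E}[\text{sure coin}]
\;\iff\; p > \tfrac{1}{3},
```

so the bet is worth taking exactly when you judge the red camel's chance of holding the lead to be better than one in three.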
But there's no meaningful edge in scrutinizing your opponent's decision to pick the red camel. If they were right about the probabilities, you shouldn't have expected differently. And if they're wrong, it means they made a mistake, not that they know a secret about red camels.
Poker is different. Your decision is rarely dictated by the probabilities alone. Even if you draw the worst possible card, you can win if your opponent has been bluffing and has even worse - or if your next action convinces them that they should fold a hand that would have beaten yours. If you only play the odds that you see, and not the odds you see your opponent showing you, you will on average lose.
So as you grind and grind at poker, first you learn probabilities and how they should affect your decisions, then you learn to see what others' decisions imply about what they see, and then you can work on changing your decisions to avoid leaking what you know to the other players that are watching you. Or so I'm told. I would not describe myself as a particularly skilled poker player. I certainly have not ground and ground and ground.
Here's the thing, though: If you are a trading firm and you want to teach traders about making decisions under uncertainty, it's not enough that poker teaches it. Nor is it enough that poker, if you grind for thousands of hours, can teach quite a lot of it. A quantitative trading firm is primarily a socialist collective run for the benefit of its workers, but it...


