Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes
Apr 10, 2021 • 1h 32min
Your Book Review: Order Without Law
[This is the first of many finalists in the book review contest. It's not by me - it's by an ACX reader who will remain anonymous until after voting is done, to prevent their identity from influencing your decisions. I'll be posting about two of these a week for the next few months. When you've read all of them, I'll ask you to vote for your favorite, so remember which ones you liked. The broken footnotes in this one are either my fault or Substack's, so please don't hold it against this entry. Oh, and I promise not all of them are this long. - SA] Shasta County: Shasta County, northern California, is a rural area home to many cattle ranchers.1 It has an unusual legal feature: its rangeland can be designated as either open or closed. (Most places in the country pick one or the other.) The county board of supervisors has the power to close range, but not to open it. When a range closure petition is circulated, the cattlemen have strong opinions about it. They like their range open. If you ask why, they'll tell you it's because of what happens if a motorist hits one of their herd. In open range, the driver should have been more careful; "the motorist buys the cow". In closed range, the rancher should have been sure to fence his animals in; he compensates the motorist.
Apr 9, 2021 • 16min
Metis And Bodybuilders
https://astralcodexten.substack.com/p/metis-and-bodybuilders Fitness researcher Menno Henselmans writes about optimal program design for bodybuilders. His thesis is that peer-reviewed studies prove bodybuilder lore is wrong in lots of places. For example: Traditional bro wisdom holds that short rest periods of 1-3 minutes are optimal for bodybuilding. There never seemed to be much of a formal argument for why other than that people traditionally trained this way. The real reason was probably that bodybuilders chased the pump and burn they get from shorter rest periods. Later the idea of chasing the pump was rationalized into the theory of metabolic stress. Yet there wasn't a single study to support that shorter rest periods actually benefit muscle growth.
Apr 7, 2021 • 15min
Two Unexpected Multiple Hypothesis Testing Problems
https://astralcodexten.substack.com/p/two-unexpected-multiple-hypothesis I. Start with Lior Pachter's Mathematical analysis of "mathematical analysis of a vitamin D COVID-19 trial". The story so far: some people in Cordoba did a randomized controlled trial of Vitamin D for coronavirus. The people who got the Vitamin D seemed to do much better than those who didn't. But there was some controversy over the randomization, which looked like this [table of baseline characteristics omitted]. Remember, we want to randomly create two groups of similar people, then give Vitamin D to one group and see what happens. If the groups are different to start with, then we won't be able to tell if the Vitamin D did anything or if it was just the pre-existing difference. In this case, they checked for fifteen important ways that the groups could be different, and found they were only significantly different on one - blood pressure. Jungreis and Kellis, two scientists who support this study, say that shouldn't bother us too much. They point out that because of multiple testing (we checked fifteen hypotheses), we need a stricter significance threshold before we care about significance in any of them, and once we apply this correction, the blood pressure result stops being significant. Pachter challenges their math - but even aside from that, come on! We found that there was actually a big difference between these groups! You can play around with statistics and show that ignoring this difference meets certain formal criteria for statistical good practice. But the difference is still there and it's real. For all we know it could be driving the Vitamin D results.
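The correction Jungreis and Kellis invoke is the standard multiple-comparisons move; here is a minimal sketch of one common version of it (Bonferroni), using made-up p-values rather than the Cordoba trial's actual numbers:

```python
# Bonferroni-style correction across fifteen baseline comparisons.
# The p-values below are hypothetical placeholders, not the trial's data.
alpha = 0.05
n_tests = 15
p_values = {"blood pressure": 0.02, "age": 0.40, "prior hypertension": 0.65}

corrected_alpha = alpha / n_tests  # roughly 0.0033
for variable, p in p_values.items():
    print(f"{variable}: p={p:.2f}, "
          f"significant at 0.05: {p < alpha}, "
          f"significant after correction: {p < corrected_alpha}")
```

A nominally significant baseline difference (p around 0.02) stops clearing the bar once the threshold is divided by fifteen, which is exactly the move Scott is objecting to: the correction changes the label, not the underlying imbalance between the groups.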
Apr 7, 2021 • 17min
2020 Predictions: Calibration Results
https://astralcodexten.substack.com/p/2020-predictions-calibration-results At the beginning of every year, I make predictions. At the end of every year, I score them (this year I'm very late). Here are 2014, 2015, 2016, 2017, 2018, and 2019. And here are the predictions I made for 2020. Some predictions are redacted because they involve my private life or the lives of people close to me. Usually I use strikethrough for things that didn't happen, but since Substack doesn't let me strikethrough text or change its color or do anything interesting, I've had to turn the ones that didn't happen into links. Italicized predictions are getting thrown out because they were confusing or conditional on something that didn't happen; I can't decide if they're true or not. All of these judgments were as of December 31, 2020, not as of now. (Remember, a link means something that didn't happen, not something I was wrong about. We have a debate every year over whether 50% predictions are meaningful in this paradigm; feel free to continue it.)
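The scoring itself is roughly the kind of calibration check described here: bucket predictions by stated confidence and compare each bucket to its observed hit rate. A minimal sketch with invented placeholder predictions, not Scott's actual list:

```python
# Group predictions by stated confidence and compare to the hit rate.
# Perfect calibration means each bucket's hit rate matches its
# confidence level. The entries below are invented for illustration.
from collections import defaultdict

predictions = [  # (stated confidence, did it come true?)
    (0.5, True), (0.5, False), (0.6, True), (0.6, True),
    (0.7, False), (0.8, True), (0.9, True), (0.95, True),
]

buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"{confidence:.0%} bucket: {len(outcomes)} predictions, {hit_rate:.0%} came true")
```

The perennial argument about 50% predictions comes out of this framing: the phrasing of a 50% prediction is arbitrary, so the hit rate in that bucket can be pushed around just by flipping which way each statement is worded.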
Apr 4, 2021 • 15min
Ambidexterity And Cognitive Closure
https://astralcodexten.substack.com/p/ambidexterity-and-cognitive-closure Back in a more superstitious time, people believed left-handers were in league with the Devil. Now, in this age of Science, we realize that was unfair. Yes, left-handers are statistically more likely to be in league with the Devil. But so are right-handers! It's only the ambidextrous who are truly pure! At least this is the conclusion I take from Lyle & Grillo (2020), Why Are Consistently-Handed Individuals More Authoritarian: The Role Of Need For Cognitive Closure. It discusses studies finding that consistently-handed people (ie people who are not ambidextrous) are more likely to support authoritarian governments, demonstrate prejudice against "immigrants, homosexuals, Muslims, Mexicans, atheists, and liberals", and support violations of the Geneva Conventions in hypothetical scenarios. The authors link this to a construct called "need for cognitive closure", ie being very sure you are right and unwilling to consider alternate perspectives. They argue that something about the interaction of brain hemispheres regulates cognitive closure, and that ambidextrous people, with their weak hemispheric dominance, get less of it. They study 235 undergraduates and find results that generally confirm this hypothesis: their ambidextrous subjects support less authoritarian and racist beliefs, and this is partly…
Apr 3, 2021 • 35min
[Classic] The Parable Of The Talents
https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/ [Content note: scrupulosity and self-esteem triggers, IQ, brief discussion of weight and dieting. Not good for growth mindset.] I. I sometimes blog about research into IQ and human intelligence. I think most readers of this blog already know IQ is 50% to 80% heritable, and that it's so important for intellectual pursuits that eminent scientists in some fields have average IQs around 150 to 160. Since IQ this high only appears in 1/10,000 people or so, it beggars coincidence to believe this represents anything but a very strong filter for IQ (or something correlated with it) in reaching that level. If you saw a group of dozens of people who were 7'0 tall on average, you'd assume it was a basketball team or some other group selected for height, not a bunch of botanists who were all very tall by coincidence. A lot of people find this pretty depressing. Some worry that taking it seriously might damage the "growth mindset" people need to fully actualize their potential. This is important and I want to discuss it eventually, but not now. What I want to discuss now is people who feel personally depressed. For example, a comment from last week: I'm sorry to leave a self-absorbed comment, but reading this really upset me and I just need to get this off my chest… How is a person supposed to stay sane in a culture that prizes intelligence above everything else – especially if, as Scott suggests, Human Intelligence Really Is the Key to the Future – when they themselves are not particularly intelligent and, apparently, have no potential to ever become intelligent? Right now I basically feel like pond scum.
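As a back-of-the-envelope check on the "1/10,000 or so" figure, assuming IQ follows the conventional normal scaling (mean 100, standard deviation 15):

```python
# Tail probability of the normal distribution at IQ 150 and 160,
# assuming the conventional mean-100, SD-15 scaling.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
for threshold in (150, 160):
    tail = 1 - iq.cdf(threshold)
    print(f"IQ >= {threshold}: about 1 in {round(1 / tail):,}")
# IQ >= 150 comes out to roughly 1 in 2,300 and IQ >= 160 to roughly
# 1 in 32,000, so "1/10,000 or so" is a fair ballpark for the 150-160 range.
```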
Apr 1, 2021 • 18min
Oh, The Places You'll Go When Trying To Figure Out The Right Dose Of Escitalopram
https://astralcodexten.substack.com/p/oh-the-places-youll-go-when-trying I. What is the right dose of Lexapro (escitalopram)? The official FDA package insert recommends a usual dose of 10 mg, and a maximum safe dose of 20 mg. It says studies fail to show 20 mg works any better than 10, but you can use 20 if you really want to. But Jakubovski et al's Dose-Response Relationship Of Selective Serotonin Reuptake Inhibitors tries to figure out which doses of which antidepressants are equivalent to each other, and comes up with the following suggestion (ignore the graph, read the caption): 16.7 mg Lexapro equals 20 mg of paroxetine (Paxil) or fluoxetine (Prozac). But the maximum approved doses of those medications are 60 mg and 80 mg, respectively. If we convert these to mg imipramine equivalents like the study above uses, Prozac maxes out at 400, Paxil at 300, and Lexapro at 120. So Lexapro has a very low maximum dose compared to other similar antidepressants. Why? Because Lexapro (escitalopram) is a derivative of the older drug Celexa (citalopram). Sometime around 2011, the FDA freaked out that high doses of citalopram might cause a deadly heart condition called torsade de pointes, and lowered the maximum dose to prevent this. Since then it's been pretty conclusively shown that the FDA was mostly wrong about this and kind of bungled the whole process. But they forgot to ever unbungle it, so citalopram still has a lower maximum dose than every other antidepressant. When escitalopram was invented, it inherited its parent chemical's unusually-low maximum dose, and remains at that level today [edit: I got the timing messed up, see here].
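Working through the arithmetic implied above, assuming the quoted dose equivalences scale linearly (16.7 mg escitalopram ≈ 20 mg fluoxetine ≈ 20 mg paroxetine, with fluoxetine's 80 mg maximum pegged at 400 imipramine-equivalents):

```python
# Convert each drug's maximum approved dose into imipramine-equivalents,
# assuming the linear equivalences quoted above. The figures come from
# the post, not an independent pharmacological calculation.
imipramine_eq_per_mg = {
    "fluoxetine (Prozac)": 400 / 80,            # 5 equivalents per mg
    "paroxetine (Paxil)": 300 / 60,             # 5 equivalents per mg
    "escitalopram (Lexapro)": (20 / 16.7) * 5,  # ~6 equivalents per mg
}
max_dose_mg = {
    "fluoxetine (Prozac)": 80,
    "paroxetine (Paxil)": 60,
    "escitalopram (Lexapro)": 20,
}
for drug, mg in max_dose_mg.items():
    print(f"{drug}: max {mg} mg ≈ {mg * imipramine_eq_per_mg[drug]:.0f} imipramine-equivalents")
# Escitalopram tops out near 120 imipramine-equivalents versus 300-400
# for the others, which is the gap the post is pointing at.
```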
Mar 26, 2021 • 15min
Towards A Bayesian Theory Of Willpower
https://astralcodexten.substack.com/p/towards-a-bayesian-theory-of-willpower I. What is willpower? Five years ago, I reviewed Baumeister and Tierney's book on the subject. They tentatively concluded it's a way of rationing brain glucose. But their key results have failed to replicate, and people who know more about glucose physiology say it makes no theoretical sense. Robert Kurzban, one of the most on-point critics of the glucose theory, gives his own model of willpower: it's a way of minimizing opportunity costs. But how come my brain is convinced that playing Civilization for ten hours has no opportunity cost, but spending five seconds putting away dishes has such immense opportunity costs that it will probably leave me permanently destitute? I can't find any correlation between the subjective phenomenon of willpower or effort-needingness and real opportunity costs at all.
Mar 26, 2021 • 13min
More Antifragile, Diversity Libertarianism, And Corporate Censorship
https://astralcodexten.substack.com/p/more-antifragile-diversity-libertarianism In yesterday's review of Antifragile, I tried to stick to something close to Taleb's own words. But here's how I eventually found myself understanding an important kind of antifragility. I feel bad about this, because Taleb hates bell curves and tells people to stop using them as examples, but sorry, this is what I've got. Suppose that Distribution 1 represents nuclear plants. It has low variance, so all the plants are pretty similar. Plant A is slightly older and less fancy than Plant B, but it still works about the same. Now we move to Distribution 2. It has high variance. Plant B is the best nuclear plant in the world. It uses revolutionary new technology to squeeze extra power out of each gram of uranium, its staff are carefully-trained experts, and it's won Power Plant Magazine's Reactor Of The Year award five times in a row. Plant A suffers a meltdown after two days, killing everybody. If you live in a region with lots of nuclear plants, you'd prefer they be on the first distribution, the low-variance one. Having some great nuclear plants is nice, but having any terrible ones means catastrophe. Much better for all nuclear plants to be mediocre.
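A toy simulation of the point (this framing is mine, not an example from the book): when the left tail is catastrophic, a region full of mediocre plants beats one whose plants are drawn from a high-variance distribution with the same average quality.

```python
# Compare a low-variance and a high-variance distribution of plant
# "quality" with the same mean, where any plant below zero melts down.
# The numbers are arbitrary; the point is the asymmetry, not the units.
import random

random.seed(0)

def region_has_meltdown(n_plants, mean, stdev):
    return any(random.gauss(mean, stdev) < 0 for _ in range(n_plants))

trials = 10_000
for label, stdev in [("Distribution 1 (low variance)", 1.0),
                     ("Distribution 2 (high variance)", 5.0)]:
    meltdowns = sum(region_has_meltdown(20, mean=5.0, stdev=stdev) for _ in range(trials))
    print(f"{label}: a meltdown somewhere in the region in {meltdowns / trials:.0%} of runs")
```

With the same average plant, the high-variance region suffers a meltdown in nearly every run while the low-variance region essentially never does, which is the "much better for all nuclear plants to be mediocre" conclusion in miniature.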
Mar 25, 2021 • 35min
Book Review: Antifragile
https://astralcodexten.substack.com/p/book-review-antifragile Nassim Taleb summarizes the thesis of Antifragile as: Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty [and antifragility is what gains from it]. The glass on the table is short volatility. The glass is fragile: the less you disrupt it, the better it does. A rock is "robust" - neither fragile nor antifragile - it will do about equally well whether you disrupt it or not. What about antifragile? Taleb's first (and cutest) example is the Hydra, which grows more and more heads the more a hero tries to harm it. What else is like this? Buying options is antifragile. Suppose oil is currently worth $10, and you pay $1 for an option to buy it at $10 next year. If there's a small amount of variance (oil can go up or down 20%), it's kind of a wash. Worst-case scenario, oil goes down 20% to $8, you don't buy it, and you've lost $1 buying the option. Best-case scenario, oil goes up 20% to $12, you exercise your option to buy for $10, you sell it for $12, and you've made a $1 profit - $2 from selling the oil, minus $1 from buying the option. Overall you expect to break even. But if there's large uncertainty - the price of oil can go up or down 1000% - then it's a great deal. Worst-case scenario, oil goes down to negative $90 and you don't buy it, so you still just lost $1. Best case scenario, oil goes up to $110, you exercise your option to buy for $10, and you make $99 ($100 profit minus $1 for the option). So the oil option is antifragile - the more the price varies, the better it will do. The more chaotic things get, the more uncertain and unpredictable the world is, the more oil options start looking like a good deal.
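The payoff asymmetry is easy to make concrete; here is a minimal sketch of the option arithmetic from the excerpt (strike $10, $1 premium):

```python
# Profit on a $1 call option with a $10 strike: you exercise only when
# the market price exceeds the strike, so losses are capped at the
# premium while gains grow with the size of the move.
def option_profit(oil_price, strike=10.0, premium=1.0):
    return max(oil_price - strike, 0.0) - premium

for label, prices in [("small variance (±20%)", (8, 12)),
                      ("large variance (±1000%)", (-90, 110))]:
    outcomes = {price: option_profit(price) for price in prices}
    print(label, outcomes)
# small variance: lose $1 or make $1 (a wash on average)
# large variance: lose $1 or make $99 (the antifragile payoff)
```

The downside is pinned at the premium no matter how far oil falls, while the upside scales with the size of the swing; more volatility makes the option worth more, which is what "antifragile" means here.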


