

Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes

Apr 1, 2021 • 18min
Oh, The Places You'll Go When Trying To Figure Out The Right Dose Of Escitalopram
https://astralcodexten.substack.com/p/oh-the-places-youll-go-when-trying I. What is the right dose of Lexapro (escitalopram)? The official FDA package insert recommends a usual dose of 10 mg and a maximum safe dose of 20 mg. It says studies fail to show 20 mg works any better than 10, but you can use 20 if you really want to. But Jakubovski et al's Dose-Response Relationship Of Selective Serotonin Reuptake Inhibitors tries to figure out which doses of which antidepressants are equivalent to each other, and comes up with the following suggestion (ignore the graph, read the caption): 16.7 mg Lexapro equals 20 mg of paroxetine (Paxil) or fluoxetine (Prozac). But the maximum approved doses of those medications are 60 mg and 80 mg, respectively. If we convert these to mg imipramine equivalents like the study above uses, Prozac maxes out at 400, Paxil at 300, and Lexapro at 120. So Lexapro has a very low maximum dose compared to other similar antidepressants. Why? Because Lexapro (escitalopram) is a derivative of the older drug Celexa (citalopram). Sometime around 2011, the FDA freaked out that high doses of citalopram might cause a deadly heart condition called torsade de pointes, and lowered the maximum dose to prevent this. Since then it's been pretty conclusively shown that the FDA was mostly wrong about this and kind of bungled the whole process. But they forgot to ever unbungle it, so citalopram still has a lower maximum dose than every other antidepressant. When escitalopram was invented, it inherited its parent chemical's unusually low maximum dose, and remains at that level today [edit: I got the timing messed up, see here].
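For concreteness, here is that conversion as a few lines of arithmetic. The per-mg factors are back-calculated from the equivalences quoted above (20 mg fluoxetine or paroxetine is roughly as potent as 16.7 mg escitalopram, with Prozac's 80 mg maximum pegged at 400 imipramine-equivalents); they are a sketch of the post's reasoning, not numbers read directly out of Jakubovski et al:

```python
# Sanity check of the imipramine-equivalent arithmetic above. The per-mg
# conversion factors are inferred from the post's own numbers, not taken
# verbatim from Jakubovski et al.
IMIPRAMINE_PER_MG = {
    "fluoxetine (Prozac)": 100 / 20,       # 20 mg ~ 100 imipramine-equivalents
    "paroxetine (Paxil)": 100 / 20,
    "escitalopram (Lexapro)": 100 / 16.7,  # 16.7 mg ~ the same 100
}
MAX_APPROVED_MG = {
    "fluoxetine (Prozac)": 80,
    "paroxetine (Paxil)": 60,
    "escitalopram (Lexapro)": 20,
}

for drug, max_mg in MAX_APPROVED_MG.items():
    equiv = max_mg * IMIPRAMINE_PER_MG[drug]
    print(f"{drug}: {max_mg} mg max -> ~{equiv:.0f} imipramine-equivalents")

# fluoxetine (Prozac): 80 mg max -> ~400 imipramine-equivalents
# paroxetine (Paxil): 60 mg max -> ~300 imipramine-equivalents
# escitalopram (Lexapro): 20 mg max -> ~120 imipramine-equivalents
```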

Mar 26, 2021 • 15min
Toward A Bayesian Theory Of Willpower
https://astralcodexten.substack.com/p/towards-a-bayesian-theory-of-willpower I. What is willpower? Five years ago, I reviewed Baumeister and Tierney's book on the subject. They tentatively concluded it's a way of rationing brain glucose. But their key results have failed to replicate, and people who know more about glucose physiology say it makes no theoretical sense. Robert Kurzban, one of the most on-point critics of the glucose theory, gives his own model of willpower: it's a way of minimizing opportunity costs. But how come my brain is convinced that playing Civilization for ten hours has no opportunity cost, but spending five seconds putting away dishes has such immense opportunity costs that it will probably leave me permanently destitute? I can't find any correlation between the subjective phenomenon of willpower or effort-needingness and real opportunity costs at all.

Mar 26, 2021 • 13min
More Antifragile, Diversity Libertarianism, And Corporate Censorship
https://astralcodexten.substack.com/p/more-antifragile-diversity-libertarianism In yesterday's review of Antifragile, I tried to stick to something close to Taleb's own words. But here's how I eventually found myself understanding an important kind of antifragility. I feel bad about this, because Taleb hates bell curves and tells people to stop using them as examples, but sorry, this is what I’ve got. Suppose that Distribution 1 represents nuclear plants. It has low variance, so all the plants are pretty similar. Plant A is slightly older and less fancy than Plant B, but it still works about the same. Now we move to Distribution 2. It has high variance. Plant B is the best nuclear plant in the world. It uses revolutionary new technology to squeeze extra power out of each gram of uranium, its staff are carefully trained experts, and it's won Power Plant Magazine's Reactor Of The Year award five times in a row. Plant A suffers a meltdown after two days, killing everybody. If you live in a region with lots of nuclear plants, you'd prefer they be on the first distribution, the low-variance one. Having some great nuclear plants is nice, but having any terrible ones means catastrophe. Much better for all nuclear plants to be mediocre.
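To make the variance point concrete, here is a toy simulation (all numbers invented): every plant draws a quality score from a bell curve with the same mean, anything below zero counts as a meltdown, and one meltdown ruins the region.

```python
# Toy version of the two distributions above: same average plant quality,
# different variance, and a single bad plant ruins the whole region.
# All numbers are invented for illustration.
import random

random.seed(0)
N_PLANTS, N_TRIALS = 50, 10_000

def region_survives(stdev: float) -> bool:
    """Draw each plant's quality from Normal(5, stdev); below 0 = meltdown."""
    return all(random.gauss(5, stdev) > 0 for _ in range(N_PLANTS))

for stdev in (1.0, 4.0):  # Distribution 1 (low variance) vs Distribution 2
    p = sum(region_survives(stdev) for _ in range(N_TRIALS)) / N_TRIALS
    print(f"stdev={stdev}: region avoids any meltdown in {p:.1%} of trials")

# stdev=1.0: essentially 100% of trials end with no meltdown
# stdev=4.0: well under 1% do -- the occasional award-winner doesn't compensate
```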

Mar 25, 2021 • 35min
Book Review: Antifragile
https://astralcodexten.substack.com/p/book-review-antifragile Nassim Taleb summarizes the thesis of Antifragile as: Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty [and antifragility is what gains from it]. The glass on the table is short volatility. The glass is fragile: the less you disrupt it, the better it does. A rock is “robust” - neither fragile nor antifragile - it will do about equally well whether you disrupt it or not. What about antifragile? Taleb's first (and cutest) example is the Hydra, which grows more and more heads the more a hero tries to harm it. What else is like this? Buying options is antifragile. Suppose oil is currently worth $10, and you pay $1 for an option to buy it at $10 next year. If there's a small amount of variance (oil can go up or down 20%), it's kind of a wash. Worst-case scenario, oil goes down 20% to $8, you don't buy it, and you've lost $1 buying the option. Best-case scenario, oil goes up 20% to $12, you exercise your option to buy for $10, you sell it for $12, and you've made a $1 profit - $2 from selling the oil, minus $1 from buying the option. Overall you expect to break even. But if there's large uncertainty - the price of oil can go up or down 1000% - then it's a great deal. Worst-case scenario, oil goes down to negative $90 and you don't buy it, so you still just lost $1. Best-case scenario, oil goes up to $110, you exercise your option to buy for $10, and you make $99 ($100 profit minus $1 for the option). So the oil option is antifragile - the more the price varies, the better it will do. The more chaotic things get, the more uncertain and unpredictable the world is, the more oil options start looking like a good deal.
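The payoff asymmetry is easy to verify with the post's own numbers; the even 50/50 chance of the up or down move is an added assumption to make the expectation concrete:

```python
# The oil option from the example above: pay $1 now for the right to buy
# at $10 next year. Prices are the post's; the 50/50 up/down move is an
# assumption for computing the expectation.
STRIKE, PREMIUM, SPOT = 10, 1, 10

def option_profit(future_price: float) -> float:
    """Exercise only if the price beats the strike; the premium is sunk."""
    return max(future_price - STRIKE, 0) - PREMIUM

for label, swing in (("20% swing", 2), ("1000% swing", 100)):
    down, up = option_profit(SPOT - swing), option_profit(SPOT + swing)
    print(f"{label}: worst case {down:+}, best case {up:+}, "
          f"expected value {(down + up) / 2:+}")

# 20% swing: worst case -1, best case +1, expected value +0.0
# 1000% swing: worst case -1, best case +99, expected value +49.0
```

The downside is capped at the $1 premium no matter how far prices fall, while the upside grows with the size of the swing, which is exactly why more volatility makes the option worth more.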

Mar 24, 2021 • 7min
Adding My Data Point To The Discussion Of Substack Advances
https://astralcodexten.substack.com/p/adding-my-data-point-to-the-discussion [warning: boring inside baseball post] From The Hypothesis: Here's Why Substack's Scam Worked So Well. It summarizes a common Twitter argument that Substack is doing something sinister by offering some writers big advances. The sinister thing differs depending on who's making the argument - in this case, it's making people think they could organically make lots of money on Substack (because they see other writers doing the same) when really the big money comes from Substack paying a pre-selected group money directly. Other people have said it's Substack exercising editorial policy to attract a certain type of person to their site, usually coupled with the theory that the people they choose are problematic. I'm one of the writers Substack paid, which gives me some extra information on how this went down. Here's a stylized interpretation of the email conversation that got it started:
SUBSTACK: You should join our new blogging thing!
ME: No.
SUBSTACK: It's really good!

Mar 20, 2021 • 49min
Book Review: The New Sultan
Explore Turkey's slide from democracy to dictatorship under Erdogan: his educational background, the soft coup, the rise of the AK Party, and the gradual erosion of democratic institutions.

Mar 18, 2021 • 17min
Sleep Is The Mate Of Death
https://astralcodexten.substack.com/p/sleep-is-the-mate-of-death Melancholic depressive patients report that they feel worst in the morning, just after waking up, get better as the day goes on, and feel least affected in the evening just before bed. Continue the trend, and you might wonder how depressed people would feel after spending 24 or 36 or 48 hours awake. Some scientists made them stay awake to check, and the answer is: they feel great! About 70% of cases of treatment-resistant depression go away completely if the patient stays awake long enough. This would be a great depression cure, except that the depression comes back as soon as they go to sleep. There's a lot of great work going on to figure out how to make cure-by-sleep-deprivation last longer - see the Chronotherapeutics Manual for more details. But forget the practical side of this for now. It looks like sleep is somehow renewing these people's depressions. As if depression were caused by some injury sustained during sleep, which heals part of the way during an average day (or all the way during an extra-long day of sleep deprivation), only for the same injury to be re-inflicted during sleep the next night.

Mar 15, 2021 • 16min
Mantic Monday: Mantic Matt Y
https://astralcodexten.substack.com/p/mantic-monday-mantic-matt-y The current interest in forecasting grew out of Iraq-War-era exasperation with the pundit class. Pundits were constantly saying stuff like "Saddam definitely has WMDs, trust me, I'm an expert", then getting proven wrong, then continuing to get treated as authorities and thought leaders. Occasionally they would apologize, but they'd be back to telling us what we Had To Believe the next week. You don't want a rule that if a pundit ever gets anything wrong, we stop trusting them forever. Warren Buffett gets some things wrong, Zeynep Tufekci gets some things wrong, even Nostradamus would have gotten some things wrong if he'd said anything clearly enough to pin down what he meant. The best we can hope for is people with a good win-loss record. But how do you measure win-loss record? Lots of people worked on this (especially Philip Tetlock), and we ended up with the kind of probabilistic predictions a lot of people use now. But not pundits. We never did get the world where pundits, bloggers, and other commentators post predictions clearly, in a way that lets us check up on them later.
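The standard answer to "how do you measure a win-loss record for probabilistic predictions?" is a proper scoring rule. Here is a minimal sketch of the Brier score, the rule Tetlock's forecasting tournaments popularized (the two sample track records are made up for illustration):

```python
# Brier score: mean squared error between the stated probability and what
# actually happened. 0 is perfect; always saying 50% scores 0.25.
# Both forecasters' track records below are invented for illustration.
def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """forecasts: (probability assigned to the event, did it happen?)"""
    return sum((p - happened) ** 2 for p, happened in forecasts) / len(forecasts)

confident_pundit = [(0.95, False), (0.90, True), (0.95, False)]
cautious_forecaster = [(0.60, False), (0.70, True), (0.40, False)]

print(f"pundit:     {brier_score(confident_pundit):.3f}")    # 0.605, worse than guessing
print(f"forecaster: {brier_score(cautious_forecaster):.3f}")  # 0.203
```

The loud, confident, often-wrong pundit scores worse than someone who just says 50% about everything, while the hedged forecaster with the same three events comes out far ahead, which is the whole point of keeping score this way.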

Mar 13, 2021 • 6min
Richard Nixon Vs. Cool
https://astralcodexten.substack.com/p/richard-nixon-vs-cool In the highlights post on class, I wrote: When I was in middle school, I used to wonder - there are cool kids and uncool kids, right? But suppose all the uncool kids agreed to think of themselves as cool, and to make fun of the currently-cool kids. Then you would just have two groups of kids, each considering themselves superior and looking down on the other. And the currently-uncool-kid group would be bigger and probably win, insofar as it’s possible to win these things. So why don’t they do that? I have lots of partial answers, but still no satisfying one. I feel the same way about the [cultural] upper class. IR responded in the comments: In Rick Perlstein's excellent "Nixonland", he says that Richard Nixon had exactly this idea in college, and managed to make it work pretty well. He also ties this in to Nixon's future success at building a Republican "silent majority" coalition of anti-hippie reaction vs. the latte-sipping NYT-reading 70s liberal "consensus". If I may quote at length:

Mar 13, 2021 • 1h 9min
[Classic] Contra Grant On Exaggerated Differences
https://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/ I. An article by Adam Grant called Differences Between Men And Women Are Vastly Exaggerated is going viral, thanks in part to a share by Facebook exec Sheryl Sandberg. It’s a response to an email by a Google employee saying that he thought Google’s low female representation wasn’t a result of sexism, but a result of men and women having different interests long before either gender thinks about joining Google. Grant says that gender differences are small and irrelevant to the current issue. I disagree. Grant writes: It’s always precarious to make claims about how one half of the population differs from the other half—especially on something as complicated as technical skills and interests. But I think it’s a travesty when discussions about data devolve into name-calling and threats. As a social scientist, I prefer to look at the evidence.