Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes
Mentioned books
Feb 26, 2020 • 22min
Book Review: Just Giving
https://slatestarcodex.com/2020/02/24/book-review-just-giving/ I. Traditional book reviews tend to focus on a single book, such as Just Giving by Rob Reich. We ought, however, to be reviewing a broader question: what is the role of books in a liberal democratic society? And what role should they play? Books were first invented during the early Bronze Age. Plato states people fiercely opposed the first books; in his dialogue Phaedrus, he recalls the Egyptian priests' objection to early writing: [Writing] will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. Contrast the Egyptian scribes' reception with the ceaseless praise given to the authors of our age. Rather than asking about the purposes of writing and the power of authors, we tend instead to celebrate writers, large and small, for their brilliance. But in our age, these are questions we should pose with greater urgency. Scholarly literature like Just Giving is an unaccountable, nontransparent, and perpetual exercise of power. It deserves more criticism than it has received.
Feb 18, 2020 • 9min
Sleep Support: An Individual Randomized Controlled Trial
https://slatestarcodex.com/2020/02/17/sleep-support-an-individual-randomized-controlled-trial/ I worry my sleep quality isn't great. On weekends, no matter when I go to bed, I sleep until 11 or 12. When I wake up, I feel like I've overslept. But if I try to make myself get up earlier, I feel angry and want to go back to sleep. A supplement company I trust, Nootropics Depot, recently released a new product called Sleep Support. It advertises that, along with helping you fall asleep faster, it can "improve sleep quality" by "improv[ing] sleep architecture, allowing you to achieve higher quality and more refreshing sleep." I decided to try it. The first night I took it, I woke up naturally at 9 the next morning, with no desire to go back to sleep. This has never happened before. It shocked me. And the next morning, the same thing happened. I started recommending the supplement to all my friends, some of whom also reported good results. I decided the next step was to do a randomized controlled trial. I obtained sugar pills, and put both the sugar pills and the Sleep Support pills inside bigger capsules so I couldn't tell which was which. The recommended dose was two Sleep Support pills per night, so for my 24-night trial I created 12 groups of two Sleep Support pills and 12 groups of two placebo pills.
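The blinding scheme described above — identical oversized capsules, a 12/12 split over 24 nights — is easy to sketch in code. This is a minimal illustration of the randomization step, not the author's actual procedure; the function name and the seed are made up.

```python
import random

def blind_assignment(n_nights=24, seed=None):
    """Shuffle a 50/50 split of supplement and placebo nights.

    Mirrors the post's design: 12 two-pill doses of Sleep Support and
    12 two-pill doses of placebo, hidden in identical capsules so the
    experimenter can't tell which is which on any given night.
    """
    rng = random.Random(seed)
    half = n_nights // 2
    labels = ["sleep_support"] * half + ["placebo"] * half
    rng.shuffle(labels)
    return labels

schedule = blind_assignment(seed=42)  # one label per night, in random order
```

In practice the labels would stay sealed until sleep-quality ratings had been recorded for all 24 nights, and only then be unblinded for analysis.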
Feb 16, 2020 • 3min
Addendum to "Targeting Meritocracy"
https://slatestarcodex.com/2020/02/14/addendum-to-targeting-meritocracy/ I've always been dissatisfied with Targeting Meritocracy and the comments it got. My position seemed so obvious to me – and the opposite position so obvious to other people – that we both had to be missing something. Reading it over, I think I was missing the idea of conflict vs mistake theory. I wrote the post from a mistake theory perspective. The government exists to figure out how to solve problems. Good government officials are the ones who can figure out solutions and implement them effectively. That means we want people who are smart and competent. Since meritocracy means promoting the smartest and most competent people, it is tautologically correct. The only conceivable problem is if we make mistakes in judging intelligence and competence, which is what I spend the rest of the post worrying about.
Feb 15, 2020 • 5min
Confirmation Bias As Misfire of Normal Bayesian Reasoning
https://slatestarcodex.com/2020/02/12/confirmation-bias-as-misfire-of-normal-bayesian-reasoning/ From the subreddit: Humans Are Hardwired To Dismiss Facts That Don't Fit Their Worldview. Once you get through the preliminary Trump supporter and anti-vaxxer denunciations, it turns out to be an attempt at an evo psych explanation of confirmation bias: Our ancestors evolved in small groups, where cooperation and persuasion had at least as much to do with reproductive success as holding accurate factual beliefs about the world. Assimilation into one's tribe required assimilation into the group's ideological belief system. An instinctive bias in favor of one's "in-group" and its worldview is deeply ingrained in human psychology. I think the article as a whole makes good points, but I'm increasingly uncertain that confirmation bias can be separated from normal reasoning. Suppose that one of my friends says she saw a coyote walk by her house in Berkeley. I know there are coyotes in the hills outside Berkeley, so I am not too surprised; I believe her.
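The coyote example is a standard Bayesian update: a decent prior makes a modest amount of testimony sufficient. A sketch with purely illustrative numbers (the prior and the reliability figures below are made up, not taken from the post):

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """P(event | report) via Bayes' rule."""
    evidence = p_report_if_true * prior + p_report_if_false * (1 - prior)
    return p_report_if_true * prior / evidence

# Coyotes do wander down from the Berkeley hills, so the prior that one
# passed the house on a given day is small but not negligible, and the
# friend rarely misreports what she saw.
p = posterior(prior=0.05, p_report_if_true=0.9, p_report_if_false=0.01)
# With these numbers the report pushes belief from 5% to roughly 83%.
```

Drop the prior to something tiny — a polar bear walking by the same house — and the same reliable friend no longer convinces you. That asymmetry is the post's point: dismissing claims that clash with your priors is what correct Bayesian reasoning looks like, which is why confirmation bias is hard to separate from it.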
Feb 15, 2020 • 5min
Welcome (?), Infowars Readers
https://slatestarcodex.com/2020/02/12/welcome-infowars-readers/ Hello to all the new readers I've gotten from, uh, Paul Watson of Infowars. Before anything else, consider reading this statement by the CDC about vaccines. Still here? Fine. Infowars linked here with the headline Survey Finds People Who Identify As Left Wing More Likely To Have Been Diagnosed With A Mental Illness. This is accurate only insofar as the result uses the publicly available data I provide. The claim about mental illness was made by Twitter user Philippe Lemoine and not by me. In general, if a third party analyzes SSC survey data, I would prefer that media sources reporting on their analysis attribute it to them, and not to SSC. As far as I can tell, Lemoine's analysis is accurate enough, but needs some clarifications: 1. Both extreme rightists and extreme leftists are more likely than moderates to have been diagnosed with most conditions.
Feb 12, 2020 • 18min
Autogenderphilia Is Common and Not Especially Related to Transgender
https://slatestarcodex.com/2020/02/10/autogenderphilia-is-common-and-not-especially-related-to-transgender/ "Autogynephilia" means becoming aroused by imagining yourself as a woman. "Autoandrophilia" means becoming aroused by imagining yourself as a man. There's no term that describes both, but we need one, so let's say autogenderphilia. These conditions are famous mostly because a few sexologists, especially Ray Blanchard and Michael Bailey, speculate that they are the most common cause of transgender. They point to studies showing most trans women endorse autogynephilia. Most trans people disagree with this theory, sometimes very strongly, and accuse it of reducing transgender to a fetish. Without wading into the moral issues around it, I thought it would be interesting to get data from the SSC survey. The following comes partly from my own analyses and partly from wulfrickson's look at the public survey data on r/TheMotte. The survey asked the following questions:
Feb 9, 2020 • 22min
Suicide Hotspots of the World
https://slatestarcodex.com/2020/02/05/suicide-hotspots-of-the-world/ [Content warning: suicide, rape, child abuse. Thanks to MC for some help with research.] I. Guyana has the highest national suicide rate in the world, 30 people per year per 100,000. Guyana has poverty and crime and those things, but no more so than neighboring Brazil (suicide rate of 6) or Venezuela (suicide rate of 4). What's going on? One place to start: Guyana is a multi-ethnic country. Is its sky-high suicide rate focused in one ethnic group? The first answer I found was this article by a social justice warrior telling us it constitutes racial "essentialism" to even ask the question. But in the process of telling us exactly what kind of claims we should avoid, she mentions someone bringing up that "80% of the reported suicides are carried out by Indo-Guyanese". I feel like one of those classicists who has reconstructed a lost heresy through hostile quotations in Irenaeus. Indo-Guyanese aren't American Indians; they're from actual India. Apparently thousands of Indians emigrated to South America as indentured laborers in the late 1800s. Most went to Guyana, and somewhat fewer went to neighboring Suriname. Suriname also has a sky-high suicide rate, but slightly less than Guyana's, to the exact degree that its Indian population is slightly less than Guyana's. Basically no Indians went anywhere else in South America, and nowhere else in South America has anywhere near the suicide rate of these two countries. The most Indian regions of Guyana also have the highest suicide rate. Hmmm. Does India itself have high suicide rates? On average, yes. But India has a lot of weird suicide microclimates. Statewide rates range from 38 in Sikkim (higher than any country in the world) to 0.5 in Bihar (lower than any country in the world except Barbados). Indo-Guyanese mostly come from Bihar and other low-suicide regions.
While I can't rule out that the Indo-Guyanese come from some micro-micro-climate of higher suicidality, this guy claims to have traced them back to some of their ancestral villages and found that those villages have low suicide rates.
Feb 2, 2020 • 34min
Book Review: Human Compatible
https://slatestarcodex.com/2020/01/30/book-review-human-compatible/ I. Clarke's First Law goes: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong. Stuart Russell is only 58. But what he lacks in age, he makes up in distinction: he's a computer science professor at Berkeley, neurosurgery professor at UCSF, DARPA advisor, and author of the leading textbook on AI. His new book Human Compatible states that superintelligent AI is possible; Clarke would recommend we listen. I'm only half-joking: in addition to its contents, Human Compatible is important as an artifact, a crystallized proof that top scientists now think AI safety is worth writing books about. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies previously filled this role. But Superintelligence was published in 2014, and written by a philosophy professor. From the artifactual point of view, HC is just better – more recent, and by a more domain-relevant expert. But if you also open up the books to see what's inside, the two defy easy comparison. S:PDS was unabashedly a weird book. It explored various outrageous scenarios (what if the AI destroyed humanity to prevent us from turning it off? what if it put us all in cryostasis so it didn't count as destroying us? what if it converted the entire Earth into computronium?) with no excuse beyond that, outrageous or not, they might come true. Bostrom was going out on a very shaky limb to broadcast a crazy-sounding warning about what might be the most important problem humanity has ever faced, and the book made this absolutely clear. HC somehow makes risk from superintelligence not sound weird. I can imagine my mother reading this book, nodding along, feeling better educated at the end of it, agreeing with most of what it says (it's by a famous professor! I'm sure he knows his stuff!)
and never having a moment where she sits bolt upright and goes what? It's just a bizarrely normal, respectable book. It's not that it's dry and technical – HC is much more accessible than S:PDS, with funny anecdotes from Russell's life, cute vignettes about hypothetical robots, and the occasional dad joke. It's not hiding any of the weird superintelligence parts. Rereading it carefully, they're all in there – when I leaf through it for examples, I come across a quote from Moravec about how "the immensities of cyberspace will be teeming with unhuman superminds, engaged in affairs that are to human concerns as ours are to those of bacteria". But somehow it all sounds normal. If aliens landed on the White House lawn tomorrow, I believe Stuart Russell could report on it in a way that had people agreeing it was an interesting story, then turning to the sports page. As such, it fulfills its artifact role with flying colors.
Jan 29, 2020 • 11min
Assortative Mating and Autism
https://slatestarcodex.com/2020/01/28/assortative-mating-and-autism/ Introduction Assortative mating is when similar people marry and have children. Some people worry about assortative mating in Silicon Valley: highly analytical tech workers marry other highly analytical tech workers. If highly analytical tech workers have more autism risk genes than the general population, assortative mating could put their children at very high risk of autism. How concerned should this make us? Methods / Sample Characteristics I used the 2020 Slate Star Codex survey to investigate this question. It had 8,043 respondents selected for being interested in a highly analytical blog about topics like science and economics. The blog is associated with – and draws many of its readers from – the rationalist and effective altruist movements, both highly analytical. More than half of respondents worked in programming, engineering, math, or physics. 79% described themselves as atheist or agnostic. 65% described themselves as more interested in STEM than the humanities; only 15% said the opposite. According to Kogan et al (2018), about 2.5% of US children are currently diagnosed with autism spectrum disorders. The difference between "autism" and "autism spectrum disorder" is complicated, shifts frequently, and is not very well-known to the public; this piece will treat them interchangeably from here on. There are no surveys of what percent of adults are diagnosed with autism; it is probably lower since most diagnoses happen during childhood and the condition was less appreciated in past decades. These numbers may be affected by parents' education level and social class; one study shows that children in wealthy neighborhoods were up to twice as likely to get diagnosed as poorer children.
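One way to see why assortative mating could matter is a toy liability-threshold simulation. This is not the post's analysis, and every number in it is invented for illustration except the roughly 2.5% base rate: if both parents are drawn from a subpopulation whose mean genetic liability is shifted upward, the child's chance of crossing the diagnostic threshold rises disproportionately, because it is a tail probability.

```python
import random

def child_diagnosis_rate(parent_shift, threshold=1.96, n=100_000, seed=0):
    """Toy liability-threshold model.

    Each parent's liability is a normal draw with mean `parent_shift`
    (both parents from the same selected group -- the assortative-mating
    assumption). The child's liability is the parental average plus
    segregation noise; a "diagnosis" is liability above `threshold`.
    The threshold of 1.96 makes the unselected base rate land near the
    2.5% figure cited from Kogan et al.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mom = rng.gauss(parent_shift, 1)
        dad = rng.gauss(parent_shift, 1)
        child = (mom + dad) / 2 + rng.gauss(0, 0.7)  # within-family variance
        if child > threshold:
            hits += 1
    return hits / n

base_rate = child_diagnosis_rate(parent_shift=0.0)     # unselected population
shifted_rate = child_diagnosis_rate(parent_shift=0.5)  # selected subpopulation
```

With these made-up parameters the shifted group's rate comes out several times the base rate. The empirical question the survey data speaks to is whether highly analytical tech workers actually carry a liability shift anywhere near that large.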
Jan 26, 2020 • 26min
Book Review Review: Little Soldiers
https://slatestarcodex.com/2020/01/22/book-review-review-little-soldiers/ Little Soldiers is a book by Lenora Chu about the Chinese education system. I haven't read it. This is a review of Dormin111's review of Little Soldiers. Dormin describes the "plot": The author is a second-generation Chinese-American woman, raised by demanding Asian parents. Her parents made her work herself to the bone to get perfect grades in school, practice piano, get into Ivy League schools, etc. She resisted and resented the hell she was forced to go through (though she got into Stanford, so she couldn't have resisted too hard). Skip a decade. She is grown up, married, and has a three year old child. Her husband (a white guy named Rob) gets a job in China, so they move to Shanghai. She wants their three-year-old son to be bilingual/bicultural, so she enrolls him in Soong Qing Ling, the Harvard of Chinese preschools. The book is about her experiences there and what it taught her about various aspects of Chinese education. Like the lunches: During his first week at Soong Qing Ling, Rainey began complaining to his mom about eating eggs. This puzzled Lenora because as far as she knew, Rainey refused to eat eggs and never did so at home. But somehow he was eating them at school. After much coaxing (three-year-olds aren't especially articulate), Lenora discovered that Rainey was being force-fed eggs. By his telling, every day at school, Rainey's teacher would pass hardboiled eggs to all students and order them to eat. When Rainey refused (as he always did), the teacher would grab the egg and shove it in his mouth. When Rainey spit the egg out (as he always did), the teacher would do the same thing. This cycle would repeat 3-5 times with louder yelling from the teacher each time until Rainey surrendered and ate the egg.


