

Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes

Apr 15, 2022 • 17min
Links For April
https://astralcodexten.substack.com/p/links-for-april-644
1: History of the belief that garlic and magnets are natural enemies.
2: Jacob Wood’s Graph Of The Blogosphere. ACX’s neighborhood: You can also see Jacob’s description of how he made it here. It looks like it starts with some index blogs, follows them to blogs they link, and so on (a toy sketch of that kind of crawl follows after this list). I don’t know how much this captures “the whole blogosphere” vs. “blogs X degrees or fewer away from the starting blog”. It looks like a pretty complete selection of big politics/econ blogs to me, but I don’t know if there are fashion blogs or movie blogs in a totally separate universe bigger than any of us. Also, Marginal Revolution confirmed as center of the blogosphere.
3: Wondering why so many Russian and Ukrainian cities have Greek names (eg Sebastopol)? Catherine the Great had a secret plan to resurrect Byzantium and install her appropriately-named grandson Constantine as New Roman Emperor. Step 1 was to found a lot of new cities with Greek names. Step 2 was to ally with the Austrian Empire. Then the Austrians got distracted with other things and they never reached Step 3.
4: Congratulations to last year’s book review contest winner Lars Doucet, who was interviewed by Jerusalem Demsas in a Vox article on Georgism (the article prefers the term “land value tax” and never mentions George by name, which is a surprising but I think defensible choice).
5: Data from amitheasshole.reddit.com - “Posters were 64% female; post subjects (the person with whom the poster had a dispute) were 62% female. Posters had average age 31, subjects averaged 33. Male posters were significantly more likely to be the assholes…” H/T worldoptimization
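As a rough illustration of the crawl described in item 2 (start from a few seed blogs, follow outgoing links, repeat for a bounded number of hops), here is a minimal sketch in Python. It is a guess at the general approach, not Jacob's actual method or code; the seed list, the toy LINKS data, and the depth limit are all invented for illustration.

```python
from collections import deque

# Hypothetical toy link graph: blog -> blogs it links to.
# A real crawl would build this by scraping blogrolls and post links.
LINKS = {
    "marginalrevolution": ["astralcodexten", "noahpinion", "overcomingbias"],
    "astralcodexten": ["marginalrevolution", "thezvi"],
    "noahpinion": ["marginalrevolution"],
    "thezvi": ["astralcodexten"],
    "overcomingbias": [],
}

def crawl(seeds, max_depth):
    """Breadth-first walk outward from the seed blogs, up to max_depth hops."""
    seen = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        blog = queue.popleft()
        depth = seen[blog]
        if depth == max_depth:
            continue
        for neighbor in LINKS.get(blog, []):
            if neighbor not in seen:
                seen[neighbor] = depth + 1
                queue.append(neighbor)
    return seen  # blog -> number of hops from the nearest seed

print(crawl(["marginalrevolution"], max_depth=2))
```

The depth limit is exactly why the result might be "blogs X degrees or fewer away from the starting blog" rather than the whole blogosphere: anything not reachable within max_depth hops of the seeds never appears in the graph at all.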

Apr 13, 2022 • 1h 30min
Obscure Pregnancy Interventions: Much More Than You Wanted To Know
https://astralcodexten.substack.com/p/obscure-pregnancy-interventions-much This is intended as a sequel to my old Biodeterminist’s Guide To Parenting. It’s less ambitious, in that it focuses only on pregnancy; but also more ambitious, in that it tries to be right. I wrote Biodeterminist’s Guide in 2012, before the replication crisis was well understood, and I had too low a bar for including random crazy hypotheses. On the other hand, everyone else has too high a bar for including random crazy hypotheses! If you look at standard pregnancy advice, it’s all stuff like “take prenatal vitamins” and “avoid alcohol” and “don’t strike your abdomen repeatedly with blunt objects”. It’s fine, but it’s the equivalent of college counselors who say “get good grades and try hard on the SAT.” Meanwhile, there are tiger mothers who are making their kids play oboe 10 hours/day because they heard the Harvard music department has clout with Admissions and is short on oboists. What’s the pregnancy-advice version of that? That’s what we’re doing here. Do not take this guide as a list of things that you have to do, or (God forbid) that you should feel guilty for not doing. Take it as a list of the most extreme things you could do if you were neurotic and had no sense of proportion. Here are my headline findings:

Apr 12, 2022 • 8min
Men Will Literally Have Completely Different Mental Processes Instead Of Going To Therapy
https://astralcodexten.substack.com/p/men-will-literally-have-completely People are debating “therapy: good or bad?” again: There are dozens of kinds of therapy: reliving your traumas, practicing mindfulness, analyzing dreams, uncovering your latent desire to have sex with your mother. But most people on both sides of this debate are talking about what psychiatrists call “supportive therapy” - unstructured talking about your feelings and what’s going on in your life. I know the responsible thing to say is something like “this is helpful for some people but not others”. I will say that, in the end. But I have a lot of sympathy for the people debating it. I have such a strong intuition of “why would this possibly work?” that it’s always shocked me when other people say it does. And I know other people with such a strong intuition of “obviously this would work!” that it shocks them to hear other people even question it. Yet my patients seem to line up about half and half: some of them find therapy really great, others not helpful at all. Whenever I try to understand this, I find myself coming back to this tweet:

Apr 12, 2022 • 27min
Deceptively Aligned Mesa-Optimizers: It's Not Funny If I Have To Explain It
https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers
A Machine Alignment Monday post, 4/11/22
I. Our goal here is to popularize obscure and hard-to-understand areas of AI alignment, and surely this meme (retweeted by Eliezer last week) qualifies: So let’s try to understand the incomprehensible meme! Our main source will be Hubinger et al 2019, Risks From Learned Optimization In Advanced Machine Learning Systems. Mesa- is a Greek prefix which means the opposite of meta-. To “go meta” is to go one level up; to “go mesa” is to go one level down (nobody has ever actually used this expression, sorry). So a mesa-optimizer is an optimizer one level down from you. Consider evolution, optimizing the fitness of animals. For a long time, it did so very mechanically, inserting behaviors like “use this cell to detect light, then grow toward the light” or “if something has a red dot on its back, it might be a female of your species, you should mate with it”. As animals became more complicated, they started to do some of the work themselves. Evolution gave them drives, like hunger and lust, and the animals figured out ways to satisfy those drives in their current situation. Evolution didn’t mechanically instill the behavior of opening my fridge and eating a Swiss cheese slice. It instilled the hunger drive, and I figured out that the best way to satisfy it was to open my fridge and eat cheese.
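To make the inner/outer distinction concrete, here is a toy sketch of my own (not from the post or from Hubinger et al, and covering only the base-optimizer/mesa-optimizer distinction the excerpt describes, not the "deceptive" part): the base optimizer plays the role of evolution, selecting over agents, and the selected agent is itself an optimizer that searches at run time for whatever scores best on its own internal proxy objective. All names and numbers are invented for illustration.

```python
# Base objective (the "evolution"-level goal): get the agent's action as close
# as possible to the environment's true target.
def base_objective(action, env):
    return -abs(env["target"] - action)

# A mesa-optimizer: at run time it searches over the actions available in its
# current situation, maximizing its OWN internal proxy objective. The proxy is
# whatever the base optimizer happened to select during training.
class MesaOptimizer:
    def __init__(self, proxy_target):
        self.proxy_target = proxy_target  # internal goal, not the base goal

    def act(self, env):
        # Inner optimization loop: pick the available action the proxy likes best.
        return max(env["available_actions"],
                   key=lambda a: -abs(self.proxy_target - a))

# "Training": the base optimizer keeps whichever proxy scores best on the
# base objective in the training environment, where the target is always 10.
train_env = {"target": 10, "available_actions": range(0, 21)}
best_proxy = max(range(0, 21),
                 key=lambda p: base_objective(MesaOptimizer(p).act(train_env), train_env))
agent = MesaOptimizer(best_proxy)

# Deployment: the true target has shifted, but the learned proxy has not.
deploy_env = {"target": 3, "available_actions": range(0, 21)}
print("learned proxy:", agent.proxy_target)               # 10
print("action off-distribution:", agent.act(deploy_env))  # still 10, though the base objective now wants 3
```

The point of the analogy: the base optimizer never installs the deployment behavior directly. It installs a drive (the proxy), the inner agent does its own optimizing, and the two can come apart once the situation changes.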

Apr 12, 2022 • 32min
Spring Meetups In Seventy Cities
https://astralcodexten.substack.com/p/spring-meetups-in-seventy-cities Lots of people only want to go to meetups a few times a year. And they all want to go to the same big meetups as all the other people who only go a few times a year. In 2021, we set up one big well-telegraphed meetup in the fall as a Schelling point for these people. This year, we’re setting up two. We’ll have the fall meetup as usual. If you only want to go to one meetup a year, go to that one. But we’ll also have a spring round. If you only go to two meetups a year, come to this one too! You can find a list of cities and times below. If you want to add your city to the list, fill in this form; if you have questions, ask meetupsmingyuan@gmail.com.

Apr 12, 2022 • 47min
Dictator Book Club: Xi Jinping
https://astralcodexten.substack.com/p/dictator-book-club-xi-jinping [Previous entries: Erdogan, Modi, Orban] The Third Revolution, by Elizabeth Economy, promises to explain “the transformative changes underway in China today”. But like her namesake, Dr. Economy doesn’t always allocate resources the way I would like. I came to the book with questions like: How did the pre-Xi Chinese government work? How was it different from dictatorship? What safeguards did it have against it? Why hadn’t previous Chinese leaders become dictators? And: How did Xi come to power? How did he defeat those safeguards? Had previous Chinese leaders wanted more power? How come they failed to get it, but Xi succeeded? Third Revolution barely touched on any of this. It mostly explained Xi’s domestic and foreign policies. Some of this was relevant: a lot of Xi’s policies involve repression to prop up his rule. But none of it answered my key questions. So this is less of a book review than other Dictator Book Club entries. It’s a look through recent Chinese history, with The Third Revolution as a very loose inspiration.

Apr 12, 2022 • 24min
Highlights From The Comments On Self-Determination
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-self 1: Rosemary (writes Parallel Republic) says: I think a preference for the status quo has to weigh in to some extent. All else being equal, sure, I agree with the “any group large enough that it isn’t ludicrous on its face has a right to self-determination” standard. But all else is almost never equal. Someone wants to secede and someone else wants to conquer—and all of that is enormously disruptive to many other someones. So I think there’s an immediately obvious utilitarian bias towards the status quo of, oh, the last decade or so. Governments are heavy, complicated things, and I think a group who wants to disrupt that needs to make an affirmative argument based on something other than “self determination” that this is a good idea and all the disruption is worth it for the sake of things being better in the long run. Which unfortunately gets us nowhere because it brings us right back to debates about culture and history etc.

Apr 5, 2022 • 1h 2min
Yudkowsky Contra Christiano On AI Takeoff Speeds
https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai
Previously in series: Yudkowsky Contra Ngo On Agents, Yudkowsky Contra Cotra On Biological Anchors
Prelude: Yudkowsky Contra Hanson
In 2008, thousands of blog readers - including yours truly, who had discovered the rationality community just a few months before - watched Robin Hanson debate Eliezer Yudkowsky on the future of AI. Robin thought the AI revolution would be a gradual affair, like the Agricultural or Industrial Revolutions. Various people invent and improve various technologies over the course of decades or centuries. Each new technology provides another jumping-off point for people to use when inventing other technologies: mechanical gears → steam engine → railroad and so on. Over the course of a few decades, you’ve invented lots of stuff and the world is changed, but there’s no single moment when “industrialization happened”. Eliezer thought it would be lightning-fast. Once researchers started building human-like AIs, some combination of adding more compute, and the new capabilities provided by the AIs themselves, would quickly catapult AI to unimaginably superintelligent levels. The whole process could take between a few hours and a few years, depending on what point you measured from, but it wouldn’t take decades. You can imagine the graph above as being GDP over time, except that Eliezer thinks AI will probably destroy the world, which might be bad for GDP in some sense. If you come up with some way to measure (in dollars) whatever kind of crazy technologies AIs create for their own purposes after wiping out humanity, then the GDP framing will probably work fine. For transhumanists, this debate has a kind of iconic status, like Lincoln-Douglas or the Scopes Trial. But Robin’s ideas seem a bit weird now (they also seemed a bit weird in 2008) - he thinks AIs will start out as uploaded human brains, and even wrote an amazing science-fiction-esque book of predictions about exactly how that would work. Since machine learning has progressed a lot faster than brain uploading has, this is looking less likely and probably makes his position less relevant than in 2008. The gradualist torch has passed to Paul Christiano, who wrote a 2018 post Takeoff Speeds revisiting some of Hanson’s old arguments and adding new ones. (I didn’t realize this until talking to Paul, but “holder of the gradualist torch” is a relative position - Paul still thinks there’s about a 1/3 chance of a fast takeoff.) Around the end of last year, Paul and Eliezer had a complicated, protracted, and indirect debate, culminating in a few hours on the same Discord channel. Although the real story is scattered over several blog posts and chat logs, I’m going to summarize it as if it all happened at once.
Gradatim Ferociter
Paul sums up his half of the debate as: There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.) That is - if any of this “transformative AI revolution” stuff is right at all, then at some point GDP is going to go crazy (even if it’s just GDP as measured by AIs, after humans have been wiped out). Paul thinks it will go crazy slowly. Right now world GDP doubles every ~25 years. Paul thinks it will go through an intermediate phase (doubles within 4 years) before it gets to a truly crazy phase (doubles within 1 year).
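For readers who want the doubling-time claims in growth-rate terms, here is a quick back-of-the-envelope calculation (mine, not from the post): a doubling time of T years corresponds to an annual growth rate of 2^(1/T) - 1.

```python
# Annual growth rate implied by a GDP doubling time of T years: (1 + r)^T = 2.
for T in [25, 8, 4, 2, 1]:
    r = 2 ** (1 / T) - 1
    print(f"doubles every {T:>2} years  ->  about {r:.1%} growth per year")
```

So today's ~25-year doubling is roughly 3% annual growth, Paul's intermediate 4-year doubling is roughly 19% per year, and a 1-year doubling is 100% per year. His claim is that the world spends a full four-year stretch in the ~19% regime before it ever hits the 100% regime.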

Apr 2, 2022 • 15min
The Low-Hanging Fruit Argument: Models And Predictions
https://astralcodexten.substack.com/p/the-low-hanging-fruit-argument-models A followup to Contra Hoel On Aristocratic Tutoring: Imagine scientists venturing off in some research direction. At the dawn of history, they don’t need to venture very far before discovering a new truth. As time goes on, they need to go further and further. Actually, scratch that, nobody has good intuitions for truth-space. Imagine some foragers who have just set up a new camp. The first day, they forage in the immediate vicinity of the camp, leaving the ground bare. The next day, they go a little further, and so on. There’s no point in traveling miles and miles away when there are still tasty roots and grubs nearby. But as time goes on, the radius of denuded ground will get wider and wider. Eventually, the foragers will have to embark on long expeditions with skilled guides just to make it to the nearest productive land. Let’s add intelligence to this model. Imagine there are fruit trees scattered around, and especially tall people can pick fruits that shorter people can’t reach. If you are the first person ever to be seven feet tall, then even if the usual foraging horizon is very far from camp, you can forage very close to camp, picking the seven-foot-high-up fruits that no previous forager could get. So there are actually many different horizons: a distant horizon for ordinary-height people, a nearer horizon for tallish people, and a horizon so close as to be almost irrelevant for giants. Finally, let’s add the human lifespan. At night, the wolves come out and eat anyone who hasn’t returned to camp. So the maximum distance anyone will ever be able to forage is a day’s walk from camp (technically half a day, so I guess let’s imagine that everyone can teleport back to camp whenever they want).
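Here is a minimal simulation of the model as I read it (my sketch, not from the post; the grid size, reach heights, and day limit are arbitrary): fruit exists at every (distance, height) cell, each generation of ordinary foragers strips everything it can reach at the nearest remaining distance, and a rare seven-footer can still find untouched fruit right next to camp.

```python
# Toy version of the foraging model: fruit grows at every (distance, height) cell.
# Ordinary foragers can reach height 6; a rare seven-footer can reach height 7.
MAX_WALK = 50  # the wolves eat anyone who goes farther than a day's walk

picked = set()  # (distance, height) cells already harvested

def nearest_fruit(reach):
    """Closest distance at which fruit this forager can reach still remains."""
    for d in range(1, MAX_WALK + 1):
        for h in range(1, reach + 1):
            if (d, h) not in picked:
                return d
    return None  # nothing reachable before nightfall

def forage(reach):
    """Strip everything within reach at the nearest unstripped distance."""
    d = nearest_fruit(reach)
    if d is not None:
        for h in range(1, reach + 1):
            picked.add((d, h))
    return d

# Generations of ordinary (reach-6) foragers push the frontier outward...
for generation in range(30):
    forage(reach=6)
print("frontier for ordinary foragers:", nearest_fruit(6))  # far from camp
# ...but the first seven-footer still finds fruit right next to camp.
print("frontier for a seven-footer:", nearest_fruit(7))     # distance 1
```

Height stands in for intelligence and the day's walk for the human lifespan, which is why the frontier keeps receding for ordinary researchers while an unprecedented talent can still find results close to camp.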

Apr 1, 2022 • 34min
Idol Words
https://astralcodexten.substack.com/p/idol-words The woman was wearing sunglasses, a visor, a little too much lipstick, and a camera around her neck. “Excuse me,” she asked. “Is this the temple with the three omniscient idols? Where one always tells the truth, one always lies, and one answers randomly?” The center idol’s eyes glowed red, and it spoke with a voice from everywhere and nowhere, a voice like the whoosh of falling waters or the flash of falling stars. “No!” the great voice boomed. “Oh,” said the woman. “Because my Uber driver said -”. She cut herself off. “Well, do you know how to get there?” “It is here!” said the otherworldly voice. “You stand in it now!” “Didn’t you just say this wasn’t it?”