Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes
Mar 3, 2022 • 6min
Microaddictions
https://astralcodexten.substack.com/p/microaddictions Everyone always says you should "eat mindfully". I tried this once and it was weird. For example, I noticed that only the first few bites of a tasty food actually tasted good. After that I habituated and lost it. Not only that, but there was a brief period after I finished eating the food when I was below hedonic baseline. This seems pretty analogous to addiction, tolerance, and withdrawal. If you use eg heroin, I'm told it feels very good the first few times. After that it gets gradually less euphoric, until eventually you need it to feel okay at all. If you quit, you feel much worse than normal (withdrawal) for a while until you even out. I claim I went through this whole process in the space of a twenty-minute dinner. I notice this most strongly with potato chips. Presumably this is pretty common, given their branding:
Mar 1, 2022 • 58min
Ukraine Warcasting
https://astralcodexten.substack.com/p/ukraine-warcasting Yeah, I know you're saturated with Ukraine content. Yeah, I know everyone wants to relate their hobbyhorse to Ukraine. But I think it's genuinely useful to talk about prediction markets right now. Current conventional wisdom is that the invasion was a miscalculation on Putin's part, after he surrounded himself with so many yes-men that he lost touch with reality. But Ukraine miscalculated too; until almost the day of the invasion, Zelenskyy was saying everything would be okay. And if there's a nuclear exchange, it will be because of miscalculation - I don't know what the miscalculation will be, just that nobody goes into a nuclear exchange because they want to. Preserving people's access to reality and helping them avoid miscalculations are peacekeeping measures, sometimes very important ones. The first part of this post looks at various markets' predictions of how the war will go from here (Zvi published something like this a few hours before I could, so this will mostly duplicate his work). The second part very briefly tries to evaluate which markets have been most accurate so far - though this is a topic which deserves at least paper-length treatment. The third part looks at which pundits deserve eternal glory for publicly making strong true predictions, and which pundits deserve . . . something else, for doing . . . other things.
Feb 26, 2022 • 1min
Austin Meetup Correction
https://astralcodexten.substack.com/p/austin-meetup-correction?utm_source=url Austin meetup is still this Sunday, 2/27, 12-3. But the location has been switched to Moontower Cider Company at 1916 Tillery St. The organizer is still sbarta@gmail.com, and you can still contact him if you have any questions. As per usual procedure, everyone is invited. Please feel free to come even if you feel awkward about it, even if you're not "the typical ACX reader", even if you're worried people won't like you, etc. You may (but don't have to) RSVP here.
Feb 24, 2022 • 1h 11min
Biological Anchors: A Trick That Might Or Might Not Work
https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might?utm_source=url Introduction I've been trying to review and summarize Eliezer Yudkowsky's recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we're up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra's talking about and what's going on. The Open Philanthropy Project ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it's very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce a report on when human-level AI would arrive. It says the resulting document is "informal" - but it's 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it kind of formal. The report finds a 10% chance of "transformative AI" by 2031, a 50% chance by 2052, and an almost 80% chance by 2100. Eliezer rejects their methodology and expects AI earlier (he doesn't offer many numbers, but here he gives Bryan Caplan 50-50 odds on 2030, albeit not totally seriously). He made the case in his own very long essay, Biology-Inspired AGI Timelines: The Trick That Never Works, sparking a bunch of arguments and counterarguments and even more long essays.
Feb 23, 2022 • 31min
Links For February
https://astralcodexten.substack.com/p/links-for-february?utm_source=url [Remember, I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.] 1: The newest studies don't find evidence that extracurriculars like chess, second languages, playing an instrument, etc can improve in-school learning. 2: Did you know: Spanish people consider it good luck to eat twelve grapes at midnight on New Year's, one at each chime of the clock tower in Madrid. This has caused enough choking deaths that doctors started a petition to make the clock tower chime more slowly. 3: At long last, scientists have discovered a millipede that really does have (more than) a thousand legs, Eumillipes persephone, which lives tens of meters underground in Australia and in your nightmares. Recent progress in this area inspired me to Fermi-estimate a millipede version of Moore's Law, which suggests we should be up to megapedes by 2140 and gigapedes by 2300.
Feb 22, 2022 • 18min
Play Money And Reputation Systems
https://astralcodexten.substack.com/p/play-money-and-reputation-systems?utm_source=url For now, US-based prediction markets can't use real money without clearing near-impossible regulatory hurdles. So smaller and more innovative projects will have to stick with some kind of play money or reputation-based system. I used to be really skeptical here, but Metaculus and Manifold have softened my stance. So let's look closer at how and whether these kinds of systems work. Any play money or reputation system has to confront two big design decisions: Should you reward absolute accuracy, relative accuracy, or some combination of both? Should your scoring be zero-sum, positive-sum, or negative-sum? Relative Vs. Absolute Accuracy As far as I know, nobody suggests rewarding only absolute accuracy; the debate is between relative accuracy and some combination of both. Why? If you rewarded only absolute accuracy, it would be trivially easy to make money predicting 99.999% on "will the sun rise tomorrow" style questions.
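To make the "sun rise tomorrow" point concrete, here is a minimal sketch (my own illustration, not from the post, with made-up numbers) of how a purely absolute scoring rule - a Brier-style score - hands out nearly free points on trivially certain questions, while a well-calibrated call on a genuinely hard question earns far less:

```python
# Toy sketch (my own example, not from the post): points under a purely
# "absolute accuracy" rule, here a Brier-style score where higher is better.

def brier_points(forecast: float, outcome: int) -> float:
    """1 minus squared error, so a perfect forecast earns 1 point."""
    return 1.0 - (forecast - outcome) ** 2

# Strategy A: spam 100 "will the sun rise tomorrow" questions at 99.999%.
sure_things = sum(brier_points(0.99999, 1) for _ in range(100))

# Strategy B: one genuinely hard question, forecast at its ~60% base rate,
# which happens to resolve YES.
hard_call = brier_points(0.60, 1)

print(f"100 trivial questions: {sure_things:.2f} points")         # ~100.00
print(f"1 hard question, well forecast: {hard_call:.2f} points")   # 0.84
```

A relative (peer-comparison) rule sidesteps this: everyone else also predicts ~100% on the sure things, so there is no edge to be gained there, and the points flow to whoever beats the crowd on the hard questions.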
Feb 19, 2022 • 2min
Austin Meetup Next Sunday
https://astralcodexten.substack.com/p/austin-meetup-next-sunday?utm_source=url I'll be in Austin on Sunday, 2/27, and the meetup group there has kindly agreed to host me and anyone else who wants to show up. We'll be at RichesArt (an art gallery with an outdoor space) at 2511 E 6th St Unit A from noon to 3. The organizer is sbarta@gmail.com; you can contact him if you have any questions. As per usual procedure, everyone is invited. Please feel free to come even if you feel awkward about it, even if you're not "the typical ACX reader", even if you're worried people won't like you, etc. You may (but don't have to) RSVP here.
Feb 18, 2022 • 12min
The Gods Only Have Power Because We Believe In Them
https://astralcodexten.substack.com/p/the-gods-only-have-power-because?utm_source=url [with apologies to Terry Pratchett and TVTropes] "Is it true," asked the student, "that the gods only have power because we believe in them?" "Yes," said the sage. "Then why not appear openly? How many more people would believe in the Thunderer if, upon first gaining enough worshipers to cast lightning at all, he struck all of the worst criminals and tyrants?" "Because," said the sage, "the gods only gain power through belief, not knowledge. You know there are trees and clouds; are they thereby gods? Just as lightning requires close proximity of positive and negative charge, so divinity requires close proximity of belief and doubt. The closer your probability estimate of a god's existence is to 50%, the more power they gain from you. Complete atheism and complete piety alike are useless to them."
Feb 18, 2022 • 1h 16min
Book Review: Sadly, Porn
https://astralcodexten.substack.com/p/book-review-sadly-porn I. Freshman English class says all books need a conflict. Man vs. Man, Man vs. Self, whatever. The conflict in Sadly, Porn is Author vs. Reader. The author - the pseudonymous "Edward Teach, MD" - is a spectacular writer. Your exact assessment of his skill will depend on where you draw the line between writing ability and other virtues - but where he's good, he's amazing. Nobody else takes you for quite the same kind of ride. He's also impressively erudite, drawing on the Greek and Latin classics, the Bible, psychoanalytic literature, and all of modern movies and pop culture. Sometimes you read the scholars of two hundred years ago and think "they just don't make those kinds of guys anymore". They do, and Teach is one of them. If you read his old blog, The Last Psychiatrist, you have even more reasons to appreciate him. His expertise in decoding scientific studies and in psychopharmacology helped me a lot as a med student and resident. His political and social commentary was delightfully vicious, but also seemed genuinely aimed at helping his readers become better people. My point is: the author is a multitalented person who I both respect and want to respect. This sets up the conflict.
Feb 15, 2022 • 21min
Mantic Monday: Ukraine Cube Manifold
https://astralcodexten.substack.com/p/mantic-monday-ukraine-cube-manifold?r=fm577 Ukraine Thanks to Clay Graubard for doing my work for me: These run from about 48% to 60%, but I think the differences are justified by the slightly different wordings of the question and definitions of "invasion". You see a big jump last Friday when the US government increased the urgency of their own warnings. I ignored this on Friday because I couldn't figure out what their evidence was, but it looks like the smart money updated a lot on it. A few smaller markets that Clay didn't include: Manifold is only at 36% despite several dozen traders. I think they're just wrong - but I'm not going to use any more of my limited supply of play money to correct it, thus fully explaining the wrongness. Futuur is at 47%, but also thinks there's an 18% chance Russia invades Lithuania, so I'm going to count this as not really mature. Insight Prediction, a very new site I've never seen before, claims to have $93,000 invested and a probability of 22%, which is utterly bizarre; I'm too suspicious and confused to invest, and maybe everyone else is too. (PredictIt, Polymarket, and Kalshi all avoid this question. I think PredictIt has a regulatory agreement that limits them to politics. Polymarket and Kalshi might just not be interested, or they might be too PR-sensitive to want to look like they're speculating on wars where thousands of people could die.) What happens afterwards? Clay beats me again: For context: