

Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes

Feb 18, 2022 • 12min
The Gods Only Have Power Because We Believe In Them
https://astralcodexten.substack.com/p/the-gods-only-have-power-because?utm_source=url [with apologies to Terry Pratchett and TVTropes] “Is it true,” asked the student, “that the gods only have power because we believe in them?” “Yes,” said the sage. “Then why not appear openly? How many more people would believe in the Thunderer if, upon first gaining enough worshipers to cast lightning at all, he struck all of the worst criminals and tyrants?” “Because,” said the sage, “the gods only gain power through belief, not knowledge. You know there are trees and clouds; are they thereby gods? Just as lightning requires close proximity of positive and negative charge, so divinity requires close proximity of belief and doubt. The closer your probability estimate of a god’s existence is to 50%, the more power they gain from you. Complete atheism and complete piety alike are useless to them.”

Feb 18, 2022 • 1h 16min
Book Review: Sadly, Porn
https://astralcodexten.substack.com/p/book-review-sadly-porn I. Freshman English class says all books need a conflict. Man vs. Man, Man vs. Self, whatever. The conflict in Sadly, Porn is Author vs. Reader. The author - the pseudonymous “Edward Teach, MD” - is a spectacular writer. Your exact assessment of his skill will depend on where you draw the line between writing ability and other virtues - but where he’s good, he’s amazing. Nobody else takes you for quite the same kind of ride. He’s also impressively erudite, drawing on the Greek and Latin classics, the Bible, psychoanalytic literature, and all of modern movies and pop culture. Sometimes you read the scholars of two hundred years ago and think “they just don’t make those kinds of guys anymore”. They do and Teach is one of them. If you read his old blog, The Last Psychiatrist, you have even more reasons to appreciate him. His expertise in decoding scientific studies and in psychopharmacology helped me a lot as a med student and resident. His political and social commentary was delightfully vicious, but also seemed genuinely aimed at helping his readers become better people. My point is: the author is a multitalented person who I both respect and want to respect. This sets up the conflict.

Feb 15, 2022 • 21min
Mantic Monday: Ukraine Cube Manifold
https://astralcodexten.substack.com/p/mantic-monday-ukraine-cube-manifold?r=fm577 Ukraine Thanks to Clay Graubard for doing my work for me: These run from about 48% to 60%, but I think the differences are justified by the slightly different wordings of the question and definitions of “invasion”. You see a big jump last Friday when the US government increased the urgency of their own warnings. I ignored this on Friday because I couldn’t figure out what their evidence was, but it looks like the smart money updated a lot on it. A few smaller markets that Clay didn’t include: Manifold is only at 36% despite several dozen traders. I think they’re just wrong - but I’m not going to use any more of my limited supply of play money to correct it, thus fully explaining the wrongness. Futuur is at 47%, but also thinks there’s an 18% chance Russia invades Lithuania, so I’m going to count this as not really mature. Insight Prediction, a very new site I’ve never seen before, claims to have $93,000 invested and a probability of 22%, which is utterly bizarre; I’m too suspicious and confused to invest, and maybe everyone else is too. (PredictIt, Polymarket, and Kalshi all avoid this question. I think PredictIt has a regulatory agreement that limits them to politics. Polymarket and Kalshi might just not be interested, or they might be too PR-sensitive to want to look like they’re speculating on wars where thousands of people could die.) What happens afterwards? Clay beats me again: For context:

Feb 13, 2022 • 24min
Highlights From The Comments On Motivated Reasoning And Reinforcement Learning
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-motivated I. Comments From People Who Actually Know What They’re Talking About Gabriel writes: The brain trains on magnitude and acts on sign. That is to say, there are two different kinds of "module" that are relevant to this problem as you described, but they're not RL and other; they're both other. The learning parts are not precisely speaking reinforcement learning, at least not by the algorithm you described. They're learning the whole map of value, like a topographic map. Then the acting parts find themselves on the map and figure out which way leads upward toward better outcomes. More precisely then: The brain learns to predict value and acts on the gradient of predicted value. The learning parts are trying to find both opportunities and threats, but not unimportant mundane static facts. This is why, for example, people are very good at remembering and obsessing over intensely negative events that happened to them -- which they would not be able to do in the RL model the post describes! We're also OK at remembering intensely positive events that happened to us. But ordinary observations of no particular value mostly make no lasting impression. You could test this by a series of 3 experiments, in each of which you have a screen flash several random emoji on screen, and each time a specific emoji is shown to the subject, you either (A) penalize the subject such as with a shock, or (B) reward the subject such as with sweet liquid when they're thirsty, or (C) give the subject a stimulus that has no significant magnitude, whether positive or negative, such as changing the pitch of a quiet ongoing buzz that they were not told was relevant. I'd expect subjects in both conditions A and B to reliably identify the key emoji, whereas I'd expect quite a few subjects in condition C to miss it. By learning associations with a degree of value, whether positive or negative, it's possible to then act on the gradient in pursuit of whatever available option has highest value. This works reliably and means we can not only avoid hungry lions and seek nice ripe bananas, but we also do
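The distinction Gabriel draws, a learner that updates in proportion to the magnitude of value and then acts on the sign of the predicted-value gradient, can be made concrete with a small sketch. The Python below is an illustration only, not code from the post or the comment; the stimulus names, outcome values, and the simple delta-rule update are assumptions chosen to mirror the proposed emoji experiment, where the strongly negative "shock" stimulus and the strongly positive "sweet liquid" stimulus both become salient while the near-zero "quiet buzz" barely registers.

```python
# Toy sketch of "train on magnitude, act on sign" (illustrative assumptions,
# not the post's own code): the learner builds a map of predicted value per
# stimulus, while association strength (salience) tracks |value|, so intensely
# negative and intensely positive events are both well remembered.

LEARNING_RATE = 0.5

def learn_value_map(trials, n_steps=50):
    """Learn predicted value and salience for each stimulus.

    trials: dict mapping stimulus -> outcome value (e.g. -1.0 for a shock,
            +1.0 for sweet liquid, 0.05 for an irrelevant pitch change).
    """
    predicted_value = {s: 0.0 for s in trials}
    salience = {s: 0.0 for s in trials}
    for _ in range(n_steps):
        for stimulus, outcome in trials.items():
            # Delta-rule update toward the observed outcome value.
            error = outcome - predicted_value[stimulus]
            predicted_value[stimulus] += LEARNING_RATE * error
            # Salience tracks the *magnitude* of value: near-zero outcomes
            # (condition C) leave almost no trace.
            salience[stimulus] += LEARNING_RATE * (abs(outcome) - salience[stimulus])
    return predicted_value, salience

def act(predicted_value, options):
    """Act on the gradient: pick the available option with the highest
    predicted value (seek the ripe banana, avoid the hungry lion)."""
    return max(options, key=lambda s: predicted_value.get(s, 0.0))

if __name__ == "__main__":
    # Conditions A, B, C from the proposed experiment (values assumed).
    trials = {"shock_emoji": -1.0, "sweet_emoji": +1.0, "buzz_emoji": 0.05}
    values, salience = learn_value_map(trials)
    print("learned salience:", salience)  # high for A and B, near zero for C
    print("choice:", act(values, ["shock_emoji", "sweet_emoji", "buzz_emoji"]))
```

Under these assumptions, subjects in conditions A and B end up with strong learned associations to the key emoji and so identify it reliably, while condition C's association stays close to zero, matching the prediction in the comment.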

Feb 11, 2022 • 1h 28min
ACX Grants ++: The Second Half
https://astralcodexten.substack.com/p/acx-grants-the-second-half This is the closing part of ACX Grants. Projects that I couldn’t fully fund myself were invited to submit a brief description so I could at least give them free advertising here. You can look them over and decide if any seem worth donating your money, time, or some other resource to. I’ve removed obvious trolls, a few for-profit businesses without charitable value who tried to sneak in under the radar, and a few that violated my sensibilities for one or another reason. I have not removed projects just because they’re terrible, useless, or definitely won’t work. My listing here isn’t necessarily an endorsement; caveat lector. Still, some of them are good projects and deserve more attention than I was able to give them. Many applicants said they’d hang around the comments section here, so if you have any questions, ask! (bolded titles are my summaries and some of them might not be accurate or endorsed by the applicant) You can find the first 66 of these here.

Feb 10, 2022 • 51min
So You Want To Run A Microgrants Program
https://astralcodexten.substack.com/p/so-you-want-to-run-a-microgrants I. Medical training is a wild ride. You do four years of undergrad in some bio subject, ace your MCATs, think you’re pretty hot stuff. Then you do your med school preclinicals, study umpteen hours a day, ace your shelf exams, and it seems like you're pretty much there. Then you start your clinical rotations, get a real patient in front of you, and you realize - oh god, I know absolutely nothing about medicine. This is also how I felt about running a grants program. I support effective altruism, a vast worldwide movement focused on trying to pick good charities. Sometimes I go to their conferences, where they give lectures about how to pick good charities. Or I read their online forum, where people write posts about how to pick good charities. I've been to effective altruist meetups, where we all come together and talk about good charity picking. So I felt like, maybe, I don't know, I probably knew some stuff about how to pick good charities. And then I solicited grant proposals, and I got stuff like this: A. $60K to run simulations checking if some chemicals were promising antibiotics. B. $60K for a professor to study the factors influencing cross-cultural gender norms C. $50K to put climate-related measures on the ballot in a bunch of states. D. $30K to research a solution for African Swine Fever and pitch it to Uganda E. $40K to replicate psych studies and improve incentives in social science Which of these is the most important?

Feb 9, 2022 • 17min
Heuristics That Almost Always Work
https://astralcodexten.substack.com/p/heuristics-that-almost-always-work The Security Guard He works in a very boring building. It basically never gets robbed. He sits in his security guard booth doing the crossword. Every so often, there’s a noise, and he checks to see if it’s robbers, or just the wind. It’s the wind. It is always the wind. It’s never robbers. Nobody wants to rob the Pillow Mart in Topeka, Ohio. If a building on average gets robbed once every decade or two, he might go his entire career without ever encountering a real robber. At some point, he develops a useful heuristic: if he hears a noise, he might as well ignore it and keep on crossing words: it’s just the wind, bro. This heuristic is right 99.9% of the time, which is pretty good as heuristics go. It saves him a lot of trouble.

Feb 9, 2022 • 2min
Two Small Corrections And Updates
https://astralcodexten.substack.com/p/two-small-corrections-and-updates 1: I titled part of my post yesterday “RIP Polymarket”, which was a mistake. Polymarket would like to remind everyone that they are very much alive, with a real-money market available to anyone outside the US, and some kind of compliant US product (maybe a play-money market) in the works. 2: Sam M and Eric N want to remind you that you have until the end of next week to get your 2022 prediction contest entries in. Also: "We have some plans to compare (aggregates of) ACX reader predictions against various prediction markets. But there are probably much cooler things we can do which we haven't thought of yet! If you run a prediction market and have an idea for an interesting collaboration that involves sharing our data before it's publicly released, get in touch with us through the contest feedback form. If it's something time sensitive (e.g. an experiment that needs to be started before the contest submission deadline), make sure you do so soon. If you don't run a prediction market but still have an idea for something interesting we can do with the contest data, leave a comment on this open thread and we'll hopefully see it." You can reach them through this form.

Feb 8, 2022 • 19min
The Passage Of Polymarket
The podcast discusses the recent $1.4 million fine imposed on Polymarket, the legal status of prediction markets, and the competitive landscape. It covers the consequences Polymarket faces, the restrictive environment for prediction markets in the US, and the limitations of Manifold Markets. It also looks at the future of prediction markets, the challenges facing crypto, and the limits of the internet when it comes to privacy and censorship resistance.

Feb 5, 2022 • 4min
Book Review Contest Rules 2022
https://astralcodexten.substack.com/p/book-review-contest-rules-2022 Okay, we’re officially doing this again. Write a review of a book. There’s no official word count requirement, but last year’s finalists and winners were often between 2,000 and 10,000 words. There’s no official recommended style, but check the style of last year’s finalists and winners or my ACX book reviews (1, 2, 3) if you need inspiration. Please limit yourself to one entry per person or team. Then send me your review through this Google Form. The form will ask for your name, email, the title of the book, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you’re a finalist. DON’T INCLUDE YOUR NAME OR ANY HINT ABOUT YOUR IDENTITY IN THE GOOGLE DOC ITSELF, ONLY IN THE FORM. I want to make this contest as blinded as possible, so I’m going to hide that column in the form immediately and try to judge your docs on their merit. (does this mean you can’t say something like “This book about war reminded me of my own experiences as a soldier” because that gives a hint about your identity? My rule of thumb is - if I don’t know who you are, and the average ACX reader doesn’t know who you are, you’re fine. I just want to prevent my friends / other judges’ friends / Internet semi-famous people from having an advantage. If you’re in one of those categories and think your personal experience would give it away, please don’t write about your personal experience.) PLEASE MAKE SURE THE GOOGLE DOC IS UNLOCKED AND I CAN READ IT. By default, nobody can read Google Docs except the original author. You’ll have to go to Share, then on the bottom of the popup click on “Restricted” and change to “Anyone with the link”. If you send me a document I can’t read, I will probably disqualify you, sorry. First prize will get at least $2,500, second prize at least $1,000, third prize at least $500; I might increase these numbers later on. All winners and finalists will get free publicity (including links to any other works you want me to link to) and free ACX subscriptions. And all winners will get the right to pitch me new articles if they want (nobody ever takes me up on this). Your due date is April 5th. Good luck! If you have any questions, ask them in the comments. And remember, the form for submitting entries is here.