Astral Codex Ten Podcast

Mar 11, 2022 • 12min

Advice For Unwoke Academic?

https://astralcodexten.substack.com/p/advice-for-unwoke-academic An academic recently asked me for advice. A lucky career development has now made him almost un-fire-able, and he wants to join the fight for academic freedom. We talked about two different strategies:

Fabian Strategy: Become a beloved pillar of his college community. Volunteer for all those committees everyone always tries to weasel out of. When some wokeness-related issue comes up - merit vs. diversity hiring, wokeness study class requirements for majors, firing professors who say unwoke things, etc - use his reputation and position to fight back. Kindly but firmly make it clear that he opposes wokeness, and that other academics in the same position are not alone. Occasionally, when the college administrators make some extreme and obvious overstep - something “we’ve cancelled all yoga classes because they’re cultural appropriation”-level unpopular - escalate it, make sure everyone in the world hears about it, then claim the easy victory when they back down.

Berserker Strategy: Pick fights. Literally pick the fights - study up on college policy, get to know the administrators well enough to understand which policies they’re forced to follow and which ones they’ll cave on immediately, learn the relevant laws, lawyer up, be 99% sure he can win any fight he picks - but then pick fights. Invite controversial speakers, knowing that there will be big protests. Then make sure there are lots of cameras around as hundreds of college students hurl garbage and expletives at some kindly old sociologist who said biological sex was real one time or whatever. Do this consistently, in a way that probably makes him lots of enemies and ensures he’ll never get any position of power, but which keeps this issue in front of everyone’s eyeballs. Make sure that everyone sees him successfully standing up to the mob, having his speakers speak, and continuing to be employed and happy. If the college tries to shut him down, sue them and win, in a way that will make colleges more reluctant to shut people down in the future.
Mar 9, 2022 • 30min

Zounds! It's Zulresso and Zuranolone!

https://astralcodexten.substack.com/p/zounds-its-zulresso-and-zuranolone How excited should we be about the latest class of antidepressants?

1: What is Zulresso?

Wikipedia describes Zulresso as “A bat-winged, armless toad with tentacles instead of a face... ” - no! sorry! That’s Zvilpogghua, one of the Great Old Ones from the Lovecraft mythos. Zulresso is the brand name of allopregnanolone (aka brexanolone), a new medication for post-partum depression. It’s interesting as a potential missing link between hormones and normal mood regulation.

2: What do you mean by “missing link between hormones and normal mood regulation?”

Allopregnanolone is a naturally-occurring metabolite of the female hormone progesterone. In 1981, scientists found it was present in unusually high concentrations in the brain (including male brains), suggesting that maybe the brain was making it separately and using it for something. They did some tests and found that it was a positive allosteric modulator of GABA.
Mar 4, 2022 • 24min

What Are We Arguing About When We Argue About Rationality?

https://astralcodexten.substack.com/p/what-are-we-arguing-about-when-we The backstory: Steven Pinker wrote a book about rationality. The book concludes it is good. People should learn how to be more rational, and then we will have fewer problems. Howard Gardner, well-known wrong person, sort of criticized the book. The criticism was facile, a bunch of stuff like “rationality is important, but relationships are also important, so there”. Pinker’s counterargument - that Gardner’s criticism is self-refuting, since it uses rationality to argue against rationality - is dubious: Gardner’s essay actually avoids rationality pretty carefully. But even aside from that, it feels like Pinker is cheating, or missing the point, or being annoying. Gardner can’t be arguing that rationality is completely useless in 100% of situations. And if there’s any situation at all where you’re allowed to use rationality, surely it would be in annoying Internet arguments with Steven Pinker. We could turn Pinker’s argument back on him: he frames his book as a stirring defense of rationality against anti-rationalists. But why does he identify these people as anti-rationalists? Sure, they themselves identify as anti-rationalist. But why should he believe them? After all, they use rationality to make their case. If they won, what bad thing would happen? Even in whatever dystopian world they created, people would still use rationality to make cases.
Mar 3, 2022 • 6min

Microaddictions

https://astralcodexten.substack.com/p/microaddictions Everyone always says you should “eat mindfully”. I tried this once and it was weird. For example, I noticed that only the first few bites of a tasty food actually tasted good. After that I habituated and lost it. Not only that, but there was a brief period after I finished eating when I was below hedonic baseline. This seems pretty analogous to addiction, tolerance, and withdrawal. If you use eg heroin, I’m told it feels very good the first few times. After that it gets gradually less euphoric, until eventually you need it to feel okay at all. If you quit, you feel much worse than normal (withdrawal) for a while until you even out. I claim I went through this whole process in the space of a twenty-minute dinner. I notice this most strongly with potato chips. Presumably this is pretty common, given their branding.
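The tolerance-and-withdrawal dynamic described here can be made concrete with a toy opponent-process model - my illustration with made-up parameters, not anything from the post:

```python
# Toy habituation model (illustrative only): each bite pulls a hedonic
# "set point" toward the stimulus, so pleasure fades (tolerance) and
# stopping leaves you briefly below baseline (withdrawal).
stimulus, set_point, adaptation = 1.0, 0.0, 0.3

for bite in range(1, 9):
    pleasure = stimulus - set_point                    # felt pleasure shrinks each bite
    set_point += adaptation * (stimulus - set_point)   # habituation raises the set point
    print(f"bite {bite}: pleasure {pleasure:+.2f}")

# Once the chips are gone, the raised set point reads as negative affect:
print(f"after finishing: {0.0 - set_point:+.2f}")      # below hedonic baseline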
Mar 1, 2022 • 58min

Ukraine Warcasting

https://astralcodexten.substack.com/p/ukraine-warcasting Yeah, I know you’re saturated with Ukraine content. Yeah, I know everyone wants to relate their hobbyhorse to Ukraine. But I think it’s genuinely useful to talk about prediction markets right now. Current conventional wisdom is that the invasion was a miscalculation on Putin’s part, after he surrounded himself with so many yes-men that he lost touch with reality. But Ukraine miscalculated too; until almost the day of the invasion, Zelenskyy was saying everything would be okay. And if there’s a nuclear exchange, it will be because of miscalculation - I don’t know what the miscalculation will be, just that nobody goes into a nuclear exchange because they want to. Preserving people’s access to reality and helping them avoid miscalculations are peacekeeping measures, sometimes very important ones. The first part of this post looks at various markets’ predictions of how the war will go from here (Zvi published something like this a few hours before I could, so this will mostly duplicate his work). The second part very briefly tries to evaluate which markets have been most accurate so far - though this is a topic which deserves at least paper-length treatment. The third part looks at which pundits deserve eternal glory for publicly making strong true predictions, and which pundits deserve . . . something else, for doing . . . other things.
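For the "which markets have been most accurate" question, one standard tool is the Brier score. Here is a minimal sketch with invented numbers - the post doesn't commit to this metric, and as it says, the real evaluation deserves paper-length treatment:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical resolved questions: (probability the market gave, what happened)
market_a = [(0.80, 1), (0.30, 0), (0.60, 1)]
market_b = [(0.95, 1), (0.10, 0), (0.40, 1)]

print(brier_score(market_a))  # ~0.097
print(brier_score(market_b))  # ~0.124 -- market_a was better calibrated on this sample
```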
Feb 26, 2022 • 1min

Austin Meetup Correction

https://astralcodexten.substack.com/p/austin-meetup-correction?utm_source=url Austin meetup is still this Sunday, 2/27, 12-3. But the location has been switched to Moontower Cider Company at 1916 Tillery St. The organizer is still sbarta@gmail.com, and you can still contact him if you have any questions. As per usual procedure, everyone is invited. Please feel free to come even if you feel awkward about it, even if you’re not “the typical ACX reader”, even if you’re worried people won’t like you, etc. You may (but don’t have to) RSVP here.
Feb 24, 2022 • 1h 11min

Biological Anchors: A Trick That Might Or Might Not Work

https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might?utm_source=url

Introduction

I've been trying to review and summarize Eliezer Yudkowsky's recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we’re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra's talking about and what's going on. The Open Philanthropy Project ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it’s very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce a report on when human-level AI would arrive. Open Phil describes the resulting document as "informal" - but it’s 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it kind of formal. The report finds a 10% chance of “transformative AI” by 2031, a 50% chance by 2052, and an almost 80% chance by 2100. Eliezer rejects their methodology and expects AI earlier (he doesn’t offer many numbers, but here he gives Bryan Caplan 50-50 odds on 2030, albeit not totally seriously). He made the case in his own very long essay, Biology-Inspired AGI Timelines: The Trick That Never Works, sparking a bunch of arguments and counterarguments and even more long essays.
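To make the report's headline numbers concrete, here is a crude reading of its three quoted quantiles as points on a cumulative forecast. The piecewise-linear interpolation is my simplification; the report's actual curve comes from its biological-anchors model:

```python
import numpy as np

# Quantiles quoted above from the Cotra report: P(transformative AI by year)
years = np.array([2031, 2052, 2100])
cdf = np.array([0.10, 0.50, 0.80])

# Read off an intermediate year by linear interpolation between quoted points:
p_2040 = np.interp(2040, years, cdf)
print(f"Implied P(transformative AI by 2040) ~ {p_2040:.0%}")  # ~27%
```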
Feb 23, 2022 • 31min

Links For February

https://astralcodexten.substack.com/p/links-for-february?utm_source=url [Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]

1: The newest studies don’t find evidence that extracurriculars like chess, second languages, playing an instrument, etc can improve in-school learning.

2: Did you know: Spanish people consider it good luck to eat twelve grapes at midnight on New Year’s, one at each chime of the clock tower in Madrid. This has caused enough choking deaths that doctors started a petition to make the clock tower chime more slowly.

3: At long last, scientists have discovered a millipede that really does have (more than) a thousand legs, Eumillipes persephone, which lives tens of meters underground in Australia and in your nightmares. Recent progress in this area inspired me to Fermi-estimate a millipede version of Moore’s Law, which suggests we should be up to megapedes by 2140 and gigapedes by 2300 (a sketch of the arithmetic follows below).
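The Fermi estimate in link 3 is easy to reproduce: take Eumillipes persephone's 1,306 legs (described in 2021) and the "megapedes by 2140" milestone, and a leg-count doubling time falls out. The milestones are the post's; the arithmetic below is my reconstruction:

```python
import math

legs_now, year_now = 1306, 2021          # Eumillipes persephone, described in 2021
mega_legs, mega_year = 1_000_000, 2140   # the post's megapede milestone

doublings = math.log2(mega_legs / legs_now)     # ~9.6 doublings needed
t_double = (mega_year - year_now) / doublings   # ~12.4 years per doubling
print(f"Leg count doubles every ~{t_double:.1f} years")

# Extrapolate the same trend to a gigapede (1e9 legs):
giga_year = year_now + t_double * math.log2(1e9 / legs_now)
print(f"First gigapede around {giga_year:.0f}")  # mid-2260s, i.e. comfortably by 2300
```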
Feb 22, 2022 • 18min

Play Money And Reputation Systems

https://astralcodexten.substack.com/p/play-money-and-reputation-systems?utm_source=url For now, US-based prediction markets can’t use real money without clearing near-impossible regulatory hurdles. So smaller and more innovative projects will have to stick with some kind of play money or reputation-based system. I used to be really skeptical here, but Metaculus and Manifold have softened my stance. So let’s look closer at how and whether these kinds of systems work. Any play money or reputation system has to confront two big design decisions: Should you reward absolute accuracy, relative accuracy, or some combination of both? Should your scoring be zero-sum, positive-sum, or negative-sum?

Relative Vs. Absolute Accuracy

As far as I know, nobody suggests rewarding only absolute accuracy; the debate is between relative accuracy and some combination of both. Why? If you rewarded only absolute accuracy, it would be trivially easy to make money predicting 99.999% on “will the sun rise tomorrow” style questions.
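To see why absolute-only scoring is gameable, here is a minimal sketch. The log score is my illustrative choice; the post doesn't commit to a particular scoring rule:

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Absolute accuracy: log of the probability you put on what actually happened."""
    return math.log(p if outcome else 1.0 - p)

def relative_score(p: float, consensus: float, outcome: bool) -> float:
    """Relative accuracy: your score minus the consensus forecast's score."""
    return log_score(p, outcome) - log_score(consensus, outcome)

# "Will the sun rise tomorrow?" priced at 99.999% -- a free near-perfect score
# under absolute grading, so spamming sure things farms points:
print(log_score(0.99999, True))                  # ~-0.00001 (maximum possible is 0)
# Relative grading pays nothing unless you beat the consensus:
print(relative_score(0.99999, 0.99999, True))    # 0.0 -- no edge, no reward
```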
Feb 19, 2022 • 2min

Austin Meetup Next Sunday

https://astralcodexten.substack.com/p/austin-meetup-next-sunday?utm_source=url I’ll be in Austin on Sunday, 2/27, and the meetup group there has kindly agreed to host me and anyone else who wants to show up. We’ll be at RichesArt (an art gallery with an outdoor space) at 2511 E 6th St Unit A from noon to 3. The organizer is sbarta@gmail.com, and you can contact him if you have any questions. As per usual procedure, everyone is invited. Please feel free to come even if you feel awkward about it, even if you’re not “the typical ACX reader”, even if you’re worried people won’t like you, etc. You may (but don’t have to) RSVP here.
