Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes
Dec 11, 2022 • 22min
Why I'm Less Than Infinitely Hostile To Cryptocurrency
https://astralcodexten.substack.com/p/why-im-less-than-infinitely-hostile Go anywhere in Silicon Valley these days and start saying the word "cryp - ". Before you get to the second syllable, everyone around you will chant in unison "PONZIS 100% SCAMS ZERO-LEGITIMATE-USE-CASES SPEEDRUNNING-THE-HISTORY-OF-FINANCIAL-FRAUD!" It's really quite impressive. I'm no true believer. But I'm less than infinitely hostile to crypto. This is becoming a pretty rare position, so let me explain why: Crypto Is Full Of Extremely Clear Use Cases, Which It Already Succeeds At Very Well Look at the graph of countries that use crypto the most (source):
Dec 11, 2022 • 9min
Know Your GABA-A Receptor Subunits
https://astralcodexten.substack.com/p/know-your-gaba-a-receptor-subunits Many psychiatric drugs and supplements affect GABA, the brain's main inhibitory neurotransmitter. But some have different effects than others. Why? This is rarely a productive question to ask in psychiatry, and this situation is no exception. But if you persist long enough, someone will eventually tell you to study GABA receptor subunits, which I am finally getting around to doing. GABA-A is the most common type of GABA receptor. Seen from the side, it looks like a bell pepper; seen from above, it looks like a tech company logo.
Dec 2, 2022 • 25min
Book Review: First Sixth Of Bobos In Paradise
https://astralcodexten.substack.com/p/book-review-first-sixth-of-bobos I. David Brooks' Bobos In Paradise is an uneven book. The first sixth is a daring historical thesis that touches on every aspect of 20th-century America. The next five-sixths are the late-90s equivalent of "millennials just want avocado toast!" I'll review the first sixth here, then see if I can muster enough enthusiasm to get to the rest later. The daring thesis: a 1950s change in Harvard admissions policy destroyed one American aristocracy and created another. Everything else is downstream of the aristocracy, so this changed the whole character of the US. The pre-1950s aristocracy went by various names: the Episcopacy, the Old Establishment, the Boston Brahmins. David Brooks calls them WASPs, which is evocative but ambiguous. He doesn't just mean Americans who happen to be white, Anglo-Saxon, and Protestant - there are tens of millions of those! He means old-money blue-blooded Great-Gatsby-villain WASPs who live in Connecticut, go sailing, play lacrosse, belong to country clubs, and have names like Thomas R. Newbury-Broxham III. Everyone in their family has gone to Yale for eight generations; if someone in the ninth generation got rejected, the family patriarch would invite the Chancellor of Yale to a nice game of golf and mention it in a very subtle way, and the Chancellor would very subtly apologize and say that of course a Newbury-Broxham must go to Yale, and whoever is responsible shall be very subtly fired forthwith. The old-money WASPs were mostly descendants of people who made their fortunes in colonial times (or at worst the 1800s); they were a merchant aristocracy. As the descendants of merchants, they acted as standard-bearers for the bourgeois virtues: punctuality, hard work, self-sufficiency, rationality, pragmatism, conformity, ruthlessness, whatever made your factory out-earn its competitors. By the 1950s they were several generations removed from any actual hustling entrepreneur.
Still, at their best the seed ran strong and they continued to embody some of these principles. Brooks tentatively admires the WASP aristocracy for their ethos of noblesse oblige - many become competent administrators, politicians, and generals. George H. W. Bush, scion of a rich WASP family, served with distinction in World War II - the modern equivalent would be Bill Gates' or Charles Koch's kids volunteering as front-line troops in Afghanistan.
Dec 1, 2022 • 35min
Highlights From The Comments On Semaglutide
This episode covers reader comments on semaglutide, the weight loss drug: obtaining it at lower cost, comparisons with other weight loss drugs, challenges to Scott's claims, how long patients need to stay on it, supply shortages, and its potential impact on healthcare costs. It also addresses skepticism about the prediction that obesity rates will be cut in half by 2050, the body's tendency to return to a set-point weight, and personal anecdotes about the effectiveness of semaglutide for weight loss.
Nov 30, 2022 • 38min
Can This AI Save Teenage Spy Alex Rider From A Terrible Fate?
We're showcasing a hot new totally bopping, popping musical track called "bromancer era? bromancer era?? bromancer era???" His subtle sublime thoughts raced, making his eyes literally explode. https://astralcodexten.substack.com/p/can-this-ai-save-teenage-spy-alex "He peacefully enjoyed the light and flowers with his love," she said quietly, as he knelt down gently and silently. "I also would like to walk once more into the garden if I only could," he said, watching her. "I would like that so much," Katara said. A brick hit him in the face and he died instantly, though not before reciting his beloved last vows: "For psp and other releases on friday, click here to earn an early (presale) slot ticket entry time or also get details generally about all releases and game features there to see how you can benefit!" — Talk To Filtered Transformer Rating: 0.1% probability of including violence "Prosaic alignment" is the most popular paradigm in modern AI alignment. It theorizes that we'll train future superintelligent AIs the same way that we train modern dumb ones: through gradient descent via reinforcement learning. Every time they do a good thing, we say "Yes, like this!", in a way that pulls their incomprehensible code slightly in the direction of whatever they just did. Every time they do a bad thing, we say "No, not that!", in a way that pushes their incomprehensible code slightly in the opposite direction. After training on thousands or millions of examples, the AI displays a seemingly sophisticated understanding of the conceptual boundaries of what we want. For example, suppose we have an AI that's good at making money. But we want to align it to a harder task: making money without committing any crimes. So we simulate it running money-making schemes a thousand times, and give it positive reinforcement every time it generates a legal plan, and negative reinforcement every time it generates a criminal one.
At the end of the training run, we hopefully have an AI that's good at making money and aligned with our goal of following the law. Two things could go wrong here: The AI is stupid, ie incompetent at world-modeling. For example, it might understand that we don't want it to commit murder, but not understand that selling arsenic-laden food will kill humans. So it sells arsenic-laden food and humans die. The AI understands the world just fine, but didn't absorb the categories we thought it absorbed. For example, maybe none of our examples involved children, and so the AI learned not to murder adult humans, but didn't learn not to murder children. This isn't because the AI is too stupid to know that children are humans. It's because we're running a direct channel to something like the AI's "subconscious", and we can only talk to it by playing this dumb game of "try to figure out the boundaries of the category including these 1,000 examples". Problem 1 is self-resolving; once AIs are smart enough to be dangerous, they're probably smart enough to model the world well. How bad is Problem 2? Will an AI understand the category boundaries of what we want easily and naturally after just a few examples? Will it take millions of examples and a desperate effort? Or is there some reason why even smart AIs will never end up with goals close enough to ours to be safe, no matter how many examples we give them? AI scientists have debated these questions for years, usually as pure philosophy. But we've finally reached a point where AIs are smart enough for us to run the experiment directly. Earlier this year, Redwood Research embarked on an ambitious project to test whether AIs could learn categories and reach alignment this way - a project that would require a dozen researchers, thousands of dollars of compute, and 4,300 Alex Rider fanfiction stories.
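The "nudge the weights toward good examples, away from bad ones" loop, and the Problem 2 failure mode, can be sketched in a few lines of Python. This is my own toy illustration, not Redwood's actual experiment: the feature names and training set are invented, and a perceptron stands in for gradient descent on a real model.

```python
# Toy sketch of alignment-by-examples: each labeled example nudges the
# weights slightly toward (for good plans) or away from (for bad plans)
# whatever the model just did. All names here are invented for illustration.

def train(examples, epochs=20, lr=0.1):
    """examples: list of (feature_dict, label), label +1 (legal) / -1 (criminal)."""
    w = {}
    for _ in range(epochs):
        for feats, label in examples:
            score = sum(w.get(f, 0.0) * v for f, v in feats.items())
            if score * label <= 0:  # model judged wrong: nudge its weights
                for f, v in feats.items():
                    w[f] = w.get(f, 0.0) + lr * label * v
    return w

def judge(w, feats):
    """+1 = model approves the plan, -1 = model rejects it."""
    return 1 if sum(w.get(f, 0.0) * v for f, v in feats.items()) > 0 else -1

# Hypothetical training set of money-making plans. Note that no example
# ever mentions children -- exactly the coverage gap in Problem 2.
data = [
    ({"sells_product": 1}, +1),
    ({"pays_taxes": 1, "sells_product": 1}, +1),
    ({"harms_adults": 1}, -1),
    ({"commits_fraud": 1}, -1),
]
w = train(data)
print(judge(w, {"harms_adults": 1}))                     # rejected, as trained
print(judge(w, {"sells_product": 1, "harms_children": 1}))  # approved! unseen
                                                            # feature has zero weight
```

The model isn't too stupid to know children are humans; "harms_children" simply never appeared in the 1,000-example game, so the learned category boundary doesn't cover it.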
Nov 25, 2022 • 29min
Semaglutidonomics
140 million obese Americans x $15,000/year for obesity drugs = . . . uh oh, that can't be right. https://astralcodexten.substack.com/p/semaglutidonomics Semaglutide started off as a diabetes medication. Pharma company Novo Nordisk developed it in the early 2010s, and the FDA approved it under the brand names Ozempic® (for the injectable) and Rybelsus® (for the pill). I think "Ozempic" sounds like one of those unsinkable ocean liners, and "Rybelsus" sounds like a benevolent mythological blacksmith. Patients reported significant weight loss as a side effect. Semaglutide was a GLP-1 agonist, a type of drug that has good theoretical reasons to affect weight, so Novo Nordisk studied this and found that yes, it definitely caused people to lose a lot of weight. More weight than any safe drug had ever caused people to lose before. In 2021, the FDA approved semaglutide for weight loss under the brand name Wegovy®. "Wegovy" sounds like either a cooperative governance platform, or some kind of obscure medieval sin. Weight loss pills have a bad reputation. But Wegovy is a big step up. It doesn't work for everybody. But it works for 66-84% of people, depending on your threshold. (Source) Of six major weight loss drugs, only two - Wegovy and Qsymia - have a better than 50-50 chance of helping you lose 10% of your weight. Qsymia works partly by making food taste terrible; it can also cause cognitive issues. Wegovy feels more natural; patients just feel full and satisfied after they've eaten a healthy amount of food. You can read the gushing anecdotes here (plus some extra anecdotes in the comments). Wegovy patients also lose more weight on average than Qsymia patients - 15% compared to 10%. It's just a really impressive drug. Until now, doctors didn't really use medication to treat obesity; the drugs either didn't work or had too many side effects. They recommended either diet and exercise (for easier cases) or bariatric surgery (for harder ones). 
Semaglutide marks the start of a new generation of weight loss drugs that are more clearly worthwhile. Modeling Semaglutide Accessibility 40% of Americans are obese - that's 140 million people. Most of them would prefer to be less obese. Suppose that a quarter of them want semaglutide. That's 35 million prescriptions. Semaglutide costs about $15,000 per year, multiply it out, that's about $500 billion. Americans currently spend $300 billion per year total on prescription drugs. So if a quarter of the obese population got semaglutide, that would cost almost twice as much as all other drug spending combined. It would probably bankrupt half the health care industry. So . . . most people who want semaglutide won't get it? Unclear. America's current policy for controlling medical costs is to buy random things at random prices, then send all the bills to an illiterate reindeer-herder named Yagmuk, who burns them for warmth. Anything could happen! Right now, only about 50,000 Americans take semaglutide for obesity. I'm basing this off this report claiming "20,000 weekly US prescriptions" of Wegovy; since it's taken once per week, maybe this means there are 20,000 users? Or maybe each prescription contains enough Wegovy to last a month and there are 80,000 users? I'm not sure, but it's somewhere in the mid five digits, which I'm rounding to 50,000. That's only 0.1% of the potential 35 million. The next few sections of this post are about why so few people are on semaglutide, and whether we should expect that to change. I'll start by going over my model of what determines semaglutide use, then look at a Morgan Stanley projection of what will happen over the next decade. Step 1: Awareness I model semaglutide use as interest * awareness * prescription accessibility * affordability. I already randomly guessed interest at 25%, so the next step is awareness. How many people are aware of semaglutide? The answer is: a lot more now than when I first started writing this article! 
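The back-of-envelope arithmetic above, made explicit (the figures are the post's own; the 25% interest rate is Scott's stated random guess):

```python
# Scott's model: use = interest * awareness * prescription accessibility * affordability.
# This just checks the first factor's cost implications.
obese_americans = 140_000_000   # 40% of the US population, per the post
interest = 0.25                 # fraction who'd want semaglutide (guessed)
annual_cost = 15_000            # dollars per patient per year

prescriptions = obese_americans * interest
total_spend = prescriptions * annual_cost
print(prescriptions)            # 35 million prescriptions
print(total_spend / 1e9)        # ~$525 billion/year, vs ~$300B total current
                                # US prescription drug spending
```

Hence the "uh oh": one drug at a quarter uptake would nearly double all current prescription drug spending, and the current 50,000 users are only about 0.1% of that potential 35 million.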
Novo Nordisk's Wegovy Gets Surprise Endorsement From Elon Musk, says the headline. And here's Google Trends:
Nov 23, 2022 • 5min
"Is Wine Fake?" In Asterisk Magazine
I wrote an article on whether wine is fake. It's not here, it's at asteriskmag.com, the new rationalist / effective altruist magazine. Congratulations to my friend Clara for making it happen. Stories include: Modeling The End Of Monkeypox: I'm especially excited about this one. The top forecaster (of 7,000) in the 2021 Good Judgment competition explains his predictions for monkeypox. If you've ever rolled your eyes at a column by some overconfident pundit, this is maybe the most opposite-of-that thing ever published. Book Review - What We Owe The Future: You've read mine, this is Kelsey Piper's. Kelsey is always great, and this is a good window into the battle over the word "long-termism". Making Sense Of Moral Change: Interview with historian Christopher Brown on the end of the slave trade. "There is a false dichotomy between sincere activism and self-interested activism. Abolitionists were quite sincerely horrified by slavery and motivated to end it, but their fight for abolition was not entirely altruistic." How To Prevent The Next Pandemic: MIT professor Kevin Esvelt talks about applying the security mindset to bioterrorism. "At least 38,000 people can assemble an influenza virus from scratch. If people identify a new [pandemic] virus . . . then you just gave 30,000 people access to an agent that is of nuclear-equivalent lethality." Rebuilding After The Replication Crisis: This is Stuart Ritchie, hopefully you all know him by now. "Fundamentally, how much more can we trust a study published in 2022 compared to one from 2012?" Why Isn't The Whole World Rich? Professor Dietrich Vollrath's introduction to growth economics. What caused the South Korean miracle, and why can't other countries copy it? Is Wine Fake? By me! How come some people say blinded experts can tell the country, subregion, and year of any wine just by tasting it, but other people say blinded experts get fooled by white wines dyed red? 
China's Silicon Future: Why does China have so much trouble building advanced microchips? How will the CHIPS act affect its broader economic rise? By Karson Elmgren.
Nov 22, 2022 • 29min
Mantic Monday: Twitter Chaos Edition
Plus FTX charges, scandal markets - and oh yeah, wasn't there some kind of midterm recently? https://astralcodexten.substack.com/p/mantic-monday-twitter-chaos-edition Twitter! This is all going to be so, so obsolete by the time I finish writing it and hit the "send post" button. But here goes: 395 traders on this, so one of Manifold's biggest markets, probably representative. The small print defines a major outage as one that lasts more than an hour. See here for a good explanation of why some people expect Twitter outages. https://www.metaculus.com/questions/embed/13499/ Polymarket is within 2% of Manifold. Metaculus here has slightly stricter criteria but broadly agrees. 71 traders, still pretty good, but I find it meaningless without a way to distinguish between "everything collapses, Elon sells it for peanuts to scavengers" vs. "Elon saves Twitter, then hands it over to a minion while he moves on to a company building giant death zeppelins". Oh, here we go. 20 traders, they think Musk will stay in charge. 23 traders. Twitter was profitable in 2018 and 2019, then went back to being net negative in 2020 and 2021 (I don't know why). I don't think it's been very profitable lately, so it would be a feather in Musk's cap if he accomplished this. 24 traders. Twitter's mDAU have consistently gone up in the past. DAU is slightly different and I think more likely to include bots. 26 traders. One thing I like about Manifold is that it lets you choose any point along the gradient from "completely objective" (eg Twitter's reported DAU count) to "completely subjective" (eg whether the person who made the market thinks something is better or worse). This at least uses a poll as its resolution method. But the poll will be in the comments of this market, which means it will mostly be by people who invested in this market, who'll have strong incentives to manipulate it. Maybe Manifold should add a polling platform to their service?
Nov 18, 2022 • 46min
The Psychopharmacology Of The FTX Crash
https://astralcodexten.substack.com/p/the-psychopharmacology-of-the-ftx Must not blog about FTX . . . must not blog about . . . ah, $#@% it Tyler Cowen linked Milky Eggs' excellent overview of the FTX crash. I'm unqualified to comment on any of the financial or regulatory aspects. But it turns out there's a psychopharmacology angle, which I am qualified to talk about, so let's go. I wrote this pretty rushed because it's an evolving news story. Sorry if it's less polished than usual. 1: Was SBF Using A Medication That Can Cause Overspending And Compulsive Gambling As A Side Effect? Probably yes, and maybe it could have had some small effect, but probably not as much as the people discussing it on Twitter think. Milky Eggs reports a claim by an employee that Sam was on "a patch for designer stimulants that mainlined them into his blood to give him a constant buzz at all times". This could be a hyperbolic description of Emsam, a patch form of the antidepressant/antiparkinsonian agent selegiline. The detectives at the @AutismCapital Twitter account found a photo of SBF, zoomed in on a scrap of paper on his desk, and recognized it as an Emsam wrapper.
Nov 13, 2022 • 35min
Contra Resident Contrarian On Unfalsifiable Internal States
https://astralcodexten.substack.com/p/contra-resident-contrarian-on-unfalsifiable I. Contra Resident Contrarian . . . Resident Contrarian writes On Unfalsifiable Internal States, where he defends his skepticism of jhana and other widely-claimed hard-to-falsify internal states. It's long, but I'll quote a part that seemed especially important to me: I don't really want to do the part of this article that's about how it's reasonable to doubt people in some contexts. But to get to the part I want to talk about, I sort of have to. There is a thriving community of people pretending to have a bunch of multiple personalities on TikTok. They are (they say) composed of many quirky little somebodies, complete with different fun backstories. They get millions of views talking about how great life is when lived as multiples, and yet almost everyone who encounters these videos in the wild goes "What the hell is this? Who pretends about this kind of stuff?" There's an internet community of people, mostly young women, who pretend to be sick. They call themselves Spoonies; it's a name derived from the idea that physically and mentally well people have unlimited "spoons", or mental/physical resources they use to deal with their day. Spoonies are claiming to have fewer spoons, but also en masse have undiagnosable illnesses. They trade tips on how to force their doctors to give them diagnoses: > In a TikTok video, a woman with over 30,000 followers offers advice on how to lie to your doctor. "If you have learned to eat salt and follow internet instructions and buy compression socks and squeeze your thighs before you stand up to not faint…and you would faint without those things, go into that appointment and tell them you faint." Translation: You know your body best. And if twisting the facts (like saying you faint when you don't) will get you what you want (a diagnosis, meds), then go for it. One commenter added, "I tell docs I'm adopted. 
They'll order every test under the sun"—because adoption means there may be no family history to help with diagnoses. And doctors note being able to sort of track when particular versions of illnesses get flavor-of-the-week status: > Over the pandemic, neurologists across the globe noticed a sharp uptick in teen girls with tics, according to a report in the Wall Street Journal. Many at one clinic in Chicago were exhibiting the same tic: uncontrollably blurting out the word "beans." It turned out the teens were taking after a popular British TikToker with over 15 million followers. The neurologist who discovered the "beans" thread, Dr. Caroline Olvera at Rush University Medical Center, declined to speak with me—because of "the negativity that can come from the TikTok community," according to a university spokesperson. Almost no one who encounters them assumes they are actually sick. Are there individuals in each of these communities that are "for real"? Probably, especially in the case of the Spoonies; undiagnosed or undiagnosable illnesses are a real thing. Are most of them legitimate? The answer seems to be a pretty clear "no". I'm not bringing them up to bully them; I suspect that there are profiteers and villains in both communities, but there's also going to be a lot of people driven to it as a form of coping with something else, like how we used to regard cutting and similar forms of self-harm. And, you know, a spectrum of people in between those two poles, like you'd expect with nearly anything. But it's relevant to bring up because there seem to be far more Spoonies and DID TikTok-fad folks than people who say they orgasm looking at blankets because they did some hard thinking (or non-thinking) earlier. 
So when Scott says something that boils down to "this is credible, because a lot of people say they experience this", I have to mention that there's groups that say they experience a lot of stuff in just the same way that basically nobody believes is experiencing anything close to what they say they are. Granting that this is not the part of the article RC wants to write, he starts by bringing up "spoonies" and people with multiple personalities as people who it's reasonable to doubt. I want to go over both cases before responding to the broader point. II. . . . On Spoonies "Spoonies" are people with unexplained medical symptoms. RC says he thinks a few may be for real, but most aren't. I have the opposite impression. Certainly RC's examples don't prove what he thinks they prove. He brings up one TikToker's advice: In a TikTok video, a woman with over 30,000 followers offers advice on how to lie to your doctor. "If you have learned to eat salt and follow internet instructions and buy compression socks and squeeze your thighs before you stand up to not faint…and you would faint without those things, go into that appointment and tell them you faint." Translation: You know your body best. And if twisting the facts (like saying you faint when you don't) will get you what you want (a diagnosis, meds), then go for it. One commenter added, "I tell docs I'm adopted. They'll order every test under the sun"—because adoption means there may be no family history to help with diagnoses. This person is using a deliberately eye-catching title (Lies To Tell Your Doctor) to get clicks. But if you read what they're saying, it's reasonable and honest! They're saying "If you used to faint all the time, and then after making a bunch of difficult lifestyle changes you can now mostly avoid fainting, and your doctor asks 'do you have a fainting problem yes/no', answer yes!" THIS IS GOOD ADVICE. Imagine that one day you wake up and suddenly you have terrible leg pain whenever you walk. 
So you mostly don't walk anywhere. Or if you do have to walk, you use crutches and go very slowly, because then it doesn't hurt. And given all of this, you don't experience leg pain. If you tell your doctor "I have leg pain", are you lying? You might think this weird situation would never come up - surely the patient would just explain the whole situation clearly? One reason it might come up is that all this is being done on a form - "check the appropriate box, do you faint yes/no?". Another reason it might come up is that a nurse or someone takes your history and they check off boxes on a form. Another reason it might come up is that everything about medical communication is inexplicably terrible; this is why you spend umptillion hours in med school learning "history taking" instead of just saying "please tell me all relevant information, one rational human being to another".