

Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes

Aug 10, 2021 • 23min
Contra Drum On The Fish Oil Story
https://astralcodexten.substack.com/p/contra-drum-on-the-fish-oil-story Support the author on Substack: astralcodexten.substack.com Then support the podcast: www.patreon.com/sscpodcast I. Kevin Drum questions my interpretation of the infant fish oil story. (It's actually more complicated: I posted a shorter version, then corrected it with a longer version based on the account of one of the doctors involved, but said the longer version basically supported my shorter one. Drum had also found the longer version and was about to publish an article saying it debunked my shorter version, then noticed I had seen the same account and thought it supported me, and he thinks I was wrong to believe this.) He writes: This is headshakingly dense. As a hit on the FDA, his post wasn't right at all — not its basic structure and not anything else about it. He even admits that although Gura criticizes plenty of other actors, the FDA isn't one of them...I have no idea how you can write "they usually carry out their mandate well" in one place and then, in your main post, just go ahead and repeat your original belief—backed by an example you know is wrong—that the FDA does stupid and destructive things on practically a daily basis. This is why I'm automatically skeptical of anything on the web that's excessively critical of the FDA.

Aug 8, 2021 • 24min
Details Of The Infant Fish Oil Story
https://astralcodexten.substack.com/p/details-of-the-infant-fish-oil-story I. In my recent post on the FDA, I mentioned a story about a fish-oil-based infant nutritional fluid called Omegaven. The FDA took too long to approve it, and lots of infants died. I plucked that from the anti-FDA blogosphere, where it had been floating around for a while in various incarnations. I tried to check it before publishing, but only enough to confirm the basic outline. A concerned reader sent me a Cochrane paper suggesting that the fluid was no better than previous treatments, which would potentially exonerate the FDA. This was concerning enough that I decided to spend a longer time trying to figure out the specifics.

Aug 7, 2021 • 34min
Highlights From The Comments On Acemoglu And AI
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-acemoglu Eugene Norman writes: This… “People have said climate change could cause mass famine and global instability by 2100. But actually, climate change is contributing to hurricanes and wildfires right now! So obviously those alarmists are wrong and nobody needs to worry about future famine and global instability at all.” …isn’t a good analogy at all. Because nobody is arguing that climate change now doesn’t lead to increased climate change in the future. They are the same thing but accelerated. However there’s no certainty that narrow AI leads to a superintelligence. In fact it won’t. There’s no becoming self aware in the algorithms. I’m against this for two reasons. First, self-awareness is spooky. I honestly have no idea what self-awareness is or what it even potentially could be. I hate having this disc…

Aug 6, 2021 • 34min
Adumbrations Of Aducanumab
https://astralcodexten.substack.com/p/adumbrations-of-aducanumab Lots of people have been writing about aducanumab, but this Atlantic article in particular bothers me. Backing up: aducanumab, aka Aduhelm, is a new “Alzheimers drug” recently approved by the FDA. I use the scare quotes because it’s pretty unclear whether it actually treats Alzheimers. It definitely treats beta-amyloid plaques, and beta-amyloid plaques are kind of nasty-looking brain structures that seem to be related to Alzheimers somehow. But we’re not sure exactly how they’re related, they might not be related in a way where removing them treats Alzheimers, and the best studies don’t find that the drug helps patients feel better or remember things more. Aducanumab doesn’t meet normal FDA standards for approval, but the FDA approved it anyway under one of their many “fast track” programs for promising drugs. This has been pretty roundly criticized, because although aducanumab might or might not work, it definitely costs $50,000/year/patient. Even if it worked great, that would be a hard pill to swallow (no pun intended, Aduhelm is an IV infusion), but it’s especially galling since it might not work at all. Doctors will probably prescribe it despite its questionable value, and someone will end up paying the extraordinary price tag. (Who? Nobody knows. The patient? Insurance companies? Taxpayers? Unrelated patients at the same hospital? Could be anyone! The whole point of the US health insurance system is to make sure nobody ever figures out who bears any particular cost, so that there's no constituency for keeping prices low. If you check your bank account one day and find it's down $50,000 for no reason, I guess you were the guy who ended up on the hook for this one. Sorry!)

Aug 5, 2021 • 24min
What Should We Make Of Sasha Chapin's Claim That Taking LSD Restored His Sense Of Smell After COVID?
https://astralcodexten.substack.com/p/what-should-we-make-of-sasha-chapins I. Substack blogger Sasha Chapin writes that COVID-19 Took My Sense Of Smell, LSD Brought It Back. He got coronavirus, and like many people lost his sense of smell (medical term: dysosmia or anosmia). Ten days after recovery, he still couldn’t smell anything. He looked on Twitter and found some anecdotal reports that psychedelics had helped with this, so he took LSD and tried to smell some stuff while tripping. He says it “totally worked. Fully and near-instantaneously. Like a light switch turning on.” The details: My idea was that I’d do some scent training while on LSD, to—hand-wavey lay neuroscience incoming—stimulate whatever olfactory neurogenesis might occur. Before tripping, I laid out my fragrance collection, along with a few ingredients from the pantry. All-in-all, there were about fifty things to smell, and, as the LSD started kicking in, I started making my way through the selection. At that moment, my sense of smell was still somewhat there but mostly not. However, something odd was happening; I could detect some of the fragrances’ nuances that I couldn’t pick up earlier that day, and what I detected shifted from moment to moment. It was like I was listening to a piece of music with random instruments dropping in and out of the mix. This was still a kind of anosmia, but a different kind, and it almost felt as if my olfaction was re-negotiating reality in real time. And then another weird thing happened. For a couple of hours, I got acute short-term parosmia (distorted smell.) My nose felt dry, and a weird puke-y smell filled my mind. According to some research I’d done, in anosmic patients parosmia sometimes precedes recovery, so, though this was quite unpleasant, I felt hopeful that this was some part of the regeneration process. I cleaned the house, my wife took me shopping, we went to Home Depot, and then had dinner. 
We got home soon after, about seven hours after my trip began, and I returned to my fragrance collection. Cue triumphant music: all of them were now smellable, in high-definition. My anosmia was gone. Moreover, some were more pleasant than before; iris was more palatable to me than it ever had been. This was a moment I won’t soon forget. Some fragrances—especially Dzing!—gave me full-body chills. The next day, my sense of smell was still there, but it fluctuated; it was partial in the morning, then full in the evening. Since then, it’s been back basically 100%. (And the improved understanding of iris has persisted.) The number one explanation for incredible Internet medical stories is always “placebo effect”. Number two is “coincidence”, number three is “they made it up”. All of these top the list for Sasha’s experience too. Still, enough people have said something like this that I think it’s worth trying to figure out if there’s any plausible mechanism. II. Anosmia sucks worse than you would expect. For one thing, smell is linked to taste, so most things taste bad or weird or neutral. For another, it’s correlated with a much higher risk of depression, and some preliminary work suggests this could be causal (possible mechanism: the brain is getting fewer forms of stimulation?) Some studies find that exposing rats to very strong scents makes them less depressed; it would be funny if this was how aromatherapy worked in humans. So COVID-induced anosmia is actually a serious problem. According to annoying people who refuse to provide useful information, between 3% and 98% of people who get coronavirus lose some sense of smell. A meta-analysis that pools all these studies gives a best estimate of around 40%. Lots of respiratory viruses cause some smell loss when they infect your nasal passages, but coronavirus is worse than usual. 
Milder cases cause more olfactory problems than more severe cases, suggesting that the immune response is at least as involved as the virus itself. The coronavirus cannot infect neurons directly, but might infect other cells in the nose, including cells which support neurons and help regenerate the olfactory epithelium. About half of COVID patients recover their smell in a few weeks, but some cases linger for up to a year. By the end of a year 95%+ have recovered; given that between 3% - 12% of people have random smell disturbances at any given time anyway, I interpret this latter figure less as “some people never recover” and more as “we reach the point where it’s impossible to distinguish from background problems”. Sasha says he was only ten days in when he took LSD, so this is well inside the window where we would expect him to eventually recover anyway. But it still doesn’t make sense that he recovered within the space of a few hours, or that he felt his smell was stronger than before.

Aug 4, 2021 • 21min
Model City Monday 8/2/21
https://astralcodexten.substack.com/p/model-city-monday-8221 Support the author here: astralcodexten.substack.com Then, you can support this podcast at: www.patreon.com/sscpodcast
Greenhouse Effect
Honduras remains the country to watch in the charter city sphere, with its ZEDE law allowing unprecedented levels of freedom and protection. I’d previously written about two Honduran projects, the high-tech island hub of Prospera and the industrial heartland project of Ciudad Morazan. Now there’s a third: ZEDE Orquidea (“Orchid Zone”). I’m not really impressed with their publicity effort (my browser insists their website is a security hazard and won’t let me access it). My only real source of information is this Reddit post by another charter city enthusiast, who writes:

Aug 2, 2021 • 11min
Updated Look At Long-Term AI Risks
https://astralcodexten.substack.com/p/updated-look-at-long-term-ai-risks The last couple of posts here talked about long-term risks from AI, so I thought I’d highlight the results of a new expert survey on exactly what those risks are. There have been a lot of these surveys recently, but this one is a little different. Starting from the beginning: in 2012-2014, Muller and Bostrom surveyed 550 people with various levels of claim to the title "AI expert" on the future of AI. People in philosophy of AI or other very speculative fields gave numbers around 20% chance of AI causing an "existential catastrophe" (eg human extinction); people in normal technical AI research gave numbers around 7%. In 2016-2017, Grace et al surveyed 1634 experts, 5% of whom predicted an extremely catastrophic outcome. Both of these surveys were vulnerable to response bias (eg the least speculative-minded people might think the whole issue was stupid and not even return the survey). The new paper - Carlier, Clarke, and Schuett (not currently public, sorry, but you can read the summary here) - isn't exactly continuing in this tradition. Instead of surveying all AI experts, it surveys people who work in "AI safety and governance", ie people who are already concerned with AI being potentially dangerous, and who have dedicated their careers to addressing this. As such, they were more concerned on average than the people in previous surveys, and gave a median ~10% chance of AI-related catastrophe (~5% in the next 50 years, rising to ~25% if we don’t make a directed effort to prevent it; means were a bit higher than medians). Individual experts' probability estimates ranged from 0.1% to 100% (this is how you know you’re doing good futurology). None of that is really surprising. What's new here is that they surveyed the experts on various ways AI could go wrong, to see which ones the experts were most concerned about. Going through each of them in a little more detail:
1. Superintelligence: This is the "classic" scenario that started the field, ably described by people like Nick Bostrom and Eliezer Yudkowsky. AI progress goes from human-level to vastly-above-human-level very quickly, maybe because slightly-above-human-level AIs themselves are speeding it along, or maybe because it turns out that if you can make an IQ 100 AI for $10,000 worth of compute, you can make an IQ 500 AI for $50,000. You end up with one (or a few) completely unexpected superintelligent AIs, which wield far-future technology and use it in unpredictable ways based on untested goal structures.
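As an aside, the survey's pattern of means running higher than medians is exactly what a right-skewed distribution of probability estimates produces: most respondents cluster around the middle, while a few very high answers drag the mean upward without moving the median. A toy illustration with entirely made-up numbers (not the actual survey data):

```python
import statistics

# Hypothetical probability estimates of AI-related catastrophe, chosen to
# mimic the skew the survey describes: most answers cluster near 10%, a few
# respondents give very high numbers (the survey's range was 0.1% to 100%).
estimates = [0.001, 0.02, 0.05, 0.08, 0.10, 0.10, 0.12, 0.15, 0.40, 1.00]

median = statistics.median(estimates)  # middle of the pack, ignores outliers
mean = statistics.mean(estimates)      # dragged upward by the high answers

print(f"median = {median:.3f}")  # 0.100
print(f"mean   = {mean:.3f}")    # 0.202
```

The median is robust to the handful of near-certain respondents, which is one reason surveys like this tend to report medians first and mention the (higher) means as a caveat.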

Jul 30, 2021 • 13min
When Does Worrying About Things Trade Off Against Worrying About Other Things?
https://astralcodexten.substack.com/p/when-does-worrying-about-things-trade On yesterday’s post, some people tried to steelman Acemoglu’s argument into something like this: There’s a limited amount of public interest in AI. The more gets used up on the long-term risk of superintelligent AI, the less is left for near-term AI risks like unemployment or autonomous weapons. Sure, maybe Acemoglu didn’t explain his dismissal of long-term risks very well. But given that he thinks near-term risks are bigger than long-term ones, it’s fair to argue that we should shift our limited budget of risk awareness more towards the former at the expense of the latter. I agree this potentially makes sense. But how would you treat each of the following arguments?: (1): Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people. (2): Instead of worrying about Republican obstructionism in Congress, we should worry about the potential for novel variants of COVID to wreak devastation in the Third World. (3): Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia.

Jul 29, 2021 • 19min
Contra Acemoglu On...Oh God, We're Doing This Again, Aren't We?
https://astralcodexten.substack.com/p/contra-acemoglu-onoh-god-were-doing The Washington Post has published yet another "luminary in unrelated field discovers AI risk, pronounces it stupid" article. This time it's Daron Acemoglu. I respect Daron Acemoglu and appreciate the many things his work has revealed about economics. In particular, I respect him so much that I wish he would stop embarrassing himself by writing this kind of article (I feel the same way about Steven Pinker and Ted Chiang). In service of this goal, I want to discuss the piece briefly. I’ll start with what I think is its main flaw, then nitpick a few other things: 1: The Main Flaw: “AI Is Dangerous Now, So It Can’t Be Dangerous Later" This is the basic structure around which this article is written. It goes: 1. Some people say that AI might be dangerous in the future. 2. But AI is dangerous now! 3. So it can’t possibly be dangerous in the future. 4. QED! I have no idea why Daron Acemoglu and every single other person who writes articles on AI for the popular media thinks this is such a knockdown argument. But here we are. He writes: AI detractors have focused on the potential danger to human civilization from a super-intelligence if it were to run amok. Such warnings have been sounded by tech entrepreneurs Bill Gates and Elon Musk, physicist Stephen Hawking and leading AI researcher Stuart Russell.

Jul 28, 2021 • 21min
Mantic Monday 7/26
https://astralcodexten.substack.com/p/mantic-monday-726
This Week In Markets
PredictIt remains easy to use, high-volume, and focused almost entirely on horse-race political questions. At least we might get rid of Cuomo. Polymarket remains a fun alternative way to learn about the news. I only heard about the monkeypox issue a few days ago, and hearing “22% chance of it spreading” is both faster and more useful than some article that dithers for a few paragraphs and finally concludes that “health officials warn Americans not to panic”. I would count it a minor victory if one day news sources routinely included this in their articles, eg “Polymarket, a major prediction engine, estimates a 22% chance that at least one other person will catch the disease.” Extra credit for the last market, which seems to be successfully predicting a scalar instead of a binary outcome - I’ve seen Metaculus experiment with this technology, but this is the first time I’ve spotted it at Polymarket using real money. Some of the more interesting new Metaculus markets. The space telescope one is especially interesting in the context of whether we could use prediction markets to predict (and maybe manage) government delays and cost overruns. The telescope is currently scheduled for launch in October 2025, so the market expects it to be about five years late. For context, the previous space telescope, James Webb, was originally scheduled for 2007 and (if everything goes well) will launch later this year.
God Help Us, Let’s Try Predicting The Coronavirus Some More
Anxiety is growing about the new Delta variant of coronavirus. What do the prediction markets say? Here’s Polymarket: