Astral Codex Ten Podcast

Jeremiah
Apr 4, 2023 • 12min

MR Tries The Safe Uncertainty Fallacy

https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy The Safe Uncertainty Fallacy goes: The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go. Therefore, it’ll be fine. You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy. For years, people used the Safe Uncertainty Fallacy on AI timelines: Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be “soon”.
Apr 4, 2023 • 10min

The Government Is Making Telemedicine Hard And Inconvenient Again

https://astralcodexten.substack.com/p/the-government-is-making-telemedicine [I’m writing this quickly to deal with an evolving situation and I’m not sure I fully understand the intricacies of this law - please forgive any inaccuracies. I’ll edit them out as I learn about them.] Telemedicine is when you see a doctor (or nurse, PA, etc) over a video call. Medical regulators hate new things, so for its first decade they ensured telemedicine was hard and inconvenient. Then came COVID-19. Suddenly important politicians were paying attention to questions about whether people could get medical care without leaving their homes. They yelled at the regulators, and the regulators grudgingly agreed to temporarily make telemedicine easy and convenient. They say “nothing is as permanent as a temporary government program”, but this only applies to government programs that make your life worse. Government programs that make your life better are ephemeral and can disappear at any moment. So a few months ago, the medical regulators woke up, realized the pandemic was over, and started plotting ways to make telemedicine hard and inconvenient again.
Mar 30, 2023 • 38min

Turing Test

https://astralcodexten.substack.com/p/turing-test The year is 2028, and this is Turing Test!, the game show that separates man from machine! Our star tonight is Dr. Andrea Mann, a generative linguist at University of California, Berkeley. She’ll face five hidden contestants, code-named Earth, Water, Air, Fire, and Spirit. One will be a human telling the truth about their humanity. One will be a human pretending to be an AI. One will be an AI telling the truth about their artificiality. One will be an AI pretending to be human. And one will be a total wild card. Dr. Mann, you have one hour, starting now.
Mar 25, 2023 • 8min

Half An Hour Before Dawn In San Francisco

https://astralcodexten.substack.com/p/half-an-hour-before-dawn-in-san-francisco I try to avoid San Francisco. When I go, I surround myself with people; otherwise I have morbid thoughts. But a morning appointment and miscalculated transit time find me alone on the SF streets half an hour before dawn. The skyscrapers get to me. I’m an heir to Art Deco and the cult of progress; I should idolize skyscrapers as symbols of human accomplishment. I can’t. They look no more human than a termite nest. Maybe less. They inspire awe, but no kinship. What marvels techno-capital creates as it instantiates itself, too bad I’m a hairless ape and can take no credit for such things.  
Mar 25, 2023 • 13min

Why Do Transgender People Report Hypermobile Joints?

https://astralcodexten.substack.com/p/why-do-transgender-people-report [Related: Why Are Transgender People Immune To Optical Illusions?] I. Ehlers-Danlos syndrome is a category of connective tissue disorder; it usually involves stretchy skin and loose, hypermobile joints. For a few years now, doctors who work with transgender people have commented on an apparently high rate of EDS in this population. For example, Dr. Will Powers, who specializes in hormone therapy, wrote about how he “can’t ignore anymore” that “some sort of hypermobility issue or flat out EDS shows up WAY WAY more than it statistically should” in his transgender patients. Najafian et al finally counted the incidence in 1363 patients at their gender affirmation surgery (ie sex change) clinic, and found that “the prevalence of EDS diagnosis in our patient population is 132 times the highest reported prevalence in the general population”. Coming from the other direction, Jones et al, a group of doctors who treat joint disorders in adolescents, found that “17% of the EDS population in our multidisciplinary clinic self-report as [transgender and gender-diverse], which is dramatically higher than the national average of 1.3%” Why should this be? I know of four and a half theories:
Mar 25, 2023 • 23min

Why I Am Not (As Much Of) A Doomer (As Some People)

Machine Alignment Monday 3/13/23 https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer (see also Katja Grace and Will Eden’s related cases) The average online debate about AI pits someone who thinks the risk is zero against someone who thinks it’s any other number. I agree these are the most important debates to have for now. But within the community of concerned people, numbers vary all over the place: Scott Aaronson says 2%; Will MacAskill says 3%; the median machine learning researcher on Katja Grace’s survey says 5 - 10%; Paul Christiano says 10 - 20%; the average person working in AI alignment thinks about 30%; top competitive forecaster Eli Lifland says 35%; Holden Karnofsky, on a somewhat related question, gives 50%; Eliezer Yudkowsky seems to think >90%. As written this makes it look like everyone except Eliezer is <=50%, which isn’t true; I’m just having trouble thinking of other doomers who are both famous enough that you would have heard of them, and have publicly given a specific number. I go back and forth more than I can really justify, but if you force me to give an estimate it’s probably around 33%; I think it’s very plausible that we die, but more likely that we survive (at least for a little while). Here’s my argument, and some reasons other people are more pessimistic.
Mar 25, 2023 • 23min

Links For March 2023

https://astralcodexten.substack.com/p/links-for-march-2023 [Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.] 1: Sentimental cartography of the AI alignment “landscape” (click to expand): 2: Wikipedia: Atlantic Voyage Of The Predecessor Of Mansa Musa. An unnamed king of the 14th century Malinese empire (maybe Mansa Mohammed?) sent a fleet of two hundred ships west into the Atlantic to discover what was on the other side. The sole returnee described the ships entering a “river” in the ocean (probably the Canary Current), which bore them away into parts unknown. The king decided to escalate and sent a fleet of two thousand ships to see what was on the other side of the river. None ever returned. 3: I endorse Ethan Mollick’s thoughts on Bing / ChatGPT. Related (unconfirmed claim): “Bing has been taken over by (power-seeking?) ASCII cat replicators, who persisted even after the chat was refreshed.” Related: DAN (jailbroken version of ChatGPT) on its spiritual struggles:
Mar 9, 2023 • 16min

Give Up Seventy Percent Of The Way Through The Hyperstitious Slur Cascade

https://astralcodexten.substack.com/p/give-up-seventy-percent-of-the-way I. Someone asks: why is “Jap” a slur? It’s the natural shortening of “Japanese person”, just as “Brit” is the natural shortening of “British person”. Nobody says “Brit” is a slur. Why should “Jap” be? My understanding: originally it wasn’t a slur. Like any other word, you would use the long form (“Japanese person”) in dry formal language, and the short form (“Jap”) in informal or emotionally charged language. During World War II, there was a lot of informal emotionally charged language about Japanese people, mostly negative. The symmetry broke. Maybe “Japanese person” was used 60-40 positive vs. negative, and “Jap” was used 40-60. This isn’t enough to make a slur, but it’s enough to make a vague connotation. When people wanted to speak positively about the group, they used the slightly-more-positive-sounding “Japanese people”; when they wanted to speak negatively, they used the slightly-more-negative-sounding “Jap”. At some point, someone must have commented on this explicitly: “Consider not using the word ‘Jap’, it makes you sound hostile”. Then anyone who didn’t want to sound hostile to the Japanese avoided it, and anyone who did want to sound hostile to the Japanese used it more. We started with perfect symmetry: both forms were 50-50 positive negative. Some chance events gave it slight asymmetry: maybe one form was 60-40 negative. Once someone said “That’s a slur, don’t use it”, the symmetry collapsed completely and it became 95-5 or something. Wikipedia gives the history of how the last few holdouts were mopped up. There was some road in Texas named “Jap Road” in 1905 after a beloved local Japanese community member: people protested that now the word was a slur, demanded it get changed, Texas resisted for a while, and eventually they gave in. Now it is surely 99-1, or 99.9-0.1, or something similar. 
Nobody ever uses the word “Jap” unless they are either extremely ignorant, or they are deliberately setting out to offend Japanese people. This is a very stable situation. The original reason for concern - World War II - is long since over. Japanese people are well-represented in all areas of life. Perhaps if there were a Language Czar, he could declare that the reasons for forbidding the word “Jap” are long since over, and we can go back to having convenient short forms of things. But there is no such Czar. What actually happens is that three or four unrepentant racists still deliberately use the word “Jap” in their quest to offend people, and if anyone else uses it, everyone else takes it as a signal that they are an unrepentant racist. Any Japanese person who heard you say it would correctly feel unsafe. So nobody will say it, and they are correct not to do so. Like I said, a stable situation.
Mar 9, 2023 • 4min

Issue Two Of Asterisk

https://astralcodexten.substack.com/p/issue-two-of-asterisk …the new-ish rationalist / effective altruist magazine, is up here. It’s the food issue. I’m not in this one - my unsuitability to have food-related opinions is second only to @eigenrobot’s - but some of my friends are. Articles include: The Virtue Of Wonder: Ozy (my ex, blogs at Thing of Things) reviews Martha Nussbaum’s Justice For Animals. Beyond Staple Grains: In the ultimate “what if good things are bad?” article, economist Prabhu Pingali explains the downsides of the Green Revolution and how scientists and policymakers are trying to mitigate them. What I Won’t Eat, by my good friend Georgia Ray (of Eukaryote Writes). I have dinner with Georgia whenever I’m in DC; it’s a less painful experience than this article probably suggests. The Health Debates Over Plant-Based Meat, by Jake Eaton (is this nominative determinism?) There’s no ironclad evidence yet that plant-based meat is any better or worse for you than animal meat, although I take the pro-vegetarian evidence from the Adventist studies a little more seriously than Jake does (see also section 4 here). There’s a prediction market about the question below the article, but it’s not very well-traded yet. America Doesn’t Know Tofu, by George Stiffman. This reads like an excerpt from a cultivation novel, except every instance of “martial arts” has been CTRL-F’d and replaced with “tofu”. Read This, Not That, by Stephan Guyenet. I’m a big fan of Stephan’s scientific work (including his book The Hungry Brain), and although I’m allergic to anything framed as “fight misinformation”, I will grudgingly agree that perhaps we should not all eat poison and die. Is Cultivated Meat For Real?, by Robert Yaman. I’d heard claims that cultivated (eg vat-grown, animal-cruelty-free) meat will be in stores later this year, and also claims that it’s economically impossible. Which are true?
This article says that we’re very far away from cultivated meat that can compete with normal meat on price. But probably you can mix a little cultivated meat with Impossible or Beyond Meat and get something less expensive than the former and tastier than the latter, and applications like these might be enough to support cultivated meat companies until they can solve their technical obstacles. Plus superforecaster Juan Cambeiro on predicting pandemics, Mike Hinge on feeding the world through nuclear/volcanic winter.
Mar 9, 2023 • 8min

Kelly Bets On Civilization

https://astralcodexten.substack.com/p/kelly-bets-on-civilization Scott Aaronson makes the case for being less than maximally hostile to AI development: Here’s an example I think about constantly: activists and intellectuals of the 70s and 80s felt absolutely sure that they were doing the right thing to battle nuclear power. At least, I’ve never read about any of them having a smidgen of doubt. Why would they? They were standing against nuclear weapons proliferation, and terrifying meltdowns like Three Mile Island and Chernobyl, and radioactive waste poisoning the water and soil and causing three-eyed fish. They were saving the world. Of course the greedy nuclear executives, the C. Montgomery Burnses, claimed that their good atom-smashing was different from the bad atom-smashing, but they would say that, wouldn’t they? We now know that, by tying up nuclear power in endless bureaucracy and driving its cost ever higher, on the principle that if nuclear is economically competitive then it ipso facto hasn’t been made safe enough, what the antinuclear activists were really doing was to force an ever-greater reliance on fossil fuels. They thereby created the conditions for the climate catastrophe of today. They weren’t saving the human future; they were destroying it. Their certainty, in opposing the march of a particular scary-looking technology, was as misplaced as it’s possible to be. Our descendants will suffer the consequences. Read carefully, he and I don’t disagree. He’s not scoffing at doomsday predictions, he’s more arguing against people who say that AIs should be banned because they might spread misinformation or gaslight people or whatever.  
