Astral Codex Ten Podcast

Jeremiah
Apr 14, 2023 • 8min

Spring Meetups Everywhere 2023

https://astralcodexten.substack.com/p/spring-meetups-everywhere-2023 Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times. This year we have spring meetups planned in over eighty cities, from Tokyo to Punta Cana in the Dominican Republic. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen. You can find the list below, in the following order:

Africa
Asia-Pacific (including Australia)
Europe (including UK)
Latin America
North America (including Canada)
Apr 9, 2023 • 28min

Book Review: The Arctic Hysterias

https://astralcodexten.substack.com/p/book-review-the-arctic-hysterias I. Strange things are done in the midnight sun, say the poets who wrote of old. The Arctic trails have their secret tales that would make your blood run cold. The Northern Lights have seen queer sights, but the queerest they ever did see are chronicled in The Arctic Hysterias, psychiatrist Edward Foulks’ description of the culture-bound disorders of the Eskimos¹. For example, kayak phobia:
Apr 9, 2023 • 11min

Most Technologies Aren't Races

https://astralcodexten.substack.com/p/most-technologies-arent-races [Disclaimer: I’m not an AI policy person, the people who are have thought about these scenarios in more depth, and if they disagree with this I’ll link to their rebuttals] Some people argue against delaying AI because it might make China (or someone else) “win” the AI “race”. But suppose AI is “only” a normal transformative technology, no more important than electricity, automobiles, or computers. Who “won” the electricity “race”? Maybe Thomas Edison, but that didn’t cause Edison’s descendants to rule the world as emperors, or make Menlo Park a second Rome. It didn’t even especially advantage America. Edison personally got rich, the overall balance of power didn’t change, and today all developed countries have electricity.
Apr 9, 2023 • 27min

Highlights From The Comments On Telemedicine Regulations

https://astralcodexten.substack.com/p/highlights-from-the-comments-on-telemedicine [Original post: The Government Is Making Telemedicine Hard And Inconvenient Again] Table Of Contents:

1: Isn’t drug addiction very bad?
2: Is telemedicine worse than regular medicine?
3: What about “pill mills”?
4: Do people force the blind to fill out forms before they can access Braille?
5: Was I unfairly caricaturing Christian doctors?
6: Which part of the government is responsible for this regulation?
7: How do other countries do this?
Apr 4, 2023 • 12min

MR Tries The Safe Uncertainty Fallacy

https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy The Safe Uncertainty Fallacy goes: The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go. Therefore, it’ll be fine. You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy. For years, people used the Safe Uncertainty Fallacy on AI timelines: Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be “soon”.
Apr 4, 2023 • 10min

The Government Is Making Telemedicine Hard And Inconvenient Again

https://astralcodexten.substack.com/p/the-government-is-making-telemedicine [I’m writing this quickly to deal with an evolving situation and I’m not sure I fully understand the intricacies of this law - please forgive any inaccuracies. I’ll edit them out as I learn about them.] Telemedicine is when you see a doctor (or nurse, PA, etc) over a video call. Medical regulators hate new things, so for its first decade they ensured telemedicine was hard and inconvenient. Then came COVID-19. Suddenly important politicians were paying attention to questions about whether people could get medical care without leaving their homes. They yelled at the regulators, and the regulators grudgingly agreed to temporarily make telemedicine easy and convenient. They say “nothing is as permanent as a temporary government program”, but this only applies to government programs that make your life worse. Government programs that make your life better are ephemeral and can disappear at any moment. So a few months ago, the medical regulators woke up, realized the pandemic was over, and started plotting ways to make telemedicine hard and inconvenient again.
Mar 30, 2023 • 38min

Turing Test

https://astralcodexten.substack.com/p/turing-test The year is 2028, and this is Turing Test!, the game show that separates man from machine! Our star tonight is Dr. Andrea Mann, a generative linguist at University of California, Berkeley. She’ll face five hidden contestants, code-named Earth, Water, Air, Fire, and Spirit. One will be a human telling the truth about their humanity. One will be a human pretending to be an AI. One will be an AI telling the truth about their artificiality. One will be an AI pretending to be human. And one will be a total wild card. Dr. Mann, you have one hour, starting now.
Mar 25, 2023 • 8min

Half An Hour Before Dawn In San Francisco

https://astralcodexten.substack.com/p/half-an-hour-before-dawn-in-san-francisco I try to avoid San Francisco. When I go, I surround myself with people; otherwise I have morbid thoughts. But a morning appointment and miscalculated transit time find me alone on the SF streets half an hour before dawn. The skyscrapers get to me. I’m an heir to Art Deco and the cult of progress; I should idolize skyscrapers as symbols of human accomplishment. I can’t. They look no more human than a termite nest. Maybe less. They inspire awe, but no kinship. What marvels techno-capital creates as it instantiates itself, too bad I’m a hairless ape and can take no credit for such things.  
Mar 25, 2023 • 13min

Why Do Transgender People Report Hypermobile Joints?

https://astralcodexten.substack.com/p/why-do-transgender-people-report [Related: Why Are Transgender People Immune To Optical Illusions?] I. Ehlers-Danlos syndrome is a category of connective tissue disorder; it usually involves stretchy skin and loose, hypermobile joints. For a few years now, doctors who work with transgender people have commented on an apparently high rate of EDS in this population. For example, Dr. Will Powers, who specializes in hormone therapy, wrote about how he “can’t ignore anymore” that “some sort of hypermobility issue or flat out EDS shows up WAY WAY more than it statistically should” in his transgender patients. Najafian et al finally counted the incidence in 1363 patients at their gender affirmation surgery (ie sex change) clinic, and found that “the prevalence of EDS diagnosis in our patient population is 132 times the highest reported prevalence in the general population”. Coming from the other direction, Jones et al, a group of doctors who treat joint disorders in adolescents, found that “17% of the EDS population in our multidisciplinary clinic self-report as [transgender and gender-diverse], which is dramatically higher than the national average of 1.3%”. Why should this be? I know of four and a half theories:
Mar 25, 2023 • 23min

Why I Am Not (As Much Of) A Doomer (As Some People)

Machine Alignment Monday 3/13/23 https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer (see also Katja Grace and Will Eden’s related cases) The average online debate about AI pits someone who thinks the risk is zero against someone who thinks it’s any other number. I agree these are the most important debates to have for now. But within the community of concerned people, numbers vary all over the place:

Scott Aaronson says 2%
Will MacAskill says 3%
The median machine learning researcher on Katja Grace’s survey says 5 - 10%
Paul Christiano says 10 - 20%
The average person working in AI alignment thinks about 30%
Top competitive forecaster Eli Lifland says 35%
Holden Karnofsky, on a somewhat related question, gives 50%
Eliezer Yudkowsky seems to think >90%

As written this makes it look like everyone except Eliezer is <=50%, which isn’t true; I’m just having trouble thinking of other doomers who are both famous enough that you would have heard of them, and have publicly given a specific number. I go back and forth more than I can really justify, but if you force me to give an estimate it’s probably around 33%; I think it’s very plausible that we die, but more likely that we survive (at least for a little while). Here’s my argument, and some reasons other people are more pessimistic.