Astral Codex Ten Podcast

Jeremiah
Aug 12, 2022 • 17min

A Cyclic Theory Of Subcultures

https://astralcodexten.substack.com/p/a-cyclic-theory-of-subcultures David Chapman’s Geeks, MOPs, and Sociopaths In Subculture Evolution is rightfully a classic, but it doesn’t match my own experience. Either through good luck or poor observational skills, I’ve never seen a lot of sociopath takeovers. Instead, I’ve seen a gradual process of declining asabiyyah. Good people start out working together, then work together a little less, then turn on each other, all while staying good people and thinking they alone embody the true spirit of the movement. I find Peter Turchin’s theories of civilizational cycles oddly helpful here, maybe more so than for civilizations themselves. Riffing off his phase structure:
Aug 9, 2022 • 22min

Why Not Slow AI Progress?

Machine Alignment Monday 8/8/22 https://astralcodexten.substack.com/p/why-not-slow-ai-progress The Broader Fossil Fuel Community Imagine if oil companies and environmental activists were both considered part of the broader “fossil fuel community”. Exxon and Shell would be “fossil fuel capabilities”; Greenpeace and the Sierra Club would be “fossil fuel safety” - two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties - fossil fuel community parties - and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron. This is how AI safety works now. AI capabilities - the work of researching bigger and better AI - is poorly differentiated from AI safety - the work of preventing AI from becoming dangerous. Two of the biggest AI safety teams are at DeepMind and OpenAI, ie the two biggest AI capabilities companies. Some labs straddle the line between capabilities and safety research. Probably the people at DeepMind and OpenAI think this makes sense. Building AIs and aligning AIs could be complementary goals, like building airplanes and preventing the airplanes from crashing. It sounds superficially plausible. But a lot of people in AI safety believe that unaligned AI could end the world, that we don’t know how to align AI yet, and that our best chance is to delay superintelligent AI until we do know. Actively working on advancing AI seems like the opposite of that plan. So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies. Nothing violent or illegal - doing violent illegal things is the best way to lose 100% of your support immediately. But maybe glare a little at your friend who goes into AI capabilities research, instead of getting excited about how cool their new project is. 
Or agitate for government regulation of AI - either because you trust the government to regulate wisely, or because you at least expect them to come up with burdensome rules that hamstring the industry. While there are salient examples of government regulatory failure, some regulations - like the EU’s ban on GMOs or the US restrictions on nuclear power - have effectively stopped their respective industries.
Aug 7, 2022 • 31min

Your Book Review: Exhaustion

Finalist #13 in the Book Review Contest https://astralcodexten.substack.com/p/your-book-review-exhaustion [This is one of the finalists in the 2022 book review contest. It’s not by me - it’s by an ACX reader who will remain anonymous until after voting is done, to prevent their identity from influencing your decisions. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked - SA] I. Imagine you find yourself, over the course of a few weeks or months, becoming steadily more tired. You’re not doing any more work or other activities than you usually do, but nonetheless you find that you are able to do less and less before running out of energy. You start to pick and choose your battles – do I really feel up to this gym session? Do I really need to go to this work function? – and little by little your world begins to shrink. The sense of exhaustion becomes more pervasive, and persists from when you wake up until you go to sleep. Any exertion leads to you paying for it in a general worsening of exhaustion and malaise that makes you question whether the activity was worth it. Eventually, you learn your lesson and withdraw from even the most basic activities – sometimes you don’t get out of bed, have trouble feeding yourself, and find your thinking has become clouded and sluggish (a phenomenon sometimes called ‘brain fog’). Sleep becomes difficult, activities become less enjoyable, and you find that you are restless and anxious despite spending almost all your time attempting to rest.
Aug 5, 2022 • 15min

Absurdity Bias, Neom Edition

https://astralcodexten.substack.com/p/absurdity-bias-neom-edition Alexandros M expresses concern about my post on Neom. My post mostly just makes fun of Neom. My main argument against it is absurdity: a skyscraper the height of WTC1 and the length of Ireland? Come on, that’s absurd! But isn’t the absurdity heuristic a cognitive bias? Didn’t lots of true things sound absurd before they turned out to be true (eg evolution, quantum mechanics)? Don’t I specifically believe in things many people have found self-evidently absurd (eg the multiverse, AI risk)? Shouldn’t I be more careful about “this sounds silly to me, so I’m going to make fun of it”?
Aug 5, 2022 • 25min

Slightly Against Underpopulation Worries

https://astralcodexten.substack.com/p/slightly-against-underpopulation So I hear there’s an underpopulation crisis now. I think the strong version of this claim - that underpopulation could cause human extinction - is 100% false. The weaker version - that it could make life unpleasant in some countries - is true. But I don’t think it’s at the top of any list of things to worry about. 1: Declining Birth Rates Won’t Drive Humans Extinct, Come On Not only are we not going to go extinct because of underpopulation, population is going to continue to rise for the next 80 years. Although growth rate may hit zero a little after 2100, it will be centuries before the human population gets any lower than it is today - if it ever does. This is mostly because of sub-Saharan Africa (especially Nigeria) where birth rates remain very high. Although these are going down, in some cases faster than expected, current best projections say they will stay high enough to keep population growing for the rest of the century. 2: Immigrant-Friendly Countries Will Keep Growing Here are Our World In Data’s projections for US and UK populations:
Aug 3, 2022 • 24min

Model City Monday 8/1/22

https://astralcodexten.substack.com/p/model-city-monday-8122 Neom Neom Neom Suppose you are an oil-rich country. You drill the oil and get very rich, for now. But someday you will run out of oil, or the world will switch to green sustainable energy, and then you will stop being very rich. Seems bad. There are two main classes of solution to this problem. Norway’s solution is to invest the oil money into a sovereign wealth fund; after they run out of oil, they can stay rich off investment income. Dubai’s solution is to use the oil money to build a really impressive city, then hope that rich people (tourists, emigres, and multinational companies seeking regional hubs) will relocate there, and then they can tax those rich people. The Norwegian solution has a lot to recommend it. It’s a lot more certain: getting steady returns on capital is a solved problem in a way that development economics isn’t. And it scales better: there are a pretty limited number of rich people willing to move to new desert cities, and multinational companies only need one regional hub per region. Still, for a certain type of oil sheikh, building the world’s biggest everything has a certain unquantifiable charm.
Jul 30, 2022 • 43min

Your Book Review: Viral

https://astralcodexten.substack.com/p/your-book-review-viral Finalist #12 in the Book Review Contest [This is one of the finalists in the 2022 book review contest. It’s not by me - it’s by an ACX reader who will remain anonymous until after voting is done, to prevent their identity from influencing your decisions. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked - SA] Introduction Alina Chan and Matt Ridley’s Viral is a book about the investigation into the origins of the COVID-19 pandemic. In case you haven’t been following, there’s been a shift in the scientific consensus on this topic. For about the first year of the pandemic, it was widely accepted that SARS-CoV-2, the virus behind COVID-19, had a natural origin, meaning that it first spread to humans naturally from an animal (also called a zoonotic origin). Any suggestion that it could have come from a lab was dismissed as a conspiracy theory. Then, sometime around spring 2021, something changed. Well-known, respected scientists began to voice the opinion that SARS-CoV-2 might have come from a lab, or that it’s at least a plausible hypothesis that deserves an investigation. The scientific consensus abruptly shifted from “definitely natural origin” to “both natural origin and lab origin are viable hypotheses that should be investigated.” Viral is a deep dive into this issue from all angles, covering the basics of virology, the history and epidemiology of the COVID-19 pandemic, the response of scientific and governmental institutions, and various pieces of evidence for both hypotheses. It doesn’t contain any new, bombshell revelations, but it’s a neat, accessible summary of the scattered bits of information that have been uncovered since the start of the pandemic. In this review I’ll try to distill some of the most important information and discuss my own interpretation of it.
I enjoyed the book and recommend it to anyone interested in the topic. However, many of the authors’ points (especially on technical issues) have counterpoints from other scientists who lean more heavily towards the natural origins hypothesis. So I think it’s best to include the book as part of a “package-deal” recommendation, rather than presenting it as a perfectly objective source. The last section of this review will include some more recommended sources to check out, including writing from advocates of the natural origins hypothesis with counterpoints to claims made in the book. I’ll also link one here in case you don’t make it that far. In my view, the book actually deals with two separate topics. The first is the object-level question – where did COVID come from? The second is the meta-level question – what can we say about the ability and willingness of different institutions to answer the question of the pandemic’s origins? 
Jul 30, 2022 • 24min

Links For July '22

https://astralcodexten.substack.com/p/links-for-july-095 [Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.] 1: Rude compounds on Reddit (source, original). Thousands of cocksuckers, shitlords, and libtards, but far fewer cocktards, shitsuckers, and liblords. Also disappointingly few trumpgoblins:
Jul 29, 2022 • 29min

Highlights From The Comments On Criticism Of Criticism Of Criticism

https://astralcodexten.substack.com/p/highlights-from-the-comments-on-criticism 1: I said in the original post that I wrote this because I knew someone would write the opposite article (that organizations accept specific criticism in order to fend off paradigmatic criticism), and then later Zvi did write an article kind of like that. He writes: It is the dream of anyone who writes a post called Criticism of [a] Criticism Contest to then have a sort-of reply called Criticism of Criticism of Criticism. The only question now is, do I raise to 4? I [wrote my article the way I did] for several reasons, including (1) a shorter post would have taken a lot longer, (2) when I posted a Tweet instead a central response was 'why don't you say exactly what things are wrong here', (3) any one of them might be an error but if basically every sentence/paragraph is doing the reversal thing you should stop and notice it and generalize it (4) you talk later about how concrete examples are better, so I went for concrete examples, (5) they warn against 'punching down' and this is a safe way to do this while 'punch up' and not having to do infinite research, (6) when something is the next natural narrative beat that goes both ways, (7) things are next-beats for reasons and I do think it's fair that most Xs in EA's place that do this are 'faking it' in this sense, (8) somehow people haven't realized I'm a toon and I did it in large part because it was funny and had paradoxical implications, (9) I also wrote it out because I wanted to better understand exactly what I had unconsciously/automatically noticed. For 7, notice in particular that the psychiatrists are totally faking it here, they are clearly being almost entirely performative and you could cross out every reference to psychiatry and write another profession and you'd find the same talks at a different conference. 
If someone decided not to understand this and said things like 'what specific things here aren't criticizing [X]', you'd need to do a close reading of some kind until people saw it, or come up with another better option. Also note that you can (A) do the thing they're doing at the conference, (B) do the thing where you get into some holy war and start a fight or (C) you can actually question psychiatry in general (correctly or otherwise) but if you do that at the conference people will mostly look at you funny and find a way to ignore you.
Jul 27, 2022 • 11min

Forer Statements As Updates And Affirmations

https://astralcodexten.substack.com/p/forer-statements-as-updates-and-affirmations The Forer Effect is a trick used by astrologers, psychics, and social psychologists. Given a list of statements like these:

You have a great need for other people to like and admire you.
You have a tendency to be critical of yourself.
You have a great deal of unused capacity which you have not turned to your advantage.
While you have some personality weaknesses, you are generally able to compensate for them.
Your sexual adjustment has presented problems for you.
Disciplined and self-controlled outside, you tend to be worrisome and insecure inside.
At times you have serious doubts as to whether you have made the right decision or done the right thing.
You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations.
You pride yourself as an independent thinker and do not accept others' statements without satisfactory proof.
You have found it unwise to be too frank in revealing yourself to others.
At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved.
Some of your aspirations tend to be pretty unrealistic.
Security is one of your major goals in life.
