

Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.

Jul 30, 2023 • 20min
Highlights From The Comments On British Economic Decline
People are talking about British economic decline. Not just the decline from bestriding the world in the 19th century to today. A more recent, more profound decline, starting in the early 2000s, when it fell off the track of normal developed-economy growth. See for example this graph from We Are In An Unprecedented Era Of UK Relative Macroeconomic Decline: https://astralcodexten.substack.com/p/highlights-from-the-comments-on-british

Jul 25, 2023 • 28min
The Extinction Tournament
This month’s big news in forecasting: the Forecasting Research Institute has released the results of the Existential Risk Persuasion Tournament (XPT). XPT was supposed to use cutting-edge forecasting techniques to develop consensus estimates of the danger from various global risks like climate change, nuclear war, etc. The plan was: get domain experts (eg climatologists, nuclear policy experts) and superforecasters (people with a proven track record of making very good predictions) in the same room. Have them talk to each other. Use team-based competition with monetary prizes to incentivize accurate answers. Between the domain experts’ knowledge and the superforecasters’ prediction-making ability, they should be able to converge on good predictions. They didn’t. In most risk categories, the domain experts predicted higher chances of doom than the superforecasters. No amount of discussion could change minds on either side. https://astralcodexten.substack.com/p/the-extinction-tournament

Jul 20, 2023 • 18min
Contra The xAI Alignment Plan
Elon Musk has a new AI company, xAI. I appreciate that he seems very concerned about alignment. From his Twitter Spaces discussion: I think I have been banging the drum on AI safety now for a long time. If I could press pause on AI or advanced AI digital superintelligence, I would. It doesn’t seem like that is realistic . . . I could talk about this for a long time, it’s something that I’ve thought about for a really long time and actually was somewhat reluctant to do anything in this space because I am concerned about the immense power of a digital superintelligence. It’s something that, I think is maybe hard for us to even comprehend. He describes his alignment strategy in that discussion and a later followup: The premise is have the AI be maximally curious, maximally truth-seeking, I'm getting a little esoteric here, but I think from an AI safety standpoint, a maximally curious AI - one that's trying to understand the universe - I think is going to be pro-humanity from the standpoint that humanity is just much more interesting than not . . . Earth is vastly more interesting than Mars. . . that's like the best thing I can come up with from an AI safety standpoint. I think this is better than trying to explicitly program morality - if you try to program morality, you have to ask whose morality. And even if you're extremely good at how you program morality into AI, there's the morality inversion problem - Waluigi - if you program Luigi, you inherently get Waluigi. I would be concerned about the way OpenAI is programming AI - about this is good, and that's not good. https://astralcodexten.substack.com/p/contra-the-xai-alignment-plan

Jul 20, 2023 • 3h 13min
Your Book Review: The Educated Mind
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] “The promise of a new educational theory”, writes Kieran Egan, “has the magnetism of a newspaper headline like ‘Small Earthquake in Chile: Few Hurt’”. But — could a new kind of school make the world rational? I discovered the work of Kieran Egan in a dreary academic library. The book I happened to find — Getting it Wrong from the Beginning — was an evisceration of progressive schools. As I worked at one at the time, I got a kick out of this. To be sure, broadsides against progressivist education aren’t exactly hard to come by. But Egan’s account went to the root, deeper than any critique I had found. Better yet, as I read more, I discovered he was against traditionalist education, too — and that he had constructed a new paradigm that incorporated the best of both. https://astralcodexten.substack.com/p/your-book-review-the-educated-mind

Jul 17, 2023 • 20min
Contra The Social Model Of Disability
What is the Social Model Of Disability? I’ll let its proponents describe it in their own words (emphases and line breaks mine):

The Social Model Of Disability Explained (top Google result for the term): Individual limitations are not the cause of disability. Rather, it is society’s failure to provide appropriate services and adequately ensure that the needs of disabled people are taken into account in societal organization.

Disability rights group Scope: The model says that people are disabled by barriers in society, not by their impairment or difference.

The American Psychological Association: It is [the] environment that creates the handicaps and barriers, not the disability. From this perspective, the way to address disability is to change the environment and society, rather than people with disabilities.

Foundation For People With Learning Disabilities: The social model of disability proposes that what makes someone disabled is not their medical condition, but the attitudes and structures of society.

University of California, San Francisco: Disabilities are restrictions imposed by society. Impairments are the effects of any given condition. The solution, according to this model, lies not in fixing the person, but in changing our society. Medical care, for example, should not focus on cures or treatments in order to rid our bodies of functional impairments. Instead, this care should focus on enhancing our daily function in society.

The Social Model’s main competitor is the Interactionist Model Of Disability, which says that disability is caused by an interaction of disease and society, and that it can be addressed by either treating the underlying condition or by adding social accommodations. In contrast to the Interactionist Model, the Social Model insists that disability is only due to society and not disease, and that it may only be addressed through social changes and not medical treatments. . . .
This isn’t how the Social Model gets taught in real classrooms. Instead, it’s contrasted with “the Medical Model”, a sort of Washington Generals of disability models which nobody will admit to believing. The Medical Model is “disability is only caused by disease, society never contributes in any way, and nobody should ever accommodate it at all . . .” Then the people describing it add “. . . and also, it says disabled people should be stigmatized, and not treated as real humans, and denied basic rights”. Why does the first part imply the second? It doesn’t matter, because “the Medical Model” was invented as a bogeyman to force people to run screaming into the outstretched arms of the Social Model. https://astralcodexten.substack.com/p/contra-the-social-model-of-disability

Jul 14, 2023 • 7min
Why Match School And Student Rank?
Matt Yglesias’ five-year old son asks: why do we send the top students to the best colleges? Why not send the weakest students to the best colleges, since they need the most help? This is one of those questions that’s so naive it loops back and becomes interesting again. https://astralcodexten.substack.com/p/why-match-school-and-student-rank

Jul 13, 2023 • 33min
Your Book Review: Secret Government
Finalist #8 in the Book Review Contest [This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] There is widespread agreement among philosophers, political commentators, and the general public that transparency in government is an unalloyed good. Louis Brandeis famously articulates the common wisdom: “Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman” (page 1). Support for transparency is bipartisan. On his first day in office, Barack Obama said “My administration is committed to creating an unprecedented level of openness in Government.” (page 1). On the Republican National Committee’s website, one reads “Republicans believe that transparency is essential for good governance. Elected officials should be held accountable for their actions in Washington, D.C.” (page 2) And so it is. Legislators’ votes are published and stored in public online databases, their deliberations are televised, and their every action is extensively documented. https://astralcodexten.substack.com/p/your-book-review-secret-government

Jul 13, 2023 • 19min
Links For July 2023
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.] https://astralcodexten.substack.com/p/links-for-july-2023

Jul 13, 2023 • 25min
Tales Of Takeover In CCF-World
Machine Alignment Monday, 7/3/2023 Tom Davidson’s Compute-Centric Framework report forecasts a continuous but fast AI takeoff, where people hand control of big parts of the economy to millions of near-human-level AI assistants. I mentioned earlier that the CCF report comes out of Open Philanthropy’s school of futurism, which differs from the Yudkowsky school, where a superintelligent AI quickly takes over. Open Philanthropy is less explicitly apocalyptic than Yudkowsky, but they have concerns of their own about the future of humanity. I talked to some people involved with the CCF report about possible scenarios. Thanks especially to Daniel Kokotajlo of OpenAI for his contributions. https://astralcodexten.substack.com/p/tales-of-takeover-in-ccf-world

Jul 6, 2023 • 33min
Your Book Review: Safe Enough?
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked] The date is June 9, 1985. The place is the Davis-Besse nuclear plant near Toledo, Ohio. It is just after 1:35 am, and the plant has a small malfunction: "As the assistant supervisor entered the control room, he saw that one of the main feedwater pumps had tripped offline." But instead of stabilizing, one safety system after another failed to engage. https://astralcodexten.substack.com/p/your-book-review-safe-enough