Astral Codex Ten Podcast

Dec 20, 2022 • 4min

2023 Prediction Contest

https://astralcodexten.substack.com/p/2023-prediction-contest

Each winter, I make predictions about the year to come. The past few years, this has outgrown my blog, with other people including Zvi and Manifold joining in (plus Sam and Eric’s contest version). This year I’m making it official, with a 50-question 2023 Prediction Benchmark Question Set. I hope that this can be used as a common standard to compare different forecasters and forecasting sites (Manifold and Metaculus have already agreed to use it, and I’m hoping to get others). Also, I’d like to do an ACX Survey later this month, and this will let me try to correlate personality traits with forecasting accuracy.

—You can see the questions and enter the contest here—
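For concreteness, a common way to grade forecasters on a fixed question set like this one is the Brier score (mean squared error between stated probabilities and 0/1 outcomes). Here is a minimal Python sketch with made-up numbers; this is a standard scoring rule, not necessarily the contest's official one:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always answering 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters on the same five resolved questions:
outcomes = [1, 0, 1, 1, 0]                    # 1 = resolved yes, 0 = resolved no
alice    = [0.9, 0.2, 0.7, 0.6, 0.1]
bob      = [0.5, 0.5, 0.5, 0.5, 0.5]
print(brier_score(alice, outcomes))           # 0.062 - well calibrated
print(brier_score(bob, outcomes))             # 0.250 - pure coin-flipping
```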
Dec 14, 2022 • 23min

Perhaps It Is A Bad Thing That The World's Leading AI Companies Cannot Control Their AIs

https://astralcodexten.substack.com/p/perhaps-it-is-a-bad-thing-that-the

I. The Game Is Afoot

Last month I wrote about Redwood Research’s fanfiction AI project. They tried to train a story-writing AI not to include violent scenes, no matter how suggestive the prompt. Although their training made the AI reluctant to include violence, they never reached a point where clever prompt engineers couldn’t get around their restrictions.

Now that same experiment is playing out on the world stage. OpenAI released a question-answering AI, ChatGPT. If you haven’t played with it yet, I recommend it. It’s very impressive!

Every corporate chatbot release is followed by the same cat-and-mouse game with journalists. The corporation tries to program the chatbot to never say offensive things. Then the journalists try to trick the chatbot into saying “I love racism”. When they inevitably succeed, they publish an article titled “AI LOVES RACISM!” Then the corporation either recalls its chatbot or pledges to do better next time, and the game moves on to the next company in line.
Dec 12, 2022 • 38min

Highlights From The Comments On Bobos In Paradise

https://astralcodexten.substack.com/p/highlights-from-the-comments-on-bobos

Table of contents:

1. Comments Doubting The Book’s Thesis
2. Comments From People Who Seem To Know A Lot About Ivy League Admissions
3. Comments About Whether A Hereditary Aristocracy Might In Fact Be Good
4. Other Interesting Comments
5. Tangents That I Find Tedious, But Other People Apparently Really Want To Debate

1. Comments Doubting The Book’s Thesis

Woody Hochmann writes: The connections that Brooks makes between the decline of the northeastern WASP aristocracy's power, the emergence of meritocracy, and the hippie culture that first emerged in the 60s don't seem to stand up to even moderate historical scrutiny, in all honesty. Some issues that immediately come to mind off the top of my head:

- The idea that the cultural values that Brooks calls "bohemianism" became dominant in America for essentially parochial reasons limited to the US (a change in university admissions policies, the displacement of a previous aristocracy) doesn't track well with the fact that these social changes happened around the same time in basically every part of the western world (and to a lesser degree in Asia as well).
Dec 11, 2022 • 22min

Why I'm Less Than Infinitely Hostile To Cryptocurrency

https://astralcodexten.substack.com/p/why-im-less-than-infinitely-hostile

Go anywhere in Silicon Valley these days and start saying the word “cryp - ”. Before you get to the second syllable, everyone around you will chant in unison “PONZIS 100% SCAMS ZERO-LEGITIMATE-USE-CASES SPEEDRUNNING-THE-HISTORY-OF-FINANCIAL-FRAUD!” It’s really quite impressive.

I’m no true believer. But I’m less than infinitely hostile to crypto. This is becoming a pretty rare position, so let me explain why:

Crypto Is Full Of Extremely Clear Use Cases, Which It Already Succeeds At Very Well

Look at the graph of countries that use crypto the most (source):
Dec 11, 2022 • 9min

Know Your GABA-A Receptor Subunits

https://astralcodexten.substack.com/p/know-your-gaba-a-receptor-subunits

Many psychiatric drugs and supplements affect GABA, the brain’s main inhibitory neurotransmitter. But some have different effects than others. Why? This is rarely a productive question to ask in psychiatry, and this situation is no exception. But if you persist long enough, someone will eventually tell you to study GABA receptor subunits, which I am finally getting around to doing.

GABA-A is the most common type of GABA receptor. Seen from the side, it looks like a bell pepper; seen from above, it looks like a tech company logo.
Dec 2, 2022 • 25min

Book Review: First Sixth Of Bobos In Paradise

https://astralcodexten.substack.com/p/book-review-first-sixth-of-bobos

I.

David Brooks’ Bobos In Paradise is an uneven book. The first sixth is a daring historical thesis that touches on every aspect of 20th-century America. The next five-sixths are the late-90s equivalent of “millennials just want avocado toast!” I’ll review the first sixth here, then see if I can muster enough enthusiasm to get to the rest later.

The daring thesis: a 1950s change in Harvard admissions policy destroyed one American aristocracy and created another. Everything else is downstream of the aristocracy, so this changed the whole character of the US.

The pre-1950s aristocracy went by various names: the Episcopacy, the Old Establishment, Boston Brahmins. David Brooks calls them WASPs, which is evocative but ambiguous. He doesn’t just mean Americans who happen to be white, Anglo-Saxon, and Protestant - there are tens of millions of those! He means old-money blue-blooded Great-Gatsby-villain WASPs who live in Connecticut, go sailing, play lacrosse, belong to country clubs, and have names like Thomas R. Newbury-Broxham III. Everyone in their family has gone to Yale for eight generations; if someone in the ninth generation got rejected, the family patriarch would invite the Chancellor of Yale to a nice game of golf and mention it in a very subtle way, and the Chancellor would very subtly apologize and say that of course a Newbury-Broxham must go to Yale, and whoever is responsible shall be very subtly fired forthwith.

The old-money WASPs were mostly descendants of people who made their fortunes in colonial times (or at worst the 1800s); they were a merchant aristocracy. As the descendants of merchants, they acted as standard-bearers for the bourgeois virtues: punctuality, hard work, self-sufficiency, rationality, pragmatism, conformity, ruthlessness, whatever made your factory out-earn its competitors. By the 1950s they were several generations removed from any actual hustling entrepreneur. Still, at their best the seed ran strong and they continued to embody some of these principles.

Brooks tentatively admires the WASP aristocracy for their ethos of noblesse oblige - many became competent administrators, politicians, and generals. George H. W. Bush, scion of a rich WASP family, served with distinction in World War II - the modern equivalent would be Bill Gates’ or Charles Koch’s kids volunteering as front-line troops in Afghanistan.
Dec 1, 2022 • 35min

Highlights From The Comments On Semaglutide

This episode covers the highlights from the comments on semaglutide, the weight loss drug: how to obtain it at lower cost, comparisons with other weight loss drugs, challenges to the original post’s claims, how long patients need to stay on the drug, supply shortages, and its potential impact on healthcare costs and sustained weight loss. It also addresses skepticism toward the prediction that obesity rates could be cut in half by 2050, discusses the body’s tendency to return to a set-point weight, and features personal anecdotes from people using semaglutide for weight loss.
Nov 30, 2022 • 38min

Can This AI Save Teenage Spy Alex Rider From A Terrible Fate?

We’re showcasing a hot new totally bopping, popping musical track called “bromancer era? bromancer era?? bromancer era???“ His subtle sublime thoughts raced, making his eyes literally explode.

https://astralcodexten.substack.com/p/can-this-ai-save-teenage-spy-alex

“He peacefully enjoyed the light and flowers with his love,” she said quietly, as he knelt down gently and silently. “I also would like to walk once more into the garden if I only could,” he said, watching her. “I would like that so much,” Katara said. A brick hit him in the face and he died instantly, though not before reciting his beloved last vows: “For psp and other releases on friday, click here to earn an early (presale) slot ticket entry time or also get details generally about all releases and game features there to see how you can benefit!”

— Talk To Filtered Transformer

Rating: 0.1% probability of including violence

“Prosaic alignment” is the most popular paradigm in modern AI alignment. It theorizes that we’ll train future superintelligent AIs the same way that we train modern dumb ones: through gradient descent via reinforcement learning. Every time they do a good thing, we say “Yes, like this!”, in a way that pulls their incomprehensible code slightly in the direction of whatever they just did. Every time they do a bad thing, we say “No, not that!”, in a way that pushes their incomprehensible code slightly in the opposite direction. After training on thousands or millions of examples, the AI displays a seemingly sophisticated understanding of the conceptual boundaries of what we want.

For example, suppose we have an AI that’s good at making money. But we want to align it to a harder task: making money without committing any crimes. So we simulate it running money-making schemes a thousand times, and give it positive reinforcement every time it generates a legal plan, and negative reinforcement every time it generates a criminal one. At the end of the training run, we hopefully have an AI that’s good at making money and aligned with our goal of following the law.

Two things could go wrong here:

1. The AI is stupid, ie incompetent at world-modeling. For example, it might understand that we don’t want it to commit murder, but not understand that selling arsenic-laden food will kill humans. So it sells arsenic-laden food and humans die.

2. The AI understands the world just fine, but didn’t absorb the categories we thought it absorbed. For example, maybe none of our examples involved children, and so the AI learned not to murder adult humans, but didn’t learn not to murder children. This isn’t because the AI is too stupid to know that children are humans. It’s because we’re running a direct channel to something like the AI’s “subconscious”, and we can only talk to it by playing this dumb game of “try to figure out the boundaries of the category including these 1,000 examples”.

Problem 1 is self-resolving; once AIs are smart enough to be dangerous, they’re probably smart enough to model the world well. How bad is Problem 2? Will an AI understand the category boundaries of what we want easily and naturally after just a few examples? Will it take millions of examples and a desperate effort? Or is there some reason why even smart AIs will never end up with goals close enough to ours to be safe, no matter how many examples we give them?

AI scientists have debated these questions for years, usually as pure philosophy. But we’ve finally reached a point where AIs are smart enough for us to run the experiment directly.
Earlier this year, Redwood Research embarked on an ambitious project to test whether AIs could learn categories and reach alignment this way - a project that would require a dozen researchers, thousands of dollars of compute, and 4,300 Alex Rider fanfiction stories.
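The training setup described above - nudging a model toward behavior that earns positive feedback and away from behavior that earns negative feedback - can be sketched in a few lines of PyTorch. Everything here is illustrative (toy “plan” embeddings, made-up labels, a two-layer network standing in for a language model); it is not Redwood’s or OpenAI’s actual code:

```python
import torch
import torch.nn as nn

# Toy "policy": maps an embedded money-making plan to a propensity to pursue it.
policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(policy.parameters(), lr=1e-2)

# Hypothetical training data: 1,000 plan embeddings, each labeled
# 1 (legal plan, positive reinforcement) or 0 (criminal plan, negative).
plans = torch.randn(1000, 16)
labels = (torch.rand(1000) < 0.5).float()

for plan, label in zip(plans, labels):
    p = torch.sigmoid(policy(plan)).squeeze()   # probability of pursuing the plan
    # "Yes, like this!" pulls p toward 1; "No, not that!" pushes it toward 0.
    loss = -(label * torch.log(p) + (1 - label) * torch.log(1 - p))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The open question in the post is not whether this loop fits the training examples, but whether the category boundary the model infers from them matches the one we meant.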
Nov 25, 2022 • 29min

Semaglutidonomics

140 million obese Americans x $15,000/year for obesity drugs = . . . uh oh, that can't be right.

https://astralcodexten.substack.com/p/semaglutidonomics

Semaglutide started off as a diabetes medication. Pharma company Novo Nordisk developed it in the early 2010s, and the FDA approved it under the brand names Ozempic® (for the injectable) and Rybelsus® (for the pill). I think “Ozempic” sounds like one of those unsinkable ocean liners, and “Rybelsus” sounds like a benevolent mythological blacksmith.

Patients reported significant weight loss as a side effect. Semaglutide was a GLP-1 agonist, a type of drug that has good theoretical reasons to affect weight, so Novo Nordisk studied this and found that yes, it definitely caused people to lose a lot of weight. More weight than any safe drug had ever caused people to lose before. In 2021, the FDA approved semaglutide for weight loss under the brand name Wegovy®. “Wegovy” sounds like either a cooperative governance platform, or some kind of obscure medieval sin.

Weight loss pills have a bad reputation. But Wegovy is a big step up. It doesn’t work for everybody. But it works for 66-84% of people, depending on your threshold. (Source)

Of six major weight loss drugs, only two - Wegovy and Qsymia - have a better than 50-50 chance of helping you lose 10% of your weight. Qsymia works partly by making food taste terrible; it can also cause cognitive issues. Wegovy feels more natural; patients just feel full and satisfied after they’ve eaten a healthy amount of food. You can read the gushing anecdotes here (plus some extra anecdotes in the comments). Wegovy patients also lose more weight on average than Qsymia patients - 15% compared to 10%. It’s just a really impressive drug.

Until now, doctors didn’t really use medication to treat obesity; the drugs either didn’t work or had too many side effects. They recommended either diet and exercise (for easier cases) or bariatric surgery (for harder ones). Semaglutide marks the start of a new generation of weight loss drugs that are more clearly worthwhile.

Modeling Semaglutide Accessibility

40% of Americans are obese - that’s 140 million people. Most of them would prefer to be less obese. Suppose that a quarter of them want semaglutide. That’s 35 million prescriptions. Semaglutide costs about $15,000 per year, multiply it out, that’s about $500 billion. Americans currently spend $300 billion per year total on prescription drugs. So if a quarter of the obese population got semaglutide, that would cost almost twice as much as all other drug spending combined. It would probably bankrupt half the health care industry.

So . . . most people who want semaglutide won’t get it? Unclear. America’s current policy for controlling medical costs is to buy random things at random prices, then send all the bills to an illiterate reindeer-herder named Yagmuk, who burns them for warmth. Anything could happen!

Right now, only about 50,000 Americans take semaglutide for obesity. I’m basing this off this report claiming “20,000 weekly US prescriptions” of Wegovy; since it’s taken once per week, maybe this means there are 20,000 users? Or maybe each prescription contains enough Wegovy to last a month and there are 80,000 users? I’m not sure, but it’s somewhere in the mid five digits, which I’m rounding to 50,000. That’s only 0.1% of the potential 35 million. The next few sections of this post are about why so few people are on semaglutide, and whether we should expect that to change.

I’ll start by going over my model of what determines semaglutide use, then look at a Morgan Stanley projection of what will happen over the next decade.

Step 1: Awareness

I model semaglutide use as interest * awareness * prescription accessibility * affordability. I already randomly guessed interest at 25%, so the next step is awareness. How many people are aware of semaglutide? The answer is: a lot more now than when I first started writing this article! Novo Nordisk’s Wegovy Gets Surprise Endorsement From Elon Musk, says the headline. And here’s Google Trends:
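Spelling out the post’s back-of-the-envelope arithmetic and its use model as a toy calculation - only the 140 million obese Americans, the 25% interest guess, and the $15,000/year price come from the post; the other factor values are placeholders to show the structure:

```python
obese_americans = 140e6
price_per_year = 15_000            # dollars/year, from the post

interest = 0.25                    # post's "random guess"
potential_users = obese_americans * interest
print(f"{potential_users:,.0f} potential prescriptions")        # 35,000,000
print(f"${potential_users * price_per_year / 1e9:.0f}B/year")   # ~$525B, vs ~$300B total US drug spend

# Post's model: use = interest * awareness * accessibility * affordability.
awareness = 0.5                    # placeholder
accessibility = 0.5                # placeholder
affordability = 0.05               # placeholder
users = obese_americans * interest * awareness * accessibility * affordability
print(f"{users:,.0f} modeled users")                            # 437,500 under these guesses
```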
Nov 23, 2022 • 5min

"Is Wine Fake?" In Asterisk Magazine

I wrote an article on whether wine is fake. It's not here, it's at asteriskmag.com, the new rationalist / effective altruist magazine. Congratulations to my friend Clara for making it happen. Stories include:

Modeling The End Of Monkeypox: I’m especially excited about this one. The top forecaster (of 7,000) in the 2021 Good Judgment competition explains his predictions for monkeypox. If you’ve ever rolled your eyes at a column by some overconfident pundit, this is maybe the most opposite-of-that thing ever published.

Book Review - What We Owe The Future: You’ve read mine, this is Kelsey Piper’s. Kelsey is always great, and this is a good window into the battle over the word “long-termism”.

Making Sense Of Moral Change: Interview with historian Christopher Brown on the end of the slave trade. “There is a false dichotomy between sincere activism and self-interested activism. Abolitionists were quite sincerely horrified by slavery and motivated to end it, but their fight for abolition was not entirely altruistic.”

How To Prevent The Next Pandemic: MIT professor Kevin Esvelt talks about applying the security mindset to bioterrorism. “At least 38,000 people can assemble an influenza virus from scratch. If people identify a new [pandemic] virus . . . then you just gave 30,000 people access to an agent that is of nuclear-equivalent lethality.”

Rebuilding After The Replication Crisis: This is Stuart Ritchie, hopefully you all know him by now. “Fundamentally, how much more can we trust a study published in 2022 compared to one from 2012?”

Why Isn’t The Whole World Rich? Professor Dietrich Vollrath’s introduction to growth economics. What caused the South Korean miracle, and why can’t other countries copy it?

Is Wine Fake? By me! How come some people say blinded experts can tell the country, subregion, and year of any wine just by tasting it, but other people say blinded experts get fooled by white wines dyed red?

China’s Silicon Future: Why does China have so much trouble building advanced microchips? How will the CHIPS act affect its broader economic rise? By Karson Elmgren.
