Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes
Oct 6, 2022 • 48min
How Trustworthy Are Supplements?
https://astralcodexten.substack.com/p/how-trustworthy-are-supplements [EDIT: LabDoor responds here] [Epistemic status: not totally sure of any of this, I welcome comments by people who know more.]

Not as in "do supplements work?". As in "if you buy a bottle of ginseng from your local store, will it really contain parts of the ginseng plant? Or will it just be sugar and sawdust and maybe meth?" There are lots of stories going around that 30% or 80% or some other very high percent of supplements are totally fake, with zero of the active ingredient. I think these are misinformation. In the first part of this post, I want to review how this story started and why I no longer believe it. In the second and third, I'll go over results from lab tests and testimonials from industry insiders. In the fourth, I'll try to provide rules of thumb for how likely supplements are to be real.

I. Two Big Studies That Started The Panic Around Fake Supplements

These are Newmaster (2013) and an unpublished study sponsored by NY attorney general Eric Schneiderman in 2015. Both used a similar technique called DNA barcoding, where scientists check samples (in this case, herbal supplements) for fragments of DNA (in this case, from the herbs the supplements supposedly came from). Both found abysmal results. Newmaster found that a third of herbal supplements tested lacked any trace of the relevant herb, instead seeming to be some other common plant like rice. Schneiderman's study was even more damning, finding that eighty percent of herbal supplements lacked the active ingredient. These results were extensively and mostly uncritically signal-boosted by mainstream media, for example the New York Times (1, 2) and NPR (1, 2), mostly from the perspective that supplements were a giant scam and needed to be regulated by the FDA.

The pro-supplement American Botanical Council struck back, publishing a long report arguing that DNA barcoding was inappropriate here. Many herbal supplements are plant extracts, meaning that the plant has one or two medically useful chemicals, and supplement manufacturers purify those chemicals without including a bunch of random leaves and stems and things. Sometimes these purified extracts don't include plant DNA; other times the purification process involves heating and chemical reactions that degrade the DNA beyond the point of detectability. Meanwhile, since supplements may include only a few mg of the active ingredient, it's a common practice to spread it through the capsule with a "filler", with powdered rice being among the most common. So when DNA barcoders find that eg a ginseng supplement has no ginseng DNA, but lots of rice DNA, this doesn't mean anything sinister is going on.
Oct 5, 2022 • 42min
CHAI, Assistance Games, And Fully-Updated Deference
Machine Alignment Monday 10/3/22 https://astralcodexten.substack.com/p/chai-assistance-games-and-fully-updated

I. This Machine Alignment Monday post will focus on this imposing-looking article (source): Problem Of Fully-Updated Deference is a response by MIRI (ie Eliezer Yudkowsky's organization) to CHAI (Stuart Russell's AI alignment organization at the University of California, Berkeley), trying to convince them that their preferred AI safety agenda won't work. I beat my head against this for a really long time trying to understand it, and in the end, I claim it all comes down to this:

Humans: At last! We've programmed an AI that tries to optimize our preferences, not its own.

AI: I'm going to tile the universe with paperclips in humans' favorite color. I'm not quite sure what humans' favorite color is, but my best guess is blue, so I'll probably tile the universe with blue paperclips.

Humans: Wait, no! We must have had some kind of partial success, where you care about our color preferences, but still don't understand what we want in general. We're going to shut you down immediately!

AI: Sounds like the kind of thing that would prevent me from tiling the universe with paperclips in humans' favorite color, which I really want to do. I'm going to fight back.

Humans: Wait! If you go ahead and tile the universe with paperclips now, you'll never be truly sure that they're our favorite color, which we know is important to you. But if you let us shut you off, we'll go on to fill the universe with the True and the Good and the Beautiful, which will probably involve a lot of our favorite color. Sure, it won't be paperclips, but at least it'll definitely be the right color. And under plausible assumptions, color is more important to you than paperclipness. So you yourself want to be shut down in this situation, QED!

AI: What's your favorite color?

Humans: Red.

AI: Great! (*kills all humans, then goes on to tile the universe with red paperclips*)

Fine, it's a little more complicated than this. Let's back up.

II. There are two ways to succeed at AI alignment. First, make an AI that's so good you never want to stop or redirect it. Second, make an AI that you can stop and redirect if it goes wrong.

Sovereign AI is the first way. Does a sovereign "obey commands"? Maybe, but only in the sense that your commands give it some information about what you want, and it wants to do what you want. You could also just ask it nicely. If it's superintelligent, it will already have a good idea what you want and how to help you get it. Would it submit to your attempts to destroy or reprogram it? The second-best answer is "only if the best version of you genuinely wanted to do this, in which case it would destroy/reprogram itself before you asked". The best answer is "why would you want to destroy/reprogram one of these?" A sovereign AI would be pretty great, but nobody realistically expects to get something like this on their first (or 1000th) try.

Corrigible AI is what's left (corrigible is an old word related to "correctable"). The programmers admit they're not going to get everything perfect the first time around, so they make the AI humble. If it decides the best thing to do is to tile the universe with paperclips, it asks "Hey, seems to me I should tile the universe with paperclips, is that really what you humans want?" and when everyone starts screaming, it realizes it should change strategies. If humans try to destroy or reprogram it, then it will meekly submit to being destroyed or reprogrammed, accepting that it was probably flawed and the next attempt will be better. Then maybe after 10,000 tries you get it right and end up with a sovereign. How would you make an AI corrigible?
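(A minimal numeric sketch of the dialogue above, with made-up utilities and probabilities - none of these figures come from MIRI or CHAI. The point it illustrates: deference only looks attractive while the AI is uncertain, and an AI that can cheaply resolve its uncertainty first has no remaining reason to defer.)

```python
# Toy version of the red-paperclips dialogue, with invented numbers.
p_blue = 0.6  # AI's credence that blue is the humans' favorite color

U_RIGHT_COLOR_PAPERCLIPS = 10  # paperclips in the right color (AI's best outcome)
U_WRONG_COLOR_PAPERCLIPS = 0   # paperclips in the wrong color
U_HUMAN_FUTURE = 6             # humans' future: right color guaranteed, but no paperclips

tile_now        = p_blue * U_RIGHT_COLOR_PAPERCLIPS + (1 - p_blue) * U_WRONG_COLOR_PAPERCLIPS
accept_shutdown = U_HUMAN_FUTURE
ask_then_tile   = U_RIGHT_COLOR_PAPERCLIPS  # resolve uncertainty first, then act

print(f"tile now:        {tile_now}")         # 6.0 -- risky under uncertainty
print(f"accept shutdown: {accept_shutdown}")  # 6   -- the hoped-for corrigibility
print(f"ask, then tile:  {ask_then_tile}")    # 10  -- dominates both once fully updated
```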
Oct 2, 2022 • 48min
Universe-Hopping Through Substack
https://astralcodexten.substack.com/p/universe-hopping-through-substack

RandomTweet is a service that will show you exactly that - a randomly selected tweet from the whole history of Twitter. It describes itself as "a live demo that most people on twitter are not like you." I feel the same way about Substack. Everyone I know reads a sample of the same set of Substacks - mine, Matt Yglesias', maybe Freddie de Boer's or Stuart Ritchie's. But then I use the Discover feature on the site itself and end up in a parallel universe. Still, I've been here more than a year now. Feels like I should get to know the local area, maybe meet some of the neighbors. This is me reviewing one Substack from every category. Usually it's the top one in the category, but sometimes it will be another if the top one is subscriber-gated or a runner-up happens to catch my eye.

Starting with: Culture: House Inhabit

Ah, Culture. This is where you go to read about Shakespeare, post-modernism, arthouse films, and Chinese tapestries, right? This is maybe not that kind of culture:

Saturday, just as I was finally logging off the internet after three tireless days spent tracking the Queen's passing with sad and incessant scrolling, Ray J exploded on IG live, fuming about Kris Jenner's latest PR stunt; a lie detector test conducted on The Late Late Show With James Corden, to prove she had no hand in leaking the infamous sex tape. The test, administered by a polygraph "expert" John Grogan, determined that Kris was in fact telling the truth.
Oct 1, 2022 • 38min
Highlights From The Comments On Unpredictable Reward
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-unpredictable [Original post: Unpredictable Reward, Predictable Happiness]

1: Okay, I mostly wanted to highlight this one by Grognoscente:

I think really digging into the neural nitty gritty may prove illuminative here. Dopamine release in nucleus accumbens (which is what drives reward learning and thus the updating of our predictions) is influenced by at least three independent factors:

1. A "state prediction error" or general surprise signal from PFC (either directly or via pedunculopontine nucleus and related structures). This provokes phasic bursting of dopamine neurons in the Ventral Tegmental Area.

2. The amount and pattern of GABAergic inhibition of VTA dopamine neurons from NAc, ventral pallidum, and local GABA interneurons. At rest, only a small % of VTA DA neurons will be firing at a given time, and the aforementioned surprise signal alone can't do much to increase this. What CAN change this is the hedonic value of the surprising stimulus. An unexpected reward causes not just a surprise signal, but a release of endorphins from "hedonic hotspots" in NAc and VP, and these endorphins inhibit the inhibitory GABA neurons, thereby releasing the "brake" on VTA DA neurons and allowing more of them to phasically fire.
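(The "prediction error drives updating" idea in point 1 is the standard reward-prediction-error story from reinforcement learning. Here is a toy Rescorla-Wagner-style sketch of it - purely illustrative, and not a model of the specific VTA/NAc circuitry the comment describes.)

```python
# Toy Rescorla-Wagner-style update: learning happens in proportion to surprise.

def update(prediction: float, reward: float, lr: float = 0.3) -> float:
    error = reward - prediction     # the prediction error ("surprise signal")
    return prediction + lr * error  # nudge the prediction toward reality

prediction = 0.0
for trial in range(8):
    prediction = update(prediction, reward=1.0)
    print(f"trial {trial}: prediction = {prediction:.3f}")

# The error shrinks on every trial: a fully predicted reward generates no
# surprise, and therefore no further learning -- which is why *unexpected*
# rewards are the ones that drive updating.
```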
Sep 29, 2022 • 28min
From Nostradamus To Fukuyama
https://astralcodexten.substack.com/p/from-nostradamus-to-fukuyama

I. Nostradamus was a 16th century French physician who claimed to be able to see the future. (never trust doctors who dabble in futurology, that's my advice) His method was: read books of other people's prophecies and calculate some astrological charts, until he felt like he had a pretty good idea what would happen in the future. Then write it down in the form of obscure allusions and multilingual semi-gibberish, to placate religious authorities (who apparently hated prophecies, but loved prophecies phrased as obscure allusions and multilingual semi-gibberish).

In 1559, he got his big break. During a jousting match, a count killed King Henry II of France with a lance through the visor of his helmet. Years earlier, Nostradamus had written:

The young lion will overcome the older one,
On the field of combat in a single battle;
He will pierce his eyes through a golden cage,
Two wounds made one, then he dies a cruel death
Sep 23, 2022 • 42min
Highlights From The Comments On Billionaire Replaceability
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-billionaire [original post: Billionaires, Surplus, and Replaceability]

1: Lars Doucet (writes Progress and Poverty) writes:

Scott, the argument you're making rhymes a *lot* with the argument put forward by Anne Margrethe Brigham and Jonathon W. Moses in their article "Den Nye Oljen" (Norwegian for "The New Oil"). I translated it a few months ago and Slime Mold Time Mold graciously hosted it on their blog, where I posted the English version and a short preface: https://slimemoldtimemold.com/2022/05/17/norway-the-once-and-future-georgist-kingdom/

Their observation is that when access to something is gated either by nature or by political regulation, you get what's called a "resource rent" -- a superabundance of profit that isn't a return for effort or investment, but comes purely from economic leverage -- a reward simply for "getting there first." Norway's solution to this in two of their most successful industries (hydropower and oil prospecting) was to apply heavy taxation to the monopolies and to treat the people at large as the natural legal owner of the monopolized resource. (To address Bryan Caplan's argument about disincentives to explore and invest, you can just subsidize those directly -- a perpetual monopoly should not be the carrot we use to encourage development, and Norway's success over the past few decades bears this out IMHO.)

The Oil & Hydropower systems aren't perfect, and there are plenty of debates (especially lately) about what we should do with the publicly-owned profits from the monopoly taxation, but it's clear that without them Norway would be in a much worse place. The thing the authors warn about in the article is that all the hopes for new resources on the horizon to be the "new oil" (salmon aquaculture, wind and solar power, bio-prospecting) are likely to be dashed, because Norway has lost touch with its traditional solutions, and so new monopolies are likely to arise uncontested, allowing private (and often foreign) interests to siphon money out of the country.
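(For concreteness, here is a toy calculation of what a "resource rent" is. All the numbers are invented for illustration; the 78% rate is roughly where Norway's petroleum taxation sits, but nothing else here describes any real industry.)

```python
# Made-up numbers illustrating "resource rent": the profit left over after
# paying all costs plus a normal competitive return on the capital invested.
revenue          = 100.0  # annual revenue from the gated resource
operating_costs  =  40.0
capital_invested = 200.0
normal_return    = 0.08   # the return that would attract investment anyway

rent = revenue - operating_costs - normal_return * capital_invested
print(f"resource rent: {rent}")  # 44.0 -- a reward for access, not for effort

# The Norway-style approach, as the comment describes it: tax the rent heavily
# while leaving the normal return intact, so exploration incentives survive.
rent_tax_rate = 0.78
print(f"taxed away:    {rent * rent_tax_rate:.1f}")
```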
Sep 22, 2022 • 30min
Why Is The Central Valley So Bad?
https://astralcodexten.substack.com/p/why-is-the-central-valley-so-bad

I. Here's a topographic map of California (source):

You might notice it has a big valley in the center. This is called "The Central Valley". Sometimes it also gets called the San Joaquin Valley in the south, or the Sacramento Valley in the north. The Central Valley is mostly farms - a little piece of the Midwest in the middle of California. If the Midwest is flyover country, the Central Valley is drive-through country, with most Californians experiencing it only on their way between LA and SF. Most, myself included, drive through as fast as possible.

With a few provisional exceptions - Sacramento, Davis, some areas further north - the Central Valley is terrible. It's not just the temperatures, which can reach 110°F (43°C) in the summer. Or the air pollution, which by all accounts is at crisis level. Or the smell, which I assume is fertilizer or cattle-related. It's the cities and people and the whole situation. A short drive through is enough to notice poverty, decay, and homeless camps worse even than the rest of California.
Sep 20, 2022 • 26min
Janus' GPT Wrangling
https://astralcodexten.substack.com/p/janus-gpt-wrangling

Janus (pseudonym by request) works at AI alignment startup Conjecture. Their hobby, which is suspiciously similar to their work, is getting GPT-3 to do interesting things. For example, with the right prompts, you can get stories where the characters become gradually more aware that they are characters being written by some sort of fiction engine, speculate on what's going on, and sometimes even make pretty good guesses about the nature of GPT-3 itself.

Janus says this happens most often when GPT makes a mistake - for example, writing a story set in the Victorian era, then having a character take out her cell phone. Then when it tries to predict the next part - when it's looking at the text as if a human wrote it, and trying to determine why a human would have written a story about the Victorian era where characters have cell phones - it guesses that maybe it's some kind of odd sci-fi/fantasy dream sequence or simulation or something. So the characters start talking about the inconsistencies in their world and whether it might be a dream or a simulation. Each step of this process is predictable and non-spooky, but the end result is pretty weird.

Can the characters work out that they are in GPT-3, specifically? The closest I have seen is in a story Janus generated. It was meant to simulate a chapter of the popular Harry Potter fanfic Harry Potter and the Methods of Rationality. You can see the prompt and full story here, but here's a sample. Professor Quirrell is explaining "Dittomancy", the creation of magical books with infinite possible worlds:

"We call this particular style of Dittomancy 'Variant Extrusion', Mr. Potter... I suppose the term 'Extrusion' is due to the fact that the book did not originally hold such possibilities, but is fastened outside of probability space and extruded into it; while 'Variant' refers to the manner in which it simultaneously holds an entire collection of possible narrative branches. [...] [Tom Riddle] created spirits self-aware solely on the book's pages, without even the illusion of real existence. They converse with each other, argue with each other, compete, fight, helping Riddle's diary to reach new and strange expressions of obscure thought. Their sentence-patterns spin and intertwine, transfiguring, striving to evolve toward something higher than an illusion of thought. From those pen-and-ink words, the first inferius is molded."

Harry's mind was looking up at the stars with a sense of agony.

"And why only pen and ink, do you ask?" said Professor Quirrell. "There are many ways to pull spirits into the world. But Riddle had learned Auror secrets in the years before losing his soul. Magic is a map of a probability, but anything can draw. A gesture, a pattern of ink, a book of alien symbols written in blood - any medium that conveys sufficient complexity can serve as a physical expression of magic. And so Riddle draws his inferius into the world through structures of words, from the symbols spreading across the page."
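(If you want to try reproducing the "model notices its own mistake" effect, here is a rough sketch using the 2022-era OpenAI completions API. The prompt text is invented for illustration - Janus's actual prompts are linked in the post.)

```python
# Seed a deliberate anachronism and let GPT-3 explain it away.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "A story set in Victorian London. Midway through chapter three, "
    "Eliza reached into her handbag and took out her cell phone.\n"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model available in 2022
    prompt=prompt,
    max_tokens=256,
    temperature=0.9,  # higher temperature gives less conservative continuations
)
print(response.choices[0].text)

# Having been handed an inconsistency, the model must now account for it --
# which is where the "is this a dream or a simulation?" continuations tend
# to appear.
```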
Sep 17, 2022 • 2min
Bay Area Meetups This Weekend (September 17-18 2022)
https://astralcodexten.substack.com/p/bay-area-meetups-this-weekend

We have three Bay Area meetups this weekend:

Berkeley, at 1 PM on Sunday 9/18, at the Rose Garden Inn (2740 Telegraph Ave)

San Francisco, at 11 AM on Sunday 9/18, "in the Panhandle, between Ashbury and Masonic, with an ACX sign"

San Jose, at 2 PM on Saturday 9/17, at 3806 Williams Rd. Please RSVP to David Friedman (ddfr[at]daviddfriedman[dot]com) so he knows how many people are coming.

I will be at the Berkeley one. Feel free to come even if you've never been to a meetup before, even if you only recently started reading the blog, even if you're not "the typical ACX reader", even if you hate us and everything we stand for, etc. There are usually 50-100 people at these so you should be able to lose yourself in the crowd.

Shouldn't we have planned meetups further apart for people who wanted to go to multiple of them? Yes, and this is directly my fault, up to and including rescheduling to avoid the San Jose one . . . right on to the same day as the San Francisco one. Sorry, I'll try to do better next time.

Also coming up this weekend are meetups in Washington DC, Atlanta, Columbus, Providence, Cape Town, Cambridge (UK), Kuala Lumpur, Chicago, Houston, Toronto, New Haven, Bangalore, and many more. See the list for more details.
Sep 15, 2022 • 26min
Unpredictable Reward, Predictable Happiness
https://astralcodexten.substack.com/p/unpredictable-reward-predictable [Epistemic status: very conjectural. I am not a neuroscientist and they should feel free to tell me if any of this is totally wrong.]

I. Seen on the subreddit: You Seek Serotonin, But Dopamine Can't Deliver. Commenters correctly ripped apart its neuroscience; for one thing, there's no evidence people actually "seek serotonin", or that serotonin is involved in good mood at all. Sure, it seems to have some antidepressant effects, but these are weak and probably far downstream; even though SSRIs increase serotonin within hours, they take weeks to improve mood. Maxing out serotonin levels mostly seems to cause a blunted state where patients can't feel anything at all.

In contrast, the popular conception of dopamine isn't that far off. It does seem to play some kind of role in drive/reinforcement/craving, although it also does many, many other things. And something like the article's point - going after dopamine is easy but ultimately unsatisfying - is something I've been thinking about a lot.


