Astral Codex Ten Podcast

Jeremiah
Apr 12, 2019 • 4min

Pain as Active Ingredient in Dating

Reciprocity is a simple dating site, created by some friends of mine. You sign up and see a list of all your Facebook friends who also signed up. You can put a checkmark next to their name to indicate you want to date them (they can't see this). If you both checkmark each other, then the site reveals you've matched.

This seemed like an obvious great idea. But I started to hear a lot of stories like the following: "I checkmarked Alice's name on Reciprocity, and the system didn't notify me that there was a match, so I assumed Alice didn't like me. Later I asked her out in person, and she said yes and we had a great time."

I always figured Alice was just a jerk who was ruining the system for everyone else. After all, the whole premise was to incentivize honesty. Checkmark the names of people you honestly want to date. If they don't want to date you, they never hear about it, and you would be no worse off. If they do want to date you, the system will let you know, and you can arrange a date. If your pattern of checkmarks doesn't really match who you want to date, you're just screwing yourself and everyone else over for no reason.

A few months ago, someone asked me out on a date and I said yes. And I realized I hadn't checkmarked them on Reciprocity. This caused a crisis of self-loathing. What's wrong with me? Why would I go against my own incentives and ruin things for everyone else?

I asked a friend, who admitted she had done the same thing. Her theory was that asking someone on a date (with all of its accompanying awkwardness and difficulty) was a stronger signal of interest than ticking a checkbox. And potentially there's a grey zone of people who you would only date if you thought they liked you more than a certain amount. And asking them in person is hard enough to be a costly signal that you like them at least that amount, but ticking a checkbox isn't.
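The match-reveal rule described here is simple enough to sketch in code. This is a hypothetical illustration of the mechanism, not Reciprocity's actual implementation; the class and method names are invented.

# Hypothetical sketch of a Reciprocity-style reveal rule: a checkmark stays
# invisible to its target unless the target has also checkmarked you.
from collections import defaultdict

class MatchSite:
    def __init__(self):
        # checkmarks[user] = set of people that user has checkmarked
        self.checkmarks = defaultdict(set)

    def checkmark(self, user, target):
        """Record that `user` wants to date `target`; return True on a mutual match."""
        self.checkmarks[user].add(target)
        return self.is_match(user, target)

    def is_match(self, a, b):
        """A match is revealed only when the interest is mutual."""
        return b in self.checkmarks[a] and a in self.checkmarks[b]

site = MatchSite()
print(site.checkmark("Bob", "Alice"))   # False: Alice is never notified
print(site.checkmark("Alice", "Bob"))   # True: now both sides learn they matched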
Apr 11, 2019 • 19min

Short Book Reviews April 2019

Timothy Carey's Method Of Levels teaches a form of psychotherapy based on perceptual control theory. The Crackpot List is specific to physics. But if someone were to create one for psychiatry, Method of Levels would score a perfect 100%. It somehow manages to do okay on the physics one despite not discussing any physics.

The Method of Levels is the correct solution to every psychological problem, from mild depression to psychosis. Therapists may be tempted to use something other than the Method of Levels, but they must overcome this temptation and just use the Method of Levels on everybody. Every other therapy is about dismissing patients as "just crazy", but the Method of Levels tries to truly understand the patient. Every other therapy is about the therapist trying to change the patient, but the Method of Levels is about the patient trying to change themselves. The author occasionally just lapses into straight-up daydreams about elderly psychologists sitting on the porch, beating themselves up that they were once so stupid as to believe in psychology other than the Method of Levels.

This book isn't just bad, it's dangerous. One vignette discusses a patient whose symptoms clearly indicate the start of a manic episode. The author recommends that instead of stigmatizing this person with a diagnosis of bipolar or pumping them full of toxic drugs, you should use the Method of Levels on them. This is a good way to end up with a dead patient.

I like perceptual control theory. I share the author's hope that it could one day be a theory of everything for the brain. But even if it is, you can't use theories of everything to do clinical medicine. Darwin discovered a theory of everything for biology, but you can't reason from evolutionary first principles to how to treat a bacterial infection. You should treat the bacterial infection with antibiotics. This will be in accordance with evolutionary principles, and there will even be some cool evolutionary tie-ins (fungi evolved penicillin as a defense against bacteria). But you didn't discover penicillin by reasoning from evolutionary first principles. If you tried reasoning from evolutionary first principles, you might end up trying to make the bacteria mutate into a less dangerous strain during the middle of an osteomyelitis case or something. Just use actually existing clinical medicine and figure out the evolutionary justification for it later.
Apr 4, 2019 • 9min

Social Censorship: The First Offender Model

RJ Zigerell (h/t Marginal Revolution) studies public support for eugenics. He finds that about 40% of Americans support some form of eugenics. The policies discussed were very vague, like "encouraging poor criminals to have fewer children" or "encouraging intelligent people to have more children"; they did not specify what form the encouragement would take. Of note, much of the lack of support for eugenics stemmed from a belief that it would not work; people who believed the qualities involved were heritable were much more likely to support programs to select for them. For example, of people who thought criminality was completely genetic, a full 65% supported encouraging criminals to have fewer children.

I was surprised to hear this, because I thought moral opposition to eugenics was basically universal. If a prominent politician tentatively supported eugenics, it would provoke a media firestorm and they would get shouted down. This would be true even if they supported the sort of generally mild, noncoercive policies the paper seems to be talking about. How do we square that with a 40% support rate?

I think back to a metaphor for norm enforcement I used in an argument against Bryan Caplan:
Mar 31, 2019 • 4min

Two Wolves and a Sheep

Democracy is two wolves and a sheep deciding what to have for dinner. "Mutton" takes the popular vote, but "grass" wins in the Electoral College. The wolves wish they hadn't all moved into the same few trendy coastal cities.

Democracy is two wolves and a sheep deciding what to have for dinner. The Timber Wolf Party and the Gray Wolf Party spend most of their energy pandering shamelessly to the tiebreaking vote.

Democracy is two wolves and a sheep deciding what to have for dinner. Everyone agrees to borrow money, go to a fancy French restaurant, and leave the debt to the next generation.

Democracy is two wolves and a sheep deciding what to have for dinner. The sheep votes for the Wolf Party, because he agrees with them on social issues.

Democracy is two wolves and a sheep deciding what to have for dinner. "Grass" wins the tenth election in a row, thanks to the dominance of special interests.

Democracy is two wolves and a sheep deciding what to have for dinner. FactCheck.org rates the Wolf Party's claim that mutton can be made without harming sheep as "Mostly False".

Democracy is two wolves and a sheep deciding what to have for dinner. The main issue this election is whether two more sheep should be allowed to immigrate.

Democracy is two wolves and a sheep deciding what to have for dinner. A government shutdown is narrowly averted when everyone agrees to what becomes known as the Mutton With A Side Of Grass Compromise; disappointed activists are urged to "keep their demands realistic".

Democracy is two wolves and a sheep deciding what to have for dinner. They choose borscht. Election officials suspect foul play.
Mar 31, 2019 • 3min

Partial Retraction of Post on Animal Value and Neural Number

Commenter Tibbar used Mechanical Turk to replicate my survey on how people thought about the moral weights of animals. After getting 263 responses (to my 50), he reports different results:

Chicken: 25
Chimpanzee: 2
Cow: 3
Elephant: 1
Lobster: 60
Pig: 5
Human: 1

On the one hand, Mechanical Turkers sometimes aren't a great sample, and some of them seem to have just put the same number for every animal so they could finish quickly and get their money. They also probably haven't thought about this that much and don't have much of a moral theory behind what they're doing. This makes them a different demographic than the people I surveyed, who were a mix of vegetarians and principled non-vegetarians who had thought a lot about animal rights. For example, 80% of my sample answered yes to a question asking if they were "familiar with work by Brian Tomasik, OneStepForAnimals, etc urging people to eat beef rather than chicken".

On the other hand, this makes it pretty hard for me to claim my results are some kind of universal intuitive understanding of what animals are like. So I am partially retracting them (only partially, because of the consideration above) and adding this to my Mistakes page.

The best thing to do here would be to re-run my survey with a larger sample of a similar population, but unfortunately I've lost my chance to do that now that I've told you all this, so darn. Maybe I'll include it on next year's survey anyway and hope you've forgotten by then.
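For what it's worth, the two analysis steps mentioned above (dropping respondents who gave the same number for every animal, then taking per-animal medians) are easy to sketch. The response data below is invented for illustration; it is not Tibbar's dataset.

# Hypothetical sketch: drop "straight-lining" respondents who put the same
# value for every animal, then take the median of what remains.
import statistics

responses = [  # made-up example answers: animals per 1 human
    {"chicken": 100, "cow": 10, "lobster": 500, "pig": 20},
    {"chicken": 1,   "cow": 1,  "lobster": 1,   "pig": 1},   # straight-liner, dropped
    {"chicken": 30,  "cow": 4,  "lobster": 80,  "pig": 6},
]

# Keep only respondents whose answers actually vary across animals.
kept = [r for r in responses if len(set(r.values())) > 1]

medians = {animal: statistics.median(r[animal] for r in kept) for animal in kept[0]}
print(medians)  # {'chicken': 65.0, 'cow': 7.0, 'lobster': 290.0, 'pig': 13.0}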
Mar 28, 2019 • 8min

Cortical Neuron Number Matches Intuitive Perceptions of Moral Value Across Animals

[EDIT: No longer confident in this post, see edit note at bottom. May formally partially-retract it later.]

Yesterday's post reviewed research showing that animals' intelligence seemed correlated with their number of cortical neurons. If this is true, we could use it to create an absolute scale that puts animals and humans on the same ladder. Here are the numbers from this list. I can't find chickens, so I've used red junglefowl, the wild ancestor of chickens. I can't find cows, so I've eyeballed a number from other cow-sized ruminants (see here for some debate on this).

Some animal rights activists discuss the relative value of different species of animal. You have to eat a lot of steak to kill one cow, but you only have to eat a few chicken wings to kill one chicken. This suggests nonvegetarians trying to minimize the moral impact of their diet should eat beef, not chicken. But any calculation like this depends on assumptions about whether one cow and one chicken have similar moral values. Most people would say that they don't – the cow seems intuitively more "human" and capable of suffering – but most people would also say the cow isn't infinitely more valuable. Different animal rights people have come up with different ideas for exactly how we should calculate this.

I wondered how people's intuitive ideas about the moral value of animals would correspond to their cortical neuron count. I asked Tumblr users who believed that animals had moral value to fill out a survey (questions, results) estimating the relative value of each animal, in terms of how many animals = 1 human. Fifty people answered, including 21 vegetarians and 29 nonvegetarians. Their numbers ranged from 1 to putting their hand on the 9 key and leaving it there a while, but when I took the median, here's what I got:
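The table itself is cut off in this excerpt, but the beef-versus-chicken arithmetic described above can be made explicit: deaths per kilogram of meat, optionally weighted by each animal's cortical neurons relative to a human's. Every number below is a rough placeholder assumption, not a figure from the post's table.

# Back-of-the-envelope sketch of the beef-vs-chicken comparison.
# Meat yields and neuron counts are rough placeholder assumptions.

edible_kg = {"cow": 200.0, "chicken": 1.5}            # edible meat per animal (assumed)
cortical_neurons = {"cow": 3.0e9, "chicken": 2.2e8}   # placeholder counts (assumed)
human_cortical_neurons = 1.6e10                       # commonly cited rough figure

for animal in ("cow", "chicken"):
    deaths_per_kg = 1 / edible_kg[animal]
    # Weight each death by the animal's cortical neuron count relative to a human's.
    weighted = deaths_per_kg * cortical_neurons[animal] / human_cortical_neurons
    print(f"{animal}: {deaths_per_kg:.4f} deaths/kg, {weighted:.2e} neuron-weighted units/kg")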
Mar 27, 2019 • 10min

Neurons and Intelligence: A Birdbrained Perspective

Elephants have bigger brains than humans, so why aren't they smarter than we are? The classic answer has been to play down absolute brain size in favor of brain size relative to body. Sometimes people justify this as "it takes a big brain to control a body that size". But it really doesn't. Elephants have the same number of limbs as mice, operating on about the same mechanical principles. Also, dinosaurs had brains the size of walnuts and did fine. Also, the animal with the highest brain-relative-to-body size is a shrew.

The classic answer to that has been to look at a statistic called "encephalization quotient", which compares an animal's brain size to its predicted brain size given an equation that fits most animals. Sometimes people use brain weight = constant x (body weight)^0.66, where the constant varies depending on what kind of animal you're talking about. The encephalization quotient mostly works, but it's kind of a hack. Also, capuchin monkeys have higher EQ than chimps, but are not as smart. Also, some birds have lower encephalization quotients than small mammals, but are much smarter. So although EQ usually does a good job predicting intelligence, it's definitely not perfect, and it doesn't tell us what intelligence is.

A new AI Impacts report on animal intelligence, partly based on research by Suzana Herculano-Houzel, starts off here. If we knew what made some animals smarter than others, it might help us figure out what intelligence is in a physiological sense, and that might help us predict the growth of intelligence in future AIs.

AII focuses on birds. Some birds are very intelligent: crows can use tools, songbirds seem to have a primitive language, parrots can learn human speech. But birds have tiny brains, whether by absolute standards or EQ. They also have very different brains than mammals: while mammals have a neocortex arranged in a characteristic pattern of layers, birds have a different unlayered structure called the pallium with neurons "organized into nuclei". So bird intelligence is surprising both because of their small brains, and because it suggests high intelligence can arise in brain structures very different from our own.
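Since EQ comes up repeatedly above: it is just observed brain mass divided by the mass predicted from the allometric fit brain = constant x (body)^0.66. A minimal sketch; the constant 0.12 (with masses in grams) is a commonly quoted mammalian value, and the example masses are rounded illustrations.

# Encephalization quotient: observed brain mass / predicted brain mass,
# where predicted = k * (body mass)^0.66. k = 0.12 (masses in grams) is a
# commonly quoted mammalian constant; the example masses are rounded.

def encephalization_quotient(brain_g: float, body_g: float,
                             k: float = 0.12, exponent: float = 0.66) -> float:
    predicted_brain_g = k * body_g ** exponent
    return brain_g / predicted_brain_g

print(encephalization_quotient(brain_g=1350, body_g=65_000))     # human, roughly 7.5
print(encephalization_quotient(brain_g=4800, body_g=5_000_000))  # elephant, roughly 1.5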
Mar 22, 2019 • 11min

Translating Predictive Coding Into Perceptual Control

Wired wrote a good article about Karl Friston, the neuroscientist whose works I've puzzled over here before. Raviv writes:

Friston's free energy principle says that all life…is driven by the same universal imperative…to act in ways that reduce the gulf between your expectations and your sensory inputs. Or, in Fristonian terms, it is to minimize free energy.

Put this way, it's clearly just perceptual control theory. Powers describes the same insight like this:

[Action] is the difference between some condition of the situation as the subject sees it, and what we might call a reference condition, as he understands it.

I'd previously noticed that these theories had some weird similarities. But I want to go further and say they're fundamentally the same paradigm. I don't want to deny that the two theories have developed differently, and I especially don't want to deny that free energy/predictive coding has done great work building in a lot of Bayesian math that perceptual control theory can't match. But the foundations are the same.

Why is this of more than historical interest? Because some people (often including me) find free energy/predictive coding very difficult to understand, but find perceptual control theory intuitive. If these are basically the same, then someone who wants to understand free energy can learn perceptual control theory and then a glossary of which concepts match to each other, and save themselves the grief of trying to learn free energy/predictive coding just by reading Friston directly.
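The shared structure being claimed here is, at bottom, a negative feedback loop: compare a perception to a reference (the "expectation"), and act to shrink the gap. A minimal sketch of that loop follows; the numbers and the proportional controller are illustrative, not the formal math of either theory.

# Minimal perceptual-control / prediction-error loop: act so as to reduce the
# difference between a reference (what you expect to perceive) and what you
# actually perceive. The values and proportional gain are illustrative only.

reference = 20.0    # the "expected" or desired perception
perception = 12.0   # current sensory input
gain = 0.5          # how strongly the error drives action

for step in range(8):
    error = reference - perception   # Powers-style error signal
    action = gain * error            # act in proportion to the error
    perception += action             # acting on the world changes the perception
    print(f"step {step}: perception = {perception:.2f}, error was {error:.2f}")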
Mar 21, 2019 • 52min

Book Review: Inventing the Future

They say "don't judge a book by its cover". So in case you were withholding judgment: yes, this bright red book covered with left-wing slogans is, in fact, communist. Inventing The Future isn't technically Nick Srnicek and Alex Williams' manifesto – that would be the equally-striking-looking Accelerate Manifesto. But it's a manifesto-ish description of their plan for achieving a postcapitalist world.

S&W start with a critique of what they call "folk politics", eg every stereotype you have of lazy left-wing activists. Protesters who march out and wave signs and then go home with no follow-up plan. Groups that avoid having any internal organization, because organization implies hierarchy and hierarchy is bad. The People's Front of Judaea wasting all their energy warring with the Judaean People's Front. An emphasis on spectacle and performance over results. We've probably all heard stories like this, but some of S&W's are especially good, like one from an activist at a trade summit:

On April 20, the first day of the demonstrations, we marched in our thousands toward the fence, behind which 34 heads of state had gathered to hammer out a hemispheric trade deal. Under a hail of catapult-launched teddy bears, activists dressed in black quickly removed the fence's support with bolt cutters and pulled it down with grapples as onlookers cheered them on. For a brief moment, nothing stood between us and the convention centre. We scrambled atop the toppled fence, but for the most part we went no further, as if our intention all along had been simply to replace the state's chain-link and concrete barrier with a human one of our own making.

S&W comment:

We see here the symbolic and ritualistic nature of the actions, combined with the thrill of having done something – but with a deep uncertainty that appears at the first break with the expected narrative. The role of dutiful protester had given these activists no indication of what to do when the barriers fell. Spectacular political confrontations like the Stop the War marches, the now familiar melees against G20 or World Trade Organization and the rousing scenes of democracy in Occupy Wall Street all give the appearance of being highly significant, as if something were genuinely at stake. Yet nothing has changed, and long-term victories were traded for a simple registration of discontent. To outside observers, it is often not even clear what the movements want, beyond expressing a generalized discontent with the world…in more recent struggles, the very idea of making demands has been questioned. The Occupy movement infamously struggled to articulate meaningful goals, worried that anything too substantial would be divisive. And a broad range of student occupations across the Western world has taken up the mantra of "no demands" under the misguided belief that demanding nothing is a radical act.
Mar 17, 2019 • 17min

Gwern's AI-Generated Poetry

Gwern has answered my prayers and taught GPT-2 poetry.

GPT-2 is the language processing system that OpenAI announced a few weeks ago. They are keeping the full version secret, but have released a smaller prototype version. Gwern retrained it on the Gutenberg Poetry Corpus, a 117 MB collection of pre-1923 English poetry, to create a specialized poetry AI.

I previously tested the out-of-the-box version of GPT-2 and couldn't make it understand rhyme and meter. I wrongly assumed this was a fundamental limitation: "obviously something that has never heard sound can't derive these complex rhythms just from meaningless strings of letters." I was wrong; it just didn't have enough training data. Gwern's retrained version gets both of these right, and more too. For example:

Thou know'st how Menoetiades the swift
Was dragged, of Hector and the fierce compeers
And Phrygian warriors. So, we will dispatch
Your bodies, then, yourselves to burn the ships
In sacrifice; with torches and with bells
To burn them, and with oxen to replace
Your gallant friends for ever. But I wish
That no man living has so long endured
The onset of his foes, as I have power
To burn or storm; for mighty Hector erst
Was slain, and now returns his safe return

This is all perfect iambic pentameter. I know AP English students who can't write iambic pentameter as competently as this. (by the way, both "compeers" and "erst" are perfectly cromulent words from the period when people wrote poems like this; both show up in Shelley)

It has more trouble with rhymes – my guess is a lot of the poetry it was trained on was blank verse. But when it decides it should be rhyming, it can keep it up for a little while. From its Elegy Written in a Country Churchyard fanfic:
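The Elegy sample itself is cut off in this excerpt, but for readers curious what sampling verse from a retrained GPT-2 looks like in practice, here is a minimal sketch using the Hugging Face transformers library. This is not the tooling Gwern actually used, and the checkpoint name and sampling settings are placeholders.

# Sketch of sampling verse from a GPT-2 model that has been fine-tuned on a
# poetry corpus. Uses Hugging Face transformers as a stand-in for the actual
# fine-tuning pipeline; "gpt2" is the small released model, and you would
# substitute your own fine-tuned checkpoint path.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

checkpoint = "gpt2"  # placeholder: replace with a checkpoint fine-tuned on the Gutenberg Poetry Corpus
tokenizer = GPT2TokenizerFast.from_pretrained(checkpoint)
model = GPT2LMHeadModel.from_pretrained(checkpoint)

prompt = "Thou know'st how Menoetiades the swift\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,          # sample rather than greedy-decode, for varied lines
    temperature=0.9,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))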
