Astral Codex Ten Podcast

Jeremiah
Jun 27, 2018 • 42min

Book Review: Capital in the Twenty-First Century

[Epistemic status: I am not an economist. Many people who are economists have reviewed this book already. I review it only because if I had to slog through reading this thing I at least want to get a blog post out of it. If anything in my review contradicts that of real economists, trust them instead of me.]

I.

Thomas Piketty's Capital In The Twenty-First Century isn't just a book on inequality. It's a book about quantitative macroeconomic history. This is much more interesting than it sounds.

Piketty spent decades combing through primary sources trying to get good statistics for what the economies of various Western countries have been doing over the past 250 years. Armed with these data, he tries to put together a theory of the very-long-term forces at work in economic change. His results touch on almost every big question in politics and economics, and are able to propose sweeping theories where other people resort to parochial speculation. While more knowledgeable people than I are probably already familiar with much of this, I used him as an Econ History 101 textbook and was not at all disappointed in the results.

The most important thing I learned from Piketty is that since the Industrial Revolution, normal economic growth has always been (and maybe always will be) between 1% and 1.5% per year. This came as news to me, since I often hear about countries and eras with much higher growth rates. But Piketty says all such situations are abnormal in one of a few ways.

First, they can have high population growth. Population growth will increase GDP, and it will look like a high economic growth rate. But it doesn't increase GDP per capita and it shouldn't be considered the same as normal economic growth, which is always between 1% and 1.5% per year.

Second, they can have temporary bubbles. This definitely happens, but after the inevitable bust, the whole period will eventually average out to 1% to 1.5% per year.

Third, they can have "catch-up growth". This is a broad category covering any period when a country that was previously underperforming its fundamentals gets a chance to catch up. This can happen after a long war in which a devastated country gets a chance to rebuild. Or it can happen after dropping communism or some other inefficient economic system, as the country transitions to a more practical form of production. Or it can happen when a Third World country globalizes and gets the benefits of First World technology and organization.

But if a country is at peace and on the "technological frontier" (ie one of the highest-tech countries that has to invent its own advances and can't get them by osmosis from somewhere else), it will always have growth of 1% to 1.5% per year.
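To make the 1%-1.5% figure concrete, here is a minimal sketch (my own illustration, not Piketty's numbers) of what "normal" growth compounds to over a century, and of why high population growth can masquerade as high economic growth:

```python
# Rough illustration of the growth claims above (my example figures, not Piketty's).

def compound(rate, years):
    """Total growth factor after compounding `rate` for `years` years."""
    return (1 + rate) ** years

# "Normal" per-capita growth on the technological frontier:
for rate in (0.01, 0.015):
    print(f"{rate:.1%}/year for a century -> x{compound(rate, 100):.2f}")
# 1.0%/year for a century -> x2.70
# 1.5%/year for a century -> x4.43

# Headline GDP growth is roughly per-capita growth plus population growth,
# so a country growing 3%/year with 2%/year population growth is still
# only ~1%/year in the per-capita terms that matter here:
gdp_growth, pop_growth = 0.03, 0.02
per_capita = (1 + gdp_growth) / (1 + pop_growth) - 1
print(f"per-capita growth ~ {per_capita:.2%}")  # ~0.98%
```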
Jun 23, 2018 • 8min

Cost Disease in Medicine: the Practical Perspective

Sometimes I imagine quitting my job and declaring war on cost disease in medicine. I would set up a practice with a name like Cheap-O Psychiatry. The corny name would be important. It would be a statement of values. It would weed out the people who would say things like "How dare you try to put a dollar value on the health of a human being!" Those people are how we got into this mess, and they would be welcome to keep dealing with the unaffordable health system they helped create. Cheap-O Psychiatry would be for everyone else.

Cheap-O Psychiatry wouldn't have an office, because offices cost money. You would Skype, from your house to mine. It wouldn't have a receptionist, because receptionists cost money. You would book a slot in my Google Calendar. It wouldn't have a billing department, because billing departments cost money. You would PayPal me the cost of the appointment afterwards – or, to be really #aesthetic, use cryptocurrency.

The Cheap-O website would include a library of great resources on every subject. How To Eat Right. How To Get Good Sleep. How To Find A Good Therapist. The Cognitive Behavioral Therapy Workbook. The Meditation Relaxation Tape. But the flip side would be that Cheap-O appointments would be brutally efficient. If you had problems with sleep, I would evaluate you for any relevant diseases, give you any medications that might be indicated, then tell you to read the How To Get Good Sleep guide on the website. Boom, done. Small talk would be absolutely banned.

How little could Cheap-O charge? Suppose I wanted to earn an average psychiatrist salary of about $200K – the whole point of cost disease is that we should be able to lower prices without anyone having to take a pay cut. And suppose I work a 40 hour week, 50 weeks a year, each appointment takes 15 minutes, and 75% of my workday is patient appointments. That's 6000 appointments per year. So to make my $200K I would need to charge about $35 per appointment. There would be a few added costs – malpractice insurance would probably run about $10K per year – but this is the best-case scenario.
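The arithmetic above, spelled out as a back-of-the-envelope sketch using only the post's own assumptions:

```python
# Back-of-the-envelope math for Cheap-O Psychiatry, using the figures above.

target_salary = 200_000          # desired annual income, $
hours_per_year = 40 * 50         # 40-hour weeks, 50 weeks/year
appointment_hours = 0.75 * hours_per_year   # 75% of the workday is appointments
appointments = appointment_hours * 4        # 15-minute slots -> 4 per hour

print(appointments)                          # 6000.0 appointments per year
print(target_salary / appointments)          # ~$33.33 base price

# Folding in ~$10K/year of malpractice insurance lands at the quoted ~$35:
print((target_salary + 10_000) / appointments)  # $35.00
```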
Jun 23, 2018 • 10min

Contra Caplan on Arbitrary Deploring

Last year, Bryan Caplan wrote about what he called The Unbearable Arbitrariness Of Deploring:

Let's start with the latest scandal. People all over the country – indeed, the world – have recently discovered that many celebrities are habitual sexual harassers. Each new expose leads to public outrage and professional ostracism. Why does this confuse me? Because many celebrities do many comparably bad things other than sexual harassment, and virtually no one cares.

Suppose, for example, that a major celebrity is extremely emotionally abusive to all his subordinates. He screams at them all the time. He calls them the cruelest names he can devise. He habitually makes impossible demands. He threatens to fire them out of sheer sadistic pleasure. But the abuse is never sexual (or ethnic); the celebrity limits himself to attacking subordinates' intelligence, character, pride, and hope for the future. I daresay the average employee would far prefer to work for a boss who occasionally pressured them for a date. But if the tabloids ran a negative profile on the Asexual Boss from Hell, the public wouldn't get very mad and Hollywood almost certainly wouldn't ostracize the offender […]

Or to take a far more gruesome case: When the Syrian government last used poison gas, killing roughly a hundred people, the U.S. angrily deployed retaliatory bombers, to bipartisan acclaim. But when the Syrian government murdered vastly more with conventional weapons, the U.S. government and its citizenry barely peeped. The unbearable arbitrariness of deploring!

In the past, I've made similar observations about Jim Crow versus immigration laws, and My Lai versus Hiroshima. In each case, I can understand why people would have strong negative feelings about both evils. I can understand why people would have strong negative feelings about neither. I can understand why people would have strong negative feelings about the greater evil, but not the lesser evil. But I can't understand why people would have strong negative feelings about the lesser evil, but care little about the greater evil. Or why they would have strong negative feelings about one evil, but yawn in the face of a comparable evil.

He concludes people are just biased by dramatic stories and like jumping on bandwagons. Everyone else is getting upset about the chemical weapon attack, and people are sheep, so they join in.

I have a different theory: people get upset over the violation of already-settled bright-line norms, because this is the correct action if you want to use limited enforcement resources efficiently.
Jun 21, 2018 • 23min

The GATTACA Trilogy

[Few people realize that the 1997 cult hit GATTACA was actually just the first film in a three-movie trilogy. The final two movies, directed by the legendary Moira LeQuivalence, were flops which only stayed in theaters a few weeks and have since become almost impossible to find. In the interest of making them available to the general public, I've written summaries of some key scenes below. Thanks to user Begferdeth from the subreddit for the idea.]

GATTACA II: EPI-GATTACA

"Congratulations, Vincent", said the supervisor, eyes never looking up from his clipboard. "You passed them all. The orbital mechanics test. The flight simulator. All the fitness tests. More than passed. Some of the highest scores we've ever seen, frankly. You're going to be an astronaut."

Vincent's heart leapt in his chest.

"Pending, of course, the results of the final test. But this will be easy. I'm sure a fine specimen like you will have no trouble."

"The…the final test, sir?"

"Well, you know how things are. We want to make sure we get only the healthiest, most on-point individuals for our program. We used to do genetic testing, make sure that people's DNA was pre-selected for success. But after the incident with the Gattaca Corporation and that movie they made about the whole thing, public opinion just wasn't on board, and Congress nixed the whole enterprise. Things were really touch-and-go for a while, but then we came up with a suitably non-invasive replacement. Epigenetics!"

"Epi…genetics?" asked Vincent. He hoped he wasn't sounding too implausibly naive – he had, after all, just aced a whole battery of science tests. But surely there were some brilliant astronomers who didn't know anything about biology. He would pretend to be one of those.

The supervisor raised an eyebrow, but he went on. "Yes, epigenetics. According to studies, stressful experiences – anything from starvation to social marginalization – change the methylation pattern of your genes. And not just your genes. Some people say that these methylation patterns can transfer to your children, and your children's children, and so on, setting them back in life before they're even born. Of course, it would be illegal for us to take a sample and check your methylation directly – but who needs that! In this day and age, everybody's left a trail online. We can just check your ancestors' life experiences directly, and come up with a projection of your methylation profile good enough to predict everything from whether you'll have a heart attack to whether you'll choke under pressure at a crucial moment. I'll just need to see your genealogy, so we can run it through this computer here…you did bring it like we asked you, right? Of course you did! A superior individual like you, probably no major family traumas going back five, six generations – I bet you've got it all ready for me."
Jun 8, 2018 • 16min

HPPD and the Specter of Permanent Side Effects

I recently worked with a man who took LSD once in college and never stopped hallucinating. It's been ten years now and it's still going. We can control it with medication, but take the meds away and it starts right back up again.

This is a real disease – hallucinogen persisting perception disorder. Most descriptions of the condition emphasize that it's just some of the visual effects and doesn't involve distorted reality perception. I'm not sure I believe this – my patient has some weird thoughts sometimes, and 65% of HPPD patients have panic attacks related to their symptoms. Maybe if you can see the walls bubbling, you're going to be having a bad time whether you believe it's "really true" or not.

Estimates of prevalence vary. It seems more common on LSD and synthetic cannabinoids, less common (maybe entirely absent) on psilocybin and peyote. Some people say about 1-4% of LSD users will get some form of this, which seems shockingly high to me – why don't we hear about this more often? If I were a drug warrior or DARE instructor, I would never shut up about this. But if most people just get some mild visual issues – by all accounts the most common form of the condition – maybe they never tell anybody. Maybe 1-4% of people who have tried LSD are walking around with slightly distorted perception all the time.

There's a lot to say about this from an epidemiological or cultural perspective. But I want to talk about the pharmacology. How can this happen? Why should a drug with a half-life of a few hours have permanent effects on your psyche?

It can't be that the LSD sticks around. That doesn't make metabolic sense. And a study discussed here using radio-labeled LSD definitively finds that although a few molecules might stay in the body up to a week or so, there's no reason to think the drug can last longer than this. I like this study, both for its elegant design and because it implies that somewhere someone got a consent form saying "we're going to give you radioactive LSD" and thought "sure, why not?"

But then why does it have permanent effects? I know very few other situations where this happens, aside from obvious stuff like "it gives you a stroke and then you're permanently minus one lobe of your brain". The only other open-and-shut case 100% accepted by every textbook is a movement disorder called tardive dyskinesia. If you take too many antipsychotics for too long, you can get involuntary tremors and gyrations that never go away, even off the antipsychotic. Although traditionally associated with very-long-term antipsychotic use, in a few very rare cases you can get it from a single dose. On the other hand, most people can take antipsychotics for decades without developing any problems.

Some other possibilities are controversial but plausible. The sexual side effects of SSRIs almost always stop within a few months of stopping the medication, but a few people have reported cases where they can last years or decades. Psychedelics may permanently increase openness and hypnotizability, though it's unclear if this is biochemical or just that drug trips are a life-changing experience – see my discussion here for more. Also, for every drug that has a mild week-long withdrawal syndrome in the average population, you can find a handful of people who claim to have had a five-year protracted nightmare of withdrawal symptoms that never go away.

So, again, how does this happen? Every discussion of HPPD etiology I've seen is speculative and admits it doesn't know what it's talking about.
Also, most of them are in gated papers I can't access. But a few papers seem to gesture at a theory where LSD kills an undetectably small number of very important neurons. Hermle et al talk about "the excitotoxic destruction of inhibitory interneurons that carry serotonergic and GABAergic receptors on their cell bodies and terminals, respectively". Martinotti seems to be drawing from the same inaccessible source in mentioning "an LSD-generated intense current that may determine the destruction or dysfunction of cortical serotonergic inhibitory interneurons with gamma-Aminobutyric acid (GABAergic) outputs, implicated in sensory filtering mechanisms of unnecessary stimuli".

This would require some extra work to explain the coincidence of why the effects of HPPD are so similar to the effects of an LSD trip itself. In particular, if we're talking excitotoxicity, shouldn't the neurons be stimulated (ie more active) in the tripper, but dead (ie less active) in the HPPD patient? Maybe the tripper's neurons are just so overwhelmed that they temporarily stop working? Or maybe you could interpret the comments above to be about LSD exciting some base population of neurons, the relevant inhibitory neurons having to work impossibly hard to inhibit them, and then the inhibitory neurons die of exhaustion/excitotoxicity.

Against cell-death-based explanations, some people seem to recover from HPPD after a while. But this could just be the same kind of brain plasticity that eventually lets people recover from strokes that kill off whole brain regions. The body is usually pretty good at routing around damage if you give it long enough.
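To put numbers on the half-life argument above: under simple first-order elimination, essentially none of the original drug can remain after a week. This is a minimal sketch; the ~3-hour half-life is a ballpark assumption of mine, not a figure from the post:

```python
# How much LSD could plausibly remain a week after a dose?
# Assumes simple first-order elimination with a ~3-hour half-life
# (a ballpark figure; the exact number doesn't change the conclusion).

half_life_hours = 3.0
hours_per_week = 7 * 24                          # 168 hours
half_lives = hours_per_week / half_life_hours    # 56 half-lives

remaining_fraction = 0.5 ** half_lives
print(f"{half_lives:.0f} half-lives -> {remaining_fraction:.1e} of the dose left")
# 56 half-lives -> 1.4e-17 of the dose left

# A 100-microgram dose is ~1.9e17 molecules (molecular weight ~323 g/mol),
# so after a week simple kinetics predicts on the order of a single molecule
# remaining -- consistent with "up to a week or so, but no longer".
```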
Jun 3, 2018 • 10min

In Search of Missing US Suicides

[Content warning: suicide. Thanks to someone on Twitter I forget for alerting me to this question]

Among US states, there's a clear relationship between gun ownership rates and suicide rates, but not between gun ownership rates and homicide rates. You might conclude guns increase suicides but not homicides. Then you might predict that the gun-loving US would be an international outlier in suicides but not homicides. In fact, it's the opposite.

Why should this be? We've already discussed why US homicide rates are so high. But why isn't the suicide rate elevated?

One possibility: suicide methods are fungible. If guns are easily available, you might use a gun; if not, you might overdose, hang yourself, or jump off a bridge. So getting rid of one suicide method or another doesn't do much.

This sounds plausible, but it's the opposite of scientific consensus on the subject. See for example Controlling Access To Suicide Means, which says that "restrictions of access to common means of suicide has lead to lower overall suicide rates, particularly regarding suicide by firearms in USA, detoxification of domestic and motor vehicle gas in England and other countries, toxic pesticides in rural areas, barriers at jumping sites and hanging…" This is particularly brought up in the context of US gun control – see eg Suicide, Guns, and Public Policy, which describes "strong empirical evidence that restriction of access to firearms reduces suicides".

The state-level data from above support this view – taking guns away from a state does decrease its suicide rate. And then there's a graph from Armed With Reason which shows that adding more guns to a state does not decrease its nonfirearm suicide rate.

But if suicide methods aren't fungible, then why doesn't the US have higher suicide rates? Here's another way of asking this question: the US has fewer nongun suicides than anywhere else. The seemingly obvious explanation is that guns are so common that everyone who wants to commit suicide is using guns, decreasing the non-gun rate. But that contradicts all the nonfungibility evidence above. So the other possibility is that the US ought to have a very low suicide rate, and it's just all our guns that are bringing us back up to average.

Of all US states, Massachusetts, New Jersey, and Hawaii have the fewest guns. Unsurprisingly, suicides in these states are less likely than average to be committed with firearms. In MA, the rate is 22%; in NJ, 24%; in HI, 20%. Their suicide rates are 8.8, 7.2, and 12.1, respectively.

Hawaii has an unusual ethnic composition – 40% Asian and 20% Native Hawaiian, both groups with high suicide rates (see eg the suicide rate for Japan above). So it might be worth taking Massachusetts and New Jersey as examples to look at in more detail. Either state, if it were independent, would be among the lowest-suicide-rate developed nations. And both still have more guns than our comparison countries. If we did a really simple linear extrapolation from New Jersey-level gun control to imagine a state where firearms were as restricted as in Britain, we would expect it to have a suicide rate of around 5 or 6 – which is around the current level of non-gun US suicides. This is much lower than any of the large comparison countries in the graph above, but there are two developed countries currently around this level – Italy and Israel. I think it makes sense to suppose that the US might have a low Italy/Israel-style base rate of suicides.
For one thing, it's unusually religious for a developed country. Religion is one of the strongest protective factors against suicide. This also seems like a good explanation for Italy and Israel.

For another, it's culturally similar to Britain, which also has a low suicide rate somewhere in the 7s. Other British colonies don't seem to have kept this effect – Australia and Canada are both higher – but maybe the US did.

And for another, it's unusually ethnically diverse. Blacks and Hispanics have only about half the suicide rate of whites, which means you would expect the US to be less suicidal than Europe. I previously believed this was because whites had more guns, but this doesn't seem to be true: Riddell et al find that whites have higher non-firearm suicide rates too. So this could be an additional factor driving US rates down.
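As a quick check on the state figures above, here is a minimal sketch computing the implied non-firearm suicide rates, using only the numbers quoted in the post:

```python
# Implied non-firearm suicide rates for the three low-gun states above,
# computed from the quoted overall rates and firearm shares.

states = {
    # state: (suicides per 100k, fraction of suicides by firearm)
    "MA": (8.8, 0.22),
    "NJ": (7.2, 0.24),
    "HI": (12.1, 0.20),
}

for state, (rate, gun_share) in states.items():
    non_gun = rate * (1 - gun_share)
    print(f"{state}: non-firearm rate ~ {non_gun:.1f} per 100k")
# MA: ~6.9, NJ: ~5.5, HI: ~9.7 -- New Jersey is already near the 5-6 range
# the post extrapolates to for a hypothetical fully gun-controlled US state.
```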
May 30, 2018 • 55min

Highlights from the Comments on Basic Jobs

These are some of the best comments from Basic Income, Not Basic Jobs: Against Hijacking Utopia. I'm sorry I still haven't gotten a chance to read everything that people have written about it (in particular I need to look more into Scott Sumner's take). Sorry to anyone with good comments I left out.

Aevylmar corrects my claim that Milton Friedman supported a basic income:

Technically speaking, what Milton Friedman advocated was a negative income tax, which (he thought, and I think) would be much more efficient than basic income – I don't remember if these are his arguments, but the arguments I know for it are that the IRS can administer it with the resources it has without you needing a new bureaucracy, it doesn't have the same distortionary effects that lump sum payment + percentage tax does, and it's probably easier to pass through Congress, since it looks as though it costs less and doesn't have the words 'increasing taxes' in it.

And Virbie further explains the differences between UBI and negative income tax:

The main difference is that discussing it in terms of NIT neatly skips over a lot of the objections that people raise to flat UBIs that are abstractly and mathematically (but not logistically or politically) trivial. Many of these focus on how to get to the new policy position from where we are now. For example, people ask both about how a flat UBI would be funded and why rich people should receive a UBI. Given that the tax load to fund a basic income plan would likely fall on the upper percentiles or deciles, a flat UBI + an increase in marginal tax rates works out to a lump sum tax cut for high-earners and a marginal tax increase. Adding negative tax brackets at the bottom of the existing system and modifying top marginal rates is a simpler way to handle this and extends gracefully from the current system instead of having to work awkwardly alongside it.

In the example above, the NIT approach has the logistical advantage of the bureaucracy and systems we already have handling it more easily. And the political advantage of the net cost of the basic income guarantee looking far smaller than for flat UBI, since we're not including the lump sum payments to upper-income people (that are more than offset by their marginal tax increases).

There's some further debate on the (mostly trivial) advantages of NIT or UBI over the other in the rest of the thread; a worked numeric sketch of the equivalence appears at the end of this post.

Tentor describes Germany's experience with a basic-jobs-like program:

We had/have a similar thing to basic jobs in Germany and it worked about as well as you would expect. Companies could hire workers for 1€/hour and the state would pay social security on top of that. The idea was that long-term unemployed people would find their way back to employment this way, but companies just replaced them with new 1€-workers when their contract was over and reduced fully-paid employment because duh! Plus people on social security can be forced to take jobs or education. As a result a lot of our homeless are depressed people who stopped responding to social security demands because that's what caused their depression. (Links are to German Wikipedia, maybe Google translate helps)

Another German reader adds:

I agree that it doesn't work as expected in Germany, but I think it is important to point out that not everyone is allowed to hire workers for 1€. The work has to be neutral to the competition and in the public interest. So people are hired at a lot of public institutions (e.g. schools, universities, cleaning up the city).
Additionally, these jobs improved the unemployment statistics at a low cost for the government, as people working in these jobs count as employed even though most of them are only part-time.
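A minimal numeric sketch of the UBI/NIT equivalence Virbie describes above, with illustrative parameters of my own choosing (a $12,000 guarantee and a 30% flat marginal rate, not figures from the thread):

```python
# Flat UBI + flat tax vs. an equivalent negative income tax (NIT).
# Both schedules produce identical net incomes; they differ only in how
# the flows are booked, which is Virbie's point about cost optics.

GUARANTEE = 12_000   # annual basic income / NIT guarantee, $
RATE = 0.30          # flat marginal tax rate

def net_ubi(income):
    """Everyone gets the lump sum; all income is taxed at RATE."""
    return GUARANTEE + income * (1 - RATE)

def net_nit(income):
    """Negative bracket: the guarantee phases out at RATE; above the
    break-even point (GUARANTEE / RATE = $40k here) you just pay tax."""
    breakeven = GUARANTEE / RATE
    if income < breakeven:
        return income + (breakeven - income) * RATE  # net transfer received
    return income - (income - breakeven) * RATE      # net tax paid

for income in (0, 20_000, 40_000, 100_000):
    print(income, net_ubi(income), net_nit(income))
# 0       12000.0  12000.0
# 20000   26000.0  26000.0
# 40000   40000.0  40000.0
# 100000  82000.0  82000.0
```

The schedules are identical, but the NIT version only ever books the net transfers, so its headline "cost" excludes the offsetting lump-sum payments to high earners.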
May 26, 2018 • 30min

Should Psychiatry Test for Lead More?

Dr. Matthew Dumont treated a 44-year-old woman with depression, body dysmorphia, and psychosis. She failed to respond to most of the ordinary treatments, failed to respond to electroconvulsive therapy, and seemed generally untreatable until she mentioned offhandedly that she spent evenings cleaning up after her husband's half-baked attempts to scrape lead paint off the walls. Blood tests revealed elevated lead levels, the doctor convinced her to be more careful about lead exposure, and even though that didn't make the depression any better, at least it was a moral victory.

The story continues: Dr. Dumont investigated lead more generally, found that a lot of his most severely affected patients had high lead levels, discovered that his town had a giant, poorly-maintained lead bridge that was making everyone sick, and – well, the rest stops being a story about psychiatry and turns into a (barely believable, outrageous) story about politics. Read the whole thing on Siderea's blog.

Siderea continues by asking: why don't psychiatrists regularly test for lead?

Now, in my case, I'm a talk therapist, and worrying about patients maybe being poisoned is not even supposed to be on my radar. I'm supposed to trust the MDs to handle it. Dumont, however, is just such an MD. And that this was a clinical possibility was almost entirely ignored by his training.

Dumont's point here is that while "medical science" knows about the psychiatric effects of lead poisoning and carbon disulfide poisoning and other poisons that have psychiatric effects – as evidenced by his quoting from the scientific literature – psychiatry as practiced in the hospitals and clinics behaves as if it knows no such thing. Dumont is arguing that, in fact, he knew no such thing, because his professional training as a psychiatrist did not include it as a fact, or even as a possibility of a fact.

Dumont's point is that psychiatry, as a practical, clinical branch of medicine, has acted, collectively, as if poisoning is just not a medical problem that comes up in psychiatry. Psychiatry generally did not consider poisoning, whether by lead or any other noxious substance, as a clinical explanation for psychiatric conditions. By which I mean, that when a patient presented with the sorts of symptoms he described, the question was simply never asked, is the patient being poisoned?

Dumont wants you to be shocked and horrified by what was done to those people, yes. He also wants you to be shocked and horrified by this: psychiatry as a profession – in the 1970s, when (I believe) the incidents he relates were happening, in the 1990s, when he wrote it in his book, or in 2000 when a journal on public health decided to publish it – psychiatry as a profession did not ask the question is the patient being poisoned?

And it didn't ask the question, because clinical psychiatry had other explanations it liked better, to which it had a priori philosophical commitments. And that, when you think through what it means for psychiatry, is absolutely chilling.

And:
May 25, 2018 • 45min

Can Things Be Both Popular and Silenced?

The New York Times recently reported on various anti-PC thinkers as "the intellectual dark web", sparking various annoying discussions. The first talking point – that the term is silly – is surely true. So is the second point – that it awkwardly combines careful and important thinkers like Eric Weinstein with awful demagogues like Ben Shapiro. So is the third – that people have been complaining about political correctness for decades, so anything that portrays this as a sudden revolt is ahistorical. There are probably more good points buried within the chaff.

But I want to focus on one of the main arguments that's been emphasized in pretty much every article: can a movement really claim it's being silenced if it's actually pretty popular?

"Silenced" is the term a lot of these articles use, and it's a good one. "Censored" awkwardly suggests government involvement, which nobody is claiming. "Silenced" just suggests that there's a lot of social pressure on its members to shut up. But shutting up is of course the exact opposite of what the people involved are doing – as the Times points out, several IDW members have audiences in the millions, monthly Patreon revenue in the five to six figures, and (with a big enough security detail) regular college speaking engagements.

So, from New Statesman, If The "Intellectual Dark Web" Are Being Silenced, Why Do We Need To Keep Hearing About Them?:

The main problem with the whole profile is that it struggles because of a fundamental inherent contradiction in its premise, which is that this group of renegades has been shunned but are also incredibly popular. Either they are persecuted victims standing outside of society or they are not. Joe Rogan "hosts one of the most popular podcasts in the country", Ben Shapiro's podcast "gets 15 million downloads a month". Sam Harris "estimates that his Waking Up podcast gets one million listeners an episode". Dave Rubin's YouTube show has "more than 700,000 subscribers", Jordan Peterson's latest book is a bestseller on Amazon […]

On that basis alone, should this piece have been written at all? The marketplace of ideas that these folk are always banging on about is working. They have found their audience, and are not only popular but raking it in via Patreon accounts and book deals and tours to sold-out venues. Why are they not content with that? They are not content with that because they want everybody to listen, and they do not want to be challenged. In the absence of that, they have made currency of the claim of being silenced, which is why we are in this ludicrous position where several people with columns in mainstream newspapers and publishing deals are going around with a loudhailer, bawling that we are not listening to them.

Reason's article is better and makes a lot of good points, but it still emphasizes this same question, particularly in their subtitle: "The leading figures of the 'Intellectual Dark Web' are incredibly popular. So why do they still feel so aggrieved?". From the piece:

They can be found gracing high-profile cable-news shows, magazine opinion pages, and college speaking tours. They've racked up hundreds of thousands of followers. And yet the ragtag band of academics, journalists, and political pundits that make up the "Intellectual Dark Web" (IDW)—think of it as an Island of Misfit Ideologues—declare themselves, Trump-like, to be underdogs and outsiders. […] [I'm not convinced] they're actually so taboo these days.
As Weiss points out, this is a crowd that has built followings on new-media platforms like YouTube and Twitter rather than relying solely on legacy media, academic publishing, and other traditional routes to getting opinions heard. (There isn't much that's new about this except the media involved. Conservatives have long been building large audiences using outside-the-elite-media platforms such as talk radio, speaking tours, and blogs.) In doing so, they've amassed tens and sometimes hundreds of thousands of followers. What they are saying might not be embraced, or even endured, by legacy media institutions or certain social media precincts, but it's certainly not out of tune with or heretical to many Americans.

The bottom line is there's no denying most of these people are very popular. Yet one of the few unifying threads among them is a feeling or posture of being marginalized, too taboo for liberal millennial snowflakes and the folks who cater to them.

The basic argument – that you can't be both silenced and popular at the same time – sounds plausible. But I want to make a couple points that examine it in more detail.
May 19, 2018 • 1h 31min

Basic Income, Not Basic Jobs: Against Hijacking Utopia

Some Democrats angling for the 2020 presidential nomination have a big idea: a basic jobs guarantee, where the government promises a job to anybody who wants one. Cory Booker, Kirsten Gillibrand, Elizabeth Warren, and Bernie Sanders are all said to be considering the plan.

I've pushed for a basic income guarantee before, and basic job guarantees sure sound similar. Some thinkers have even compared the two plans, pointing out various advantages of basic jobs: it feels "fairer" to make people work for their money, maybe there's a psychological boost from being productive, you can use the labor to do useful projects. Simon Sarris has a long and excellent article on "why basic jobs might fare better than UBI [universal basic income]", saying that:

UBI's blanket-of-money approach optimizes for a certain kind of poverty, but it may create more in the long run. Basic Jobs introduce work and opportunity for communities, which may be a better welfare optimization strategy, and we could do it while keeping a targeted approach to aiding the poorest.

I am totally against this. Maybe basic jobs are better than nothing, but I have an absolute 100% revulsion at the idea of implementing basic jobs as an alternative to basic income. Before getting into the revulsion itself, I want to bring up some more practical objections:

1. Basic jobs don't help the disabled

Only about 15% of the jobless are your traditional unemployed people looking for a new job. 60% are disabled. Disability has doubled over the past twenty years and continues to increase. Experts disagree on how much of the rise in disability reflects deteriorating national health vs. people finding a way to opt out of an increasingly dysfunctional labor market, but everyone expects the trend to continue. Any program aimed at the non-working poor which focuses on the traditionally unemployed but ignores the disabled is only dealing with the tip of the iceberg.
