The Bayesian Conspiracy

Dec 14, 2022 • 2h 34min

177 – Decision Trees and Butlerian Jihads with Matt Freeman

Matt Freeman gives a primer on how to use Bayesian decision making in normal life, via decision trees. We also discuss utilitarianism, the Guild of the Rose, and recent AI advances. We are now in the early singularity.

Guild of the Rose: Playlist of Decision Tree “lectures”
Guild workshop page 1
Guild workshop page 2
Guild workshop page 3
Reflecting on the 2022 Guild of the Rose Workshops
How to identify whether text is true or false directly from a model’s *unlabeled activations*
Zvi On the Diplomacy AI
AI superpowers – can tell race via chest X-ray; can tell sex via retina scan (87%)

0:00:23 Practical Decision Trees
1:05:46 Utilitarian & Other Ethics considerations
1:31:01 Guild of the Rose info
1:39:02 Bayesian AI Corner
2:08:18 Should we have a Butlerian Jihad?
2:31:36 Thank the Patron!

Hey look, we have a discord! What could possibly go wrong? Our Patreon page, your support is most rational, and totally effective. Also merch!

Rationality: From AI to Zombies, The Podcast
LessWrong Sequence Posts Discussed in this Episode: Nope
Next Episode’s Sequence Posts (for real this time): 0 And 1 Are Not Probabilities; Beautiful Math
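The decision-tree approach discussed in the episode boils down to assigning each branch a probability and a payoff, then picking the option with the highest expected value. A minimal sketch, with an invented job-decision scenario and made-up numbers (nothing here is from the episode itself):

```python
# Toy decision tree: each choice has branches of (probability, payoff).
# Expected value of a choice = sum over branches of P(outcome) * payoff(outcome).

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

choices = {
    "take new job": [(0.6, 30_000), (0.4, -10_000)],  # 60% it works out, 40% it doesn't
    "stay put":     [(1.0, 5_000)],                   # a certain small raise
}

evs = {name: expected_value(branches) for name, branches in choices.items()}
best = max(evs, key=evs.get)
print(evs, "->", best)
```

With these invented numbers, the risky branch still wins on expected value (0.6 × 30,000 − 0.4 × 10,000 = 14,000 vs. a certain 5,000), which is the kind of comparison the decision-tree exercises are meant to make explicit.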
Nov 30, 2022 • 1h 45min

176 – Circling with Aella

Aella comes on the show to tell us what the heck Circling is and why we should do it.

0:01:21 How Many Rationalists Are There?
0:04:40 Circling
1:09:42 LW Posts
1:41:35 Thank the Patron!

Aella’s Blog
The Patreon bit where we discuss The Murder Button
Circle Anywhere
Authentic Relating International
Julia Galef on Bayes’ Equation

Hey look, we have a discord! What could possibly go wrong? Our Patreon page, your support is most rational, and totally effective. Also merch!

Rationality: From AI to Zombies, The Podcast
LessWrong Sequence Posts Discussed in this Episode: Absolute Authority; Infinite Certainty
Next Episode’s Sequence Posts (for real this time): 0 And 1 Are Not Probabilities; Beautiful Math
Nov 16, 2022 • 1h 46min

175 – FTX + EA, and Personal Finance

David from The Mind Killer joins us. We got to talking about the FTX collapse and some of the waves it sent through the Effective Altruism community. Afterwards David helps us out with personal finance.

0:00:00 Intro
0:02:08 The Sequences Now Formatted For Zoomers
0:11:25 FTX Collapse & EA Shockwaves
0:54:20 Personal Finance
1:43:20 Thank the Patron!

The Sequences In A Zoomer-Readable Format
FTX: The $32B implosion – a good fast roundup of what the hell happened
Yudkowsky’s essay on FTX Future Fund money that’s been paid out
Yudkowsky’s essay warning against humans trying to use utilitarianism, from 14 years ago
Bunch of Twitter – The reason we have excess money to give; Robert Wiblin; a god complex; conflating utilitarianism with naive utilitarianism; the ultimate take-away
Our recent episode on Virtue Ethics
Erik Hoel criticizing EA
More details about the FTX stuff will be on the next episode of The Mind Killer
“Investment is a bet that a thing will be important in the future” – David
Nassim Taleb’s Barbell Investment Strategy

Hey look, we have a discord! What could possibly go wrong? Our Patreon page, your support is most rational, and totally effective. Also merch!

Rationality: From AI to Zombies, The Podcast
LessWrong Sequence Posts Discussed in this Episode: none
Next Episode’s Sequence Posts (for really real this time): Absolute Authority; Infinite Certainty
Nov 13, 2022 • 43min

BREAKING – FTX collapse & EA shockwaves

While recording the next episode we got to talking about the FTX collapse and some of the waves it sent through the Effective Altruism community. We decided to break it out into a separate segment so it can air while it’s still relevant.

FTX: The $32B implosion – a good fast roundup of what the hell happened
Yudkowsky’s essay on FTX Future Fund money that’s been paid out
Yudkowsky’s essay warning against humans trying to use utilitarianism, from 14 years ago
Bunch of Twitter – The reason we have excess money to give; Robert Wiblin; a god complex; conflating utilitarianism with naive utilitarianism; the ultimate take-away
Our recent episode on Virtue Ethics
Erik Hoel criticizing EA
More details about the FTX stuff will be on the next episode of The Mind Killer

Hey look, we have a discord! What could possibly go wrong? Our Patreon page, your support is most rational, and totally effective.
Nov 2, 2022 • 1h 16min

174 – Jon Stewart, The Consensus Emancipator

Wes and David from The Mind Killer show up for a special cross-over episode. We discuss How Stewart Made Tucker.

Listen to The Mind Killer

Hey look, we have a discord! What could possibly go wrong? Also merch!

Rationality: From AI to Zombies, The Podcast
LessWrong Sequence Posts Discussed in this Episode: none
Next Episode’s Sequence Posts (for real this time): Absolute Authority; Infinite Certainty
Oct 19, 2022 • 2h 4min

173 – Oh Lawd, Strong AI is Comin’

Matt Freeman returns to discuss Artificial General Intelligence timelines, and why they are short. Our primary text is: Why I think strong general AI is coming soon

Other links:
“Let’s think step by step” is all you need
NVIDIA A100 info
The Mind Killer
AI suggested 40,000 new possible chemical weapons in six hours
Yo be real GPT
Hack language models with Ignore Previous Instructions
Very Bad Wizards: Is it GPT or Dan Dennett?
Tesla Bot
“it’s obviously conscious”
Whenever anyone says Elon doesn’t deliver…
HPMoR Christmas chapter extended by AI (w/ guidance)

0:00:00 Intro/Main Topic
2:02:18 Thank the Patron!

Hey look, we have a discord! What could possibly go wrong? Also merch!

Rationality: From AI to Zombies, The Podcast
LessWrong Sequence Posts Discussed in this Episode: none
Next Episode’s Sequence Posts: Absolute Authority; Infinite Certainty
Oct 5, 2022 • 2h 8min

172 – Virtue Ethics

Kerry discusses virtue ethics with us, and how one is to live.

After Virtue, by Alasdair MacIntyre
Pareto efficiency at Wikipedia
Pareto Improvement at Investopedia (slightly less dense)
Our 23rd episode – Desirism with Alonzo Fyfe
The 12 Virtues of Rationality
I See Dead Kids

0:00:00 Intro/Main Topic
1:42:34 LW posts
2:05:53 Thank the Patron!

Hey look, we have a discord! What could possibly go wrong? Also merch!

Rationality: From AI to Zombies, The Podcast, and the other podcast
LessWrong posts Discussed in this Episode: But There’s Still A Chance, Right?; The Fallacy of Gray
Next Episode’s Sequence Posts: Absolute Authority; Infinite Certainty
Sep 21, 2022 • 1h 45min

171 – All About AGP

Tailcalled gives us the down low on a variation of the popularly discussed (if not that widely liked) transgender typology – autogynephilia (or AGP). Unfortunately, due to technical difficulties, Jace was only on for part of the episode and our software didn’t appreciate the confusion and dropped his audio entirely.

Tailcalled’s blog: https://surveyanon.wordpress.com/?utm_source=rss&utm_medium=rss

Some of Tailcalled’s better or more relevant posts:
Autogynephilia is not a natural abstraction (my attempt to explain the thing I said at the end about why we will never get a better understanding of AGP)
A dataset of common AGP/AAP fantasies (to get an idea of what AGP can be like, with the caveat that most people in the AGP debates focus on different and rarer forms of AGP)
Using instrumental variables to test the direction of causality between autogynephilia and gender dissatisfaction (some of the advanced causal inference stuff I’ve experimented with to study AGP; to an extent I now think my idea was flawed – I have other blog posts talking about the flaws – but it might be nice to include as an illustration)
The mathematical consequences of a toy model of gender transition (I sort of explained that model in the interview but it might not have been very clear, so making it formal helps)
Meta-attraction cannot account for all autogynephiles’ interest in men (the post that really marked the start of my dissatisfaction with Blanchardians)

0:00:00 Intro/Main Topic
1:26:00 LW posts
1:43:41 Thank the Patron!

Hey look, we have a discord! What could possibly go wrong? Also merch!

Rationality: From AI to Zombies, The Podcast, and the other podcast
LessWrong posts Discussed in this Episode: Rational vs. Scientific Ev-Psych; A Failed Just-So Story
Next Episode’s Sequence Posts: But There’s Still A Chance, Right?; The Fallacy of Gray
Sep 7, 2022 • 1h 59min

170 – By George, The Rent Is Too Damn High!

Eneasz drops a primer on Georgist Land Value Taxes. All increases in productivity will be captured by rising rents for as long as mankind exists, unless we find a way to prevent landlords from capturing all the benefits of rising land value. Henry George found the way to do this.

Source post: Book Review: Progress & Poverty

Follow-ups that we barely touched on, but include a lot more:
Part I – Is Land Really a Big Deal?
Part II – Can Land Value Tax be Passed on to Tenants?
Part III – Can Unimproved Land Value be Accurately Assessed Separately from Buildings?

Also in this episode:
Eliezer supports Dignity Points, a scoring system for humanity
Afghan soldiers were shocked to learn about taxes in the U.S.

0:00:40 Feedback
0:06:15 Main Topic
1:33:48 LW posts
1:57:34 Thank the Patron

Hey look, we have a discord! What could possibly go wrong? Also merch!

Rationality: From AI to Zombies, The Podcast, and the other podcast
LessWrong posts Discussed in this Episode: The American System and Misleading Labels; Stop Voting For Nincompoops
Next Episode’s Sequence Posts: Rational vs. Scientific Ev-Psych; A Failed Just-So Story
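For anyone new to Georgism, the mechanical core of a land value tax is that only the unimproved value of the land is taxed, never the buildings on it, so improving a property doesn’t raise its tax bill. A minimal sketch with invented figures (the $200k land / $300k building / 5% rate numbers are hypothetical, not from the episode):

```python
# Pure land value tax: the levy falls on unimproved land value alone.
# Buildings and other improvements are untaxed, so developing land
# doesn't increase the owner's tax bill.

def land_value_tax(land_value, improvement_value, rate):
    """improvement_value is deliberately ignored under a pure LVT."""
    return land_value * rate

# A property assessed at $500k total: $200k land + $300k building,
# taxed at a 5% annual rate on the land value only.
tax = land_value_tax(land_value=200_000, improvement_value=300_000, rate=0.05)
print(tax)
```

So a vacant lot and a lot with an apartment tower on it pay the same tax if the land under them is equally valuable, which is the incentive Georgists point to for discouraging landlords from holding valuable land idle.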
Aug 24, 2022 • 2h 13min

169 – S&E BS re AGI

Steven and Eneasz discuss the latest timeline shifts regarding the advent of superhuman general intelligence. Then in the LW posts we got greatly sidetracked by politics (stupid mind-killer!).

Links:
Two-year update on my personal AI timelines on LessWrong
Robert Wiblin on kissing your kids
Lex Fridman podcast
Jack Clark (AI policy guy) on AI policy
Scott Alexander on slowing AI progress
Guide to working in AI policy and strategy

0:00:42 Feedback
0:08:20 Main Topic
1:13:29 LW posts
2:11:40 Thank the Patron

Hey look, we have a discord! What could possibly go wrong? Also merch!

Rationality: From AI to Zombies, The Podcast, and the other podcast
LessWrong posts Discussed in this Episode: My Strange Beliefs; Posting on Politics; The Two-Party Swindle
Next Episode’s Sequence Posts: The American System and Misleading Labels; Stop Voting For Nincompoops
