

The Bayesian Conspiracy
A conversational podcast for aspiring rationalists.
Episodes

May 10, 2017 • 1h 46min
34 – Lies, All Lies!
- Slate Star Codex article – You Kant Dismiss Universalizability
- Wikipedia page on the Revelation Principle
- Eliezer’s post on LessWrong – Ends Don’t Justify Means (Among Humans)
- Another LessWrong post – Protected From Myself
- Louis CK bit on Lying (1 min 46 s long)
- Short book by Sam Harris on Lying (six-minute preview of the audiobook)
- Scott Alexander post […]

Apr 26, 2017 • 1h 27min
33 – MIRI, and EA meta-discussion
We talk with Tsvi from MIRI.
- Game-playing algorithm that pauses Tetris
- “On The Origin of Circuits”, discussing a chip hardware evolution experiment
- MIRI’s technical research agenda overview
- Alignment for Advanced Machine Learning Systems paper from MIRI
- Musk’s OpenAI
- Paul Christiano’s about page, which links to the paper Tsvi mentioned
- Logical Induction paper from MIRI
- Reason […]

Apr 12, 2017 • 2h 5min
32 – Who’s Afraid of AI?
- The Logical Fallacy of Generalizing from Fictional Evidence
- Genie Button Thought Experiment
- Wait But Why on AI, Part 1 and Part 2
- The Downfall meme we mentioned (this is Steven’s favorite version)
- Yudkowsky vs Hanson – the Great AI FOOM Debate, and the video
- Sam Harris’s AI TED talk
- Albion’s Seed – SSC
- Redditor provides an incredible explanation […]

Mar 29, 2017 • 1h 22min
31 – Digital Rights and Privacy
Who can access the external part of your brain that you carry around in your pocket? What rights do you have to it? With Chase.
- Police demand audio records from the Echo of a murder victim; Amazon displeased (more details)
- EFF defends podcasting (as a whole) from a patent troll
- Speaking of which – the EFF […]

Mar 15, 2017 • 1h 48min
30 – Of Specks and Omelettes
We discuss a famously controversial post, despite our better judgement. With Sean and Matt.
- The post: Torture vs Dust Specks
- Rationality: From AI to Zombies – The Podcast
- “Shut Up and Multiply” is actually taken from two posts: Circular Altruism and The “Intuitions” Behind “Utilitarianism”
- Three Worlds Collide (also in audio)
- The Ones Who Walk Away From […]

Mar 1, 2017 • 1h 15min
29 – Fiction and Fun with Max Harms
- Crystal Society (or read it for free, or listen to most of it as audio)
- Crystal Mentality
Quick links:
- Aubrey de Grey’s TED talk on extreme longevity
- Ending Aging – Aubrey de Grey’s book
- “The Sequences” – compiled in book form
- Max Harms on Cryonics (video)
- Micromort – the unit used to measure a one-in-a-million […]

Feb 22, 2017 • 56min
Bonus Mini-Episode: I Did Nazi That Coming
Forgive the pun. I couldn’t help myself. (Steven) In this bonus episode, Eneasz and Steven respond to some of the feedback to the Nazi Punching episode. Thank you to everyone who provided thoughtful comments. Be assured we read everything, but we only have so much time to respond on the air. The Innocence Project is […]

Feb 15, 2017 • 1h 15min
28 – Effective Altruism
Being effective vs. feeling good. EA is a big topic; this is really just an intro for the uninitiated.
- The Life You Can Save, by Peter Singer (and the website it inspired)
- Money: The Unit of Caring, by Eliezer Yudkowsky
- Nobody Is Perfect, Everything Is Commensurate, by Scott Alexander
- What Is The Greatest Good? – an EA profile […]

Feb 1, 2017 • 1h 31min
27 – On Punching Nazis
When is it OK to punch Nazis? With Sean.
- Original video of Richard Spencer being punched
- Follow-up video of the puncher being briefly confronted
- Eneasz’s two posts on the topic
- Ken White of Popehat, On Punching Nazis (he’s against it, for good legal reasons)
- A pro-punching piece a friend sent to Eneasz
- The Alternative Right […]

Jan 18, 2017 • 1h 14min
26 – Concept Networks and Hanging Nodes
- Eliezer Yudkowsky’s original post on Neural Categories and hanging nodes
- Scott Alexander on Diseased Thinking, and a longer essay, Concept Networks
- The very worthwhile reddit comment analyzing holiday gift giving
- Google’s deep learning utilizes these sorts of networks; “The Great AI Awakening” discusses this at some length, and is Eneasz’s source for visual recognition […]


