

The Theory of Anything
Bruce Nielson and Peter Johansen
A podcast that explores the unseen and surprising connections between nearly everything, with special emphasis on intelligence and the search for Artificial General Intelligence (AGI) through the lens of Karl Popper's Theory of Knowledge.
David Deutsch argued that Quantum Mechanics, Darwinian Evolution, Karl Popper's Theory of Knowledge, and Computational Theory (aka "The Four Strands") represent an early 'theory of everything' spanning every domain, be it science, philosophy, computation, religion, politics, or art. So we explore everything.
Support us on Patreon:
https://www.patreon.com/brucenielson/membership
Episodes

May 22, 2023 • 2h 2min
Episode 58: Deutsch's "Creative Blocks": A Decade Later
Back in 2012, David Deutsch wrote an article called "Creative Blocks: How Close are we to Creating Artificial Intelligence?" This article inspired Bruce to go back to school and study Artificial Intelligence and get a Master's degree in the field.
A decade later, a lot has changed in the field of AI, and the field has never seemed so exciting. But are we really any closer to the goal of true universal intelligence?
We take a look back at the article and assess it from the vantage point of what we know now, a decade later. How much did Deutsch get right and how much is on less solid ground?

May 1, 2023 • 1h 3min
Episode 57: Quantum Immortality / Quantum Torment
Does every one of us live forever in the multiverse? Is death a solvable problem? What is “quantum suicide”? Is quantum torment a concern? Does every fantastical thing we can imagine occur somewhere in the multiverse? What are “Harry Potter universes”? Are we Boltzmann brains? Bruce, Cameo, and Peter consider these questions in this week’s episode.
Image from jupiterimages on Freeimages.com

Apr 10, 2023 • 3h 1min
Episode 56: Rationality, Religion, and the Omega Point
Special guest, Lulie Tanett, asked me if she could come on my podcast and interview me about religion. Lulie and Peter ask me numerous religion-related questions such as:
How is the theology of the Church of Jesus Christ of Latter-day Saints (i.e. Mormon church) similar and different from Deutsch's Four Strands worldview?
What might the Deutsch Four Strands worldview learn from religion?
In a modern world, what (if anything) can religion still teach us?
Is religion an ally or a foe of a rational worldview?
For that matter, what is the most widely accepted rational worldview?
What about supernatural truth claims of religion? Can they be reconciled with a rational worldview?
How was the Omega Point theory (from the final chapter of The Fabric of Reality) informed by religion?
What is the Omega Point theory? Why did Deutsch abandon it (in The Beginning of Infinity)? What did he replace it with?
Is Frank Tipler (creator of the Omega Point theory) a nutter or a mad genius?

Mar 31, 2023 • 1h 37min
Episode 55: Why are Empirical Theories Special? (IQ part 3)
We continue our discussion of Dwarkesh Patel's article "Contra David Deutsch on AI" compared to Brett Hall's tweet on IQ theory. This time we concentrate on criticisms of Brett Hall's theory. Along the way, we ask the ultimate question:
Why did Karl Popper make his epistemology specifically about refuting empirical scientific theories instead of just generalizing it (like Deutsch does) to criticizing all theories and ideas?
And why is this important?
And then, we talk about how much we really like Brett's theory.

Mar 13, 2023 • 2h 10min
Episode 54: Computational and Explanatory Universality (IQ part 2)
In this episode, we continue our discussion of Dwarkesh Patel's article "Contra David Deutsch on AI" compared to Brett Hall's tweet on IQ theory. This time we concentrate on criticisms of Patel's Hardware+Scaling hypothesis. To Patel's credit, he admits that his hypothesis is problematic.
Then Peter asks Bruce why Brett Hall believes explanatory universality implies 'equal intellectual capacity'. Bruce gives a steelmanned version of Brett's theory that takes us through an explanation of what explanatory universality is and how it relates to computational universality and the Turing Principle.

Feb 17, 2023 • 1h 20min
Episode 53: Universality and IQ - Part 1
Dwarkesh Patel published an article called "Contra David Deutsch on AI". This article was actually a defense of IQ theory against the charge (often made by fans of David Deutsch) that the existence of Explanatory Universality destroys IQ theory entirely. But how accurately does Dwarkesh portray Deutsch's view? (For that matter, how accurately do fans of David Deutsch portray Deutsch's viewpoint?) And how good are Patel's criticisms of Deutsch's view?
With some help from a tweet from Brett Hall on IQ theory, we compare and contrast Patel's and Hall's viewpoints and lay out the disagreements that exist.
Brett argues that Explanatory Universality implies we are all equally intelligent (i.e. have an equal capacity to learn) and that the only difference between people is our level of interest in the knowledge that society currently happens to value. Is he correct? Or are the experiments cited by Patel wrong? If so, how?
Or to put this another way, if we did demonstrate via an experiment that some people do gain knowledge faster than others (as Patel claims), would that refute the theory of explanatory universality? Or are Brett's claims not actually implications of explanatory universality?

Jan 16, 2023 • 1h 18min
Episode 52: Is Being Dogmatic Ever a Good Thing?
In our previous episode, we asked whether Karl Popper was dogmatic. We also introduced the idea that Karl Popper wasn't convinced that dogmatism was always bad. In this episode, we further explore Karl Popper's idea that dogmatism is sometimes a good thing. We also ask difficult questions like 'How can you tell when you are being dogmatic?' and 'Is it possible to overcome your own dogmatism?'

Oct 2, 2022 • 1h 3min
Episode 51: Was Karl Popper Dogmatic?
There seems to be broad agreement, even among Karl Popper's own students, that he was a deeply dogmatic individual. In this episode we ask the question 'Was Karl Popper Dogmatic?' by reviewing a humorous article in Scientific American by John Horgan on August 22, 2018. Along the way, we discuss by what means we judge dogmatism. How do we even tell if someone is dogmatic or not? Is there a litmus test for dogmatism? If so, what is it?
Link to John Horgan's article.

Sep 11, 2022 • 1h 15min
Episode 50: The Turing Test 2.0 (aka is LaMDA Sentient?)
Blake Lemoine, the ex-Google engineer, claims LaMDA -- Google's language model -- is sentient. Is he right?
Alan Turing is perhaps most famous for his "Turing Test" which is a test of intelligence. David Deutsch has some interesting things to say about the Turing Test in "The Beginning of Infinity." Unfortunately, Deutsch's critique of the Turing Test is often misunderstood and it has led to some of his fans disparaging the Turing Test in ways that don't make sense.
The key question is why can humans so easily -- with a high degree of accuracy -- tell if they are talking to an intelligent being or not by merely having a conversation with the person? What is special about conversation that allows it to be used as a highly accurate test of general intelligence?
We also present a Turing Test 2.0 that improves upon the original Turing Test by removing the element of deception and formalizing the test more rigorously.
Along the way we answer the following questions:
Is Blake Lemoine right that LaMDA is sentient? How can we know?
Under what circumstances can a chatbot pass the original Turing Test 1.0?
Will we ever have a chatbot that can pass the Turing Test 2.0?
What can we learn from the Turing Test about intelligence?

Aug 1, 2022 • 59min
Episode 49: AGI Alignment and Safety
Is Elon Musk right that Artificial General Intelligence (AGI) research is like 'summoning the demon' and should be regulated?
In the previous two episodes, we discussed how our genes 'align' our interests with their own utilizing carrots and sticks (pleasure/pain) or attention and perception. If our genes can create a General Intelligence (i.e. Universal Explainer) alignment and safety 'program' for us, what's to stop us from doing that to future Artificial General Intelligences (AGIs) that we create?
But even if we can, should we?
"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon." --Elon Musk


