Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary: Against the Singularity Hypothesis (David Thorstad), published by Nicholas Kruus on March 27, 2024 on The Effective Altruism Forum.
This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad's blog, Reflective Altruism, has a three-part series on this paper.
Introduction
The effective altruism community has allocated substantial resources to catastrophic risks from AI, partly motivated by the singularity hypothesis about AI's rapid advancement. While many[1] AI experts and philosophers have defended the singularity hypothesis, Thorstad argues the case for it is surprisingly thin.
Thorstad describes the singularity hypothesis in (roughly) the following three parts:[2]
Self-Improvement: Artificial agents will become able to increase their own level of general intelligence.
Intelligence Explosion: For a sustained period, their general intelligence will grow at an accelerating rate, creating exponential or hyperbolic growth (see the sketch after this list) that causes them to quickly surpass human intelligence by orders of magnitude.
Singularity: This will produce a discontinuity in human history, after which humanity's fate - whether we live on in digital form, go extinct, or become powerless - depends largely on our interactions with artificial agents.
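For intuition about the growth shapes the Intelligence Explosion clause invokes, here is a minimal sketch (mine, not Thorstad's, with purely illustrative parameters) contrasting exponential growth, which stays finite at every finite time, with hyperbolic growth, which diverges at a finite "singularity" time:

```python
# Minimal illustrative sketch (not from the paper): exponential vs. hyperbolic growth.
import math

x0 = 1.0   # initial "intelligence" level (arbitrary units)
r = 0.5    # exponential growth rate per year (illustrative)
T = 10.0   # finite blow-up time for the hyperbolic curve, in years (illustrative)

def exponential(t: float) -> float:
    """Compounds without bound but remains finite for every finite t."""
    return x0 * math.exp(r * t)

def hyperbolic(t: float) -> float:
    """Diverges to infinity as t approaches T: a finite-time singularity."""
    return x0 / (1.0 - t / T)

for t in [0.0, 5.0, 9.0, 9.9, 9.99]:
    print(f"t={t:5.2f}: exponential={exponential(t):10.2f}  hyperbolic={hyperbolic(t):10.2f}")
```

The hyperbolic curve is what gives a literal finite-time "singularity"; the exponential curve merely compounds ever faster.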
Growth
Thorstad offers five reasons to doubt the intelligence growth rate proposed by the singularity hypothesis.
Extraordinary claims require extraordinary evidence: Proposing that exponential or hyperbolic growth will occur for a prolonged period[3] is an extraordinary claim that requires many excellent reasons to believe it's correct. Until this high burden of evidence is met, it's appropriate to place very low credence on the singularity hypothesis.
Good ideas become harder to find: Generating new ideas becomes increasingly difficult as the low-hanging fruit is picked. For example, spending on drug and agricultural research has seen rapidly diminishing returns.[4] AI will likely be no exception, as hardware improvement (e.g., Moore's law) is already slowing. Even if the rate at which research productivity diminishes is small, its effects become substantial as they compound over many cycles of self-improvement.[5]
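To see how even modest diminishing returns compound, here is a minimal toy model (my illustration, not Thorstad's, with assumed numbers): each self-improvement cycle multiplies intelligence by a gain factor, and decaying research productivity shrinks that factor every cycle.

```python
# Toy model (assumed, not from the paper): per-cycle gains shrink as ideas get harder to find.
initial_gain = 2.0  # the first cycle doubles intelligence (illustrative)
decay = 0.9         # each cycle retains 90% of the previous cycle's excess gain (illustrative)

intelligence = 1.0
gain = initial_gain
for cycle in range(1, 21):
    intelligence *= gain
    # Shrink the portion of the gain above 1.0; the multiplier drifts toward 1.0.
    gain = 1.0 + (gain - 1.0) * decay
    if cycle % 5 == 0:
        print(f"cycle {cycle:2d}: next multiplier {gain:.3f}, intelligence {intelligence:,.1f}")
```

With these numbers, total growth levels off at a finite plateau rather than exploding; the same qualitative flattening occurs whenever per-cycle productivity decays geometrically.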
Bottlenecks: No algorithm can run faster than its slowest component, so unless every component can be sped up at once, bottlenecks may arise. Even a single bottleneck would halt an intelligence explosion (see the sketch after this list), and we should expect bottlenecks to emerge because…
There is limited room for improvement in certain processes (e.g., search algorithms)
There are physical resource constraints (we shouldn't expect supply chains' output to increase a thousandfold or more very quickly)
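To make the bottleneck point concrete, here is a minimal Amdahl's-law-style sketch (my illustration, not from the paper): if even a small fraction of the overall process cannot be sped up, the total speedup is capped no matter how much the rest improves.

```python
# Minimal sketch (Amdahl's-law-style, illustrative): a fixed bottleneck caps overall speedup.
def overall_speedup(bottleneck_fraction: float, speedup_of_rest: float) -> float:
    """Normalized runtime before = 1; after = bottleneck + (rest / speedup)."""
    return 1.0 / (bottleneck_fraction + (1.0 - bottleneck_fraction) / speedup_of_rest)

# Even if everything except a 5% bottleneck is sped up a million-fold,
# the whole process becomes at most ~20x faster.
for s in [10, 100, 1_000_000]:
    print(f"rest sped up {s:>9,}x -> overall {overall_speedup(0.05, s):6.2f}x")
```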
Physical constraints: Whatever path it takes, AI improvement will eventually run into hard limits imposed by resource constraints and the laws of physics, likely slowing intelligence growth. Consider the demise of Moore's law:
Circuits' energy requirements have risen massively, driving up costs and causing overheating.[6]
Capital is drying up, as semiconductor plant prices have skyrocketed.[7]
Our best transistors are now only about ten atoms across, making manufacturing increasingly difficult and soon subject to quantum uncertainties.[8]
Sublinearity: Technological capabilities[9] have been improving rapidly, so if intelligence grew in proportion to them, continuing current trends would produce exponential intelligence growth. But intelligence grows sublinearly with these capabilities, not proportionally.
Consider almost any performance metric plausibly correlated with intelligence, such as Chess, Go, protein folding, or weather and oil reserve prediction: historically, exponential increases in computing power have yielded merely linear gains.[10] If these performa...
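As a numerical gloss on the sublinearity point, here is a minimal sketch under the assumed relationship that performance improves by a fixed amount per doubling of compute (my illustration, not the paper's data): such performance grows only logarithmically in computing power, so even exponential growth in compute delivers merely linear performance gains.

```python
# Minimal sketch (assumed relationship, not the paper's data):
# performance that gains a fixed amount per doubling of compute is logarithmic in compute.
import math

def performance(compute: float, gain_per_doubling: float = 1.0) -> float:
    """Performance units gained per doubling of compute (illustrative scale)."""
    return gain_per_doubling * math.log2(compute)

# Each thousandfold increase in compute buys roughly the same absolute gain,
# i.e. a shrinking relative improvement.
for compute in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute {compute:8.0e} -> performance {performance(compute):5.1f}")
```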