Silicon Valley's Effective Altruist vs. Accelerationist Religious War
Nov 29, 2023
Crypto researcher Molly White, internet culture reporter Ryan Broderick, and AI reporter Deepa Seetharaman discuss the Effective Altruism vs. Effective Accelerationism debate in Silicon Valley and its connection to the OpenAI controversy. They explore the formation of online groups promoting different ideologies, the polarizing nature of effective altruism, the beliefs and implications of artificial general intelligence, and the emergence of religious philosophies in the industry.
Effective altruism and effective accelerationism are two contrasting ideologies that have gained prominence in Silicon Valley: EAs advocate a methodical approach to AI development that weighs existential risks, while effective accelerationists push for rapid development regardless of those risks.
While effective altruism and effective accelerationism have shaped the corporate landscape and debates over AI ethics, they are far from the only perspectives in the AI debate, which draws on a much broader range of voices and ideas.
Deep dives
The Divide Between Effective Altruists and Effective Accelerationists
The podcast episode delves into the divide between effective altruists (EAs) and effective accelerationists. EAs believe in doing as much good as possible using data-driven analysis and are concerned with existential risks, including the potential dangers of artificial intelligence (AI); they advocate a methodical approach to AI development that accounts for those risks. Effective accelerationists, by contrast, believe in developing AI as quickly as possible, arguing that the universe is not inherently biased toward self-destruction and that things will therefore work out. Both groups have gained attention and influence in Silicon Valley, but the episode emphasizes that the AI debate involves a much wider range of perspectives than these two camps.
The Origins and Manifestation of EA and Accelerationism
The origins of effective altruism and effective accelerationism can be traced back to online communities and forums. LessWrong, a niche but influential message board and blog, played a significant role in the emergence of both philosophies. Though the two ideologies differ sharply on how AI should be developed (one seeks to maximize good through data-driven analysis, the other advocates rapid development regardless of risk), they share a belief in the enormous potential risks and benefits of advanced AI systems. The episode notes that these ideologies are not the only perspectives in the AI debate, which involves a much broader range of voices and ideas.
The Impact of EA and Accelerationism in Tech Companies
Effective altruism and effective accelerationism have influenced the corporate landscape, particularly at tech companies. OpenAI, for example, has seen several of these ideologies at play within its organization: EA-aligned researchers and founders have worried about existential risks, while others have focused on practical applications and safety measures, and the resource disparity between these groups has caused tension. EA's influence also shows up in hiring, where EA-aligned candidates often come from top universities. Meanwhile, effective accelerationism has gained momentum, with proponents like Marc Andreessen advocating for rapid AI development. The episode suggests that these ideologies have shaped corporate decision-making and debates over AI ethics, while again stressing the importance of considering other perspectives.
Critiques and Concerns Surrounding EA and Accelerationism
While effective altruism and effective accelerationism have their merits, both attract criticism. Effective altruism, initially focused on doing good efficiently, has expanded its scope to prioritize long-term concerns and the existential risks posed by AI, and some question prioritizing speculative future possibilities over immediate human challenges. Accelerationism, which advocates rapid AI development, has been criticized for disregarding risk and resting on a belief that the universe inherently avoids self-destruction. The episode points out that both movements encompass varying interpretations and actions, making generalizations difficult, and raises the concern that the attention these ideologies receive overshadows other perspectives within the AI debate.
Molly White, Ryan Broderick, and Deepa Seetharaman join Big Technology Podcast to dive deep into the Effective Altruism (EA) vs. Effective Accelerationism (e/acc) debate in Silicon Valley that may have been at the heart of the OpenAI debacle. White is a crypto researcher and critic who writes Citation Needed on Substack; Broderick is an internet culture reporter who writes Garbage Day on Substack; Seetharaman is a reporter at The Wall Street Journal who covers AI. Our three guests discuss who these groups are, how they formed, how their influence played into the OpenAI coup and counter-coup, and where they go from here.
--
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.