Can AI Even Be Regulated?, with Sendhil Mullainathan
Feb 13, 2025
Sendhil Mullainathan, a Professor at MIT and a behavioral economics expert, shares his insights on the rapidly shifting AI landscape. He discusses the implications of Elon Musk's bid to control OpenAI and the emergence of competitors like DeepSeek AI. The conversation delves into the complexities of regulating AI, emphasizing accountability, ethical considerations, and the challenges of balancing profit with public welfare. Mullainathan also highlights the necessity for innovative governance and citizen participation in shaping AI regulations to protect societal interests.
The AI market is highly competitive, as evidenced by the emergence of DeepSeek, contradicting the belief that a few giants dominate the industry.
Regulating AI is challenging because traditional frameworks are inadequate, prompting proposals to shift liability from developers to users for greater accountability.
Deep dives
The Illusion of Monopolistic Markets in AI
The common perception that the AI industry is dominated by a few large players is misleading. In reality, the market is highly competitive, with numerous new entrants able to disrupt established companies. This competitive landscape is highlighted by the emergence of DeepSeek, a new AI startup from China that operates at far lower costs than giants like OpenAI. Such developments demonstrate that innovation and competition are alive and well, contradicting the narrative of an oligopoly controlling the future of AI.
Challenges of AI Regulation and Liability
Regulating AI presents unique challenges because traditional frameworks fail to keep up with how the technology evolves and is applied. One proposed solution is to shift liability from developers to users of AI technology, which could encourage responsible use. This approach aims to address concerns about accountability for the consequences of AI applications, especially in sensitive fields like healthcare. However, the complexities of defining liability and ensuring effective governance still pose significant hurdles.
Innovation Outpacing Regulatory Responses
The rapid advancement of AI raises concerns that regulation will lag behind innovation and its potentially disruptive impacts on the economy and society. The characterization of AI as a "double exponential" suggests it is advancing significantly faster than previous technologies, making it difficult for governance structures to keep pace. Without deliberate measures to manage this displacement, workers and communities could suffer significant consequences. Historical parallels with past technological revolutions underline the importance of carefully managing change to prevent societal upheaval.
Diverse Perspectives on Governance and AI
The complexity of AI governance is underscored by the varying opinions on how best to regulate its use while fostering innovation. Suggestions include the establishment of citizen assemblies to bring a plurality of voices into the decision-making process, reflecting the diverse views on what constitutes the public interest. Additionally, incorporating public feedback into governance frameworks could lead to more effective outcomes in regulation. This approach emphasizes the necessity of innovation in governance structures to adapt to the unpredictable nature of AI technologies.
This week, Elon Musk—amidst his other duties of gutting United States federal government agencies as head of the “Department of Government Efficiency” (DOGE)—announced a hostile bid alongside a consortium of buyers to purchase control of OpenAI for $97.4 billion. OpenAI CEO Sam Altman vehemently replied that his company is not for sale.
The artificial intelligence landscape is shifting rapidly. The week prior, American tech stocks plummeted in response to claims from Chinese company DeepSeek AI that its model had matched OpenAI's performance at a fraction of the cost. Days before that, President Donald Trump announced that OpenAI, Oracle, and SoftBank would partner on an infrastructure project to power AI in the U.S. with an initial $100 billion investment. Altman himself is trying to pull off a much-touted plan to convert the nonprofit OpenAI into a for-profit entity, a development at the heart of his spat with Musk, who co-founded the startup.
Bethany and Luigi discuss the implications of this changing landscape by reflecting on a prior Capitalisn’t conversation with Luigi’s former colleague Sendhil Mullainathan (now at MIT), who forecasted over a year ago that there would be no barriers to entry in AI. Does DeepSeek’s success prove him right? How does the U.S. government’s swift move to ban DeepSeek from government devices reflect how we should weigh national interests at the risk of hindering innovation and competition? Musk has the ear of Trump and a history of animosity with Altman over the direction of OpenAI. Does Musk’s proposed hostile takeover signal that personal interests and relationships with American leadership will determine how AI develops in the U.S. from here on out? What does regulating AI in the collective interest look like, and can we escape a future where technology is consolidated in the hands of the wealthy few when billions of dollars in capital are required for its progress?