This podcast explores the need for regulations in the tech industry, particularly in relation to AI. It discusses the challenges of governing AI and emphasizes the importance of addressing potential harms caused by technology. The guest, Marietje Schaake, highlights the EU's AI Act as a potential model for regulation. They also discuss the need for regulations in social media, the role of 21st-century governance models, and the opportunities for new approaches in the AI industry.
Podcast summary created with Snipd AI
Quick takeaways
Regulation of AI is crucial to protect individuals from the power of tech companies and governments, ensuring public health, safety, and the common good.
Regulating AI presents unique challenges: the technology is proprietary, information about models and datasets is hard to access, and the experiences it delivers are constantly evolving and personalized.
Deep dives
The Need for Regulation and its Purpose
Regulation is essential to establish a level playing field and to protect individuals from the power of both technology companies and governments. It is not intended solely to impede companies or stifle innovation, but to safeguard public health, safety, and the common good. Regulations such as antitrust and competition laws help prevent the abuse of market power. The EU has implemented significant regulation in areas like data protection, which safeguards individuals' data rights. The Digital Services Act and Digital Markets Act specifically address content moderation, trust, disinformation, and the market-power concerns posed by tech giants.
The Challenges of Regulating AI
Regulating AI presents unique challenges. The proprietary nature of AI technology and limited access to information about its models and datasets inhibit effective regulation, and the constantly evolving, personalized experiences it offers make its applications hard to regulate with precision. Unlike regulation in established sectors such as pharmaceuticals or food, AI's complexity and rapid advancement require a new approach to assessing risks and overseeing deployment.
Steps Taken by the EU in AI Regulation
The EU has introduced laws and regulations to tackle the challenges associated with AI. The Digital Services Act and Digital Markets Act focus on content moderation responsibilities, transparency in algorithmic settings, and fair competition in the digital market. The AI Act takes a risk-based approach to mitigating the harms AI can pose. The EU also emphasizes enforcement, including sanctions significant enough to affect big tech companies. Collaboration between like-minded governments and global alignment on regulation can increase impact and prevent companies from circumventing rules through jurisdictional loopholes.
Addressing Unknown Risks and Building Responsible AI
The rapidly evolving nature of AI requires agile regulation and continuous assessment of emerging risks. Rather than attempting to predict every possible risk, regulations should empower designated experts to identify and address the impacts of new technologies on existing rights, public health, education, democracy, and more. Principles-based regulations that prioritize transparency, access to information, oversight, rights protection, and resilience can adapt to new challenges. The advancement of large language models highlights the urgent need for oversight and accountability. However, it is essential to strike a balance and ensure that regulations don't sacrifice civil rights for the sake of national security or bypass inclusive representation and public input.
When it comes to AI, what kind of regulations might we need to address this rapidly developing new class of technologies? What makes regulating AI and runaway tech in general different from regulating airplanes, pharmaceuticals, or food? And how can we ensure that issues like national security don't become a justification for sacrificing civil rights?
Answers to these questions are playing out in real time. If we wait for more AI harms to emerge before proper regulations are put in place, it may be too late.
Our guest Marietje Schaake was at the forefront of crafting tech regulations for the EU. Despite AI's complexity, she argues there is a path forward for the U.S. and other governing bodies to rein in companies that continue to release these products into the world without oversight.
Correction: Marietje said antitrust laws in the US were a century ahead of those in the EU. Competition law in the EU was enacted as part of the Treaty of Rome in 1957, almost 70 years after the US.
Tristan Harris and Aza Raskin's presentation on existing AI capabilities and the catastrophic risks they pose to a functional society, also available in podcast format (linked below)
This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them