Courtney Bowman, Global Director of Privacy and Civil Liberties at Palantir, joins to discuss emerging trends in AI regulation. Topics include the EU AI Act, federal regulatory developments in the U.S., state-level legislative initiatives, and upcoming challenges in AI legislation.
The EU AI Act classifies AI systems by risk level, creating conformity-assessment and other regulatory obligations that providers, deployers, and downstream users must navigate.
Regulatory focus has expanded beyond bias and discrimination to the risks posed by general-purpose AI models, including disinformation, cybersecurity threats, and broader societal impacts.
U.S. regulatory action at the state and federal levels aims to ensure safe, secure, and transparent AI development, though reconciling diverse regulatory traditions remains a challenge.
Deep dives
Implications of the EU AI Act and Palantir's Involvement
The enactment of the EU AI Act marks a significant development in AI regulation, affecting a wide range of stakeholders. Palantir, a key player in privacy and technology, contributed to shaping the Act through formal submissions during the drafting process. The final text reflects the complexity of EU lawmaking, including negotiations among the member states, the European Parliament, and the European Commission. Even with a two-year implementation period, the Act's classification of AI systems into different risk thresholds poses regulatory challenges for providers, deployers, and downstream users.
Challenges and Responsibilities for High-Risk AI Systems
The EU AI Act categorizes AI systems by risk level, reserving its most stringent regulatory measures for high-risk systems. Providers, deployers, and downstream users of high-risk AI face conformity assessments and strict ongoing responsibilities designed to ensure compliance and mitigate the risks of AI applications. The Act's distinction among minimal-, limited-, and high-risk applications, together with an outright ban on unacceptable-risk practices, assigns different obligations to stakeholders at each tier.
Evolution of Risk Profiles in AI Regulation
The regulatory landscape for AI has evolved to encompass issues beyond bias and discrimination, driven by emerging technologies like generative AI and large language models. Attention has shifted toward the risks of general-purpose AI models, including speculative threats such as superintelligence. Regulatory discussions now address disinformation, cybersecurity threats, and the broader societal impact of advanced AI technologies.
US Regulatory Efforts and Complexities in AI Governance
While the US lacks comprehensive federal AI legislation, regulatory action has emerged at both the state and federal levels. Initiatives such as NIST's AI Risk Management Framework and White House executive orders aim to address AI risks and set standards. The evolving U.S. regulatory landscape includes efforts to ensure safe, secure, and transparent AI development. These actions reflect a sectoral approach to regulating AI and highlight the challenge of aligning diverse regulatory traditions with rapid technological advancement.
Future Avenues in AI Regulation and Governance Challenges
As AI regulation progresses, translating broad principles into practical best practices remains a central challenge. Robust testing, evaluation, and validation processes for AI tools, particularly generative models, are essential. Concerns about model brittleness and loss of fidelity over time call for governance frameworks that sustain AI effectiveness throughout a system's life. Addressing these issues will be crucial to fostering trust, transparency, and accountability in the evolving AI landscape.
John is joined by Courtney Bowman, the Global Director of Privacy and Civil Liberties at Palantir, one of the foremost companies in the world specializing in software platforms for big data analytics. They discuss emerging trends in AI regulation. Courtney explains the AI Act recently passed by the EU Parliament, including the four levels of risk it assigns to different AI systems and the regulatory obligations imposed at each level, how the Act treats general-purpose AI systems, and how the final Act evolved in response to lobbying by emerging European companies in the AI space. They discuss whether the EU AI Act will become the global standard that international companies default to because the European market is too large to abandon. Courtney also explains recent federal regulatory developments in the U.S., including the AI Risk Management Framework put out by the National Institute of Standards and Technology (NIST); the Blueprint for an AI Bill of Rights announced by the White House, which calls for voluntary industry compliance with certain principles; and the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which requires each department of the federal government to develop its own plan for the use and deployment of AI. They also discuss the wide range of state-level AI legislative initiatives and the leading role California has played in this process. Finally, they discuss the issues legislatures will need to address next: translating principles like accountability, fairness, and transparency into concrete best practices; instituting testing, evaluation, and validation methodologies to ensure that AI systems do what they are supposed to do in a reliable and trustworthy way; and addressing concerns around maintaining AI systems as the underlying data continuously evolves until it no longer accurately represents the world the system was originally designed to represent.