The podcast discusses the need for AI rules and regulations, emphasizing the importance of protecting against potential harms while promoting innovation. It compares SCSP's governance principles to Senator Schumer's SAFE Innovation framework and explores Congress's ability to address data privacy. The episode also highlights the importance of international collaboration in establishing consistent AI rules and regulations.
Governing AI use cases and outcomes by sector allows experts in each sector to regulate AI in accordance with existing laws and regulations, such as those addressing discrimination or bias.
A flexible framework is needed to identify and regulate high-consequence AI use cases that significantly affect society, such as generative AI creating deepfakes, which pose a serious threat to democratic institutions.
Deep dives
The importance of governance in AI innovation
Governance, including both regulatory and non-regulatory mechanisms, can benefit innovation while ensuring necessary protections against potential harms. The Special Competitive Studies Project (SCSP) has developed four principles, one of which is governing AI use cases and outcomes by sector. This approach lets sector regulators with relevant expertise oversee AI in their domains; for example, existing laws and regulations on discrimination or bias still apply to AI-driven decisions in employment or lending. Another principle is empowering and modernizing existing regulators, ensuring they have the capacity and knowledge to regulate AI, whether through education, talent, or additional resources.
Focus on high consequence AI use cases
Governance should prioritize high-consequence use cases that have a significant impact on society, whether positive or negative. A flexible framework is needed to identify and regulate these use cases and to guide regulators on where to allocate their efforts. One example of a high-consequence use case is generative AI creating deepfakes, which pose a serious threat to democratic institutions, especially during elections. The framework being developed by SCSP and the Johns Hopkins Applied Physics Laboratory aims to identify and address such use cases.
Strengthening non-regulatory AI governance
Non-regulatory mechanisms, such as self-governance and voluntary standards, can also shape AI technology; examples include NIST's Cybersecurity Framework and consumer product safety standards. However, self-governance is not always effective, particularly in areas like social media. Senator Schumer's SAFE Innovation framework is considered aligned with the principles put forth by SCSP, emphasizing security, accountability, IP concerns, and democratic values. While challenges exist, there is optimism that the House and Senate can rise to the challenge and take the necessary action to regulate AI.
The Special Competitive Studies Project's Rama Elluru and Jenilee Keefe Singer join host Jeanne Meserve for a conversation on a framework for AI governance.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit scsp222.substack.com