
Regulating AI: Innovate Responsibly

Latest episodes

Jan 7, 2025 • 34min

Balancing Innovation & Safety: The Future of AI Regulation in America with Congressman Scott Franklin

In this episode of the RegulatingAI Podcast, we’re joined by Congressman Scott Franklin from Florida’s 18th Congressional District, a member of the House AI Task Force and a strong advocate for responsible AI regulation. Drawing on his unique background in the Navy, insurance, and agriculture, Rep. Franklin provides valuable insights into Congress’s role in the ever-evolving world of AI governance.

Resources:
https://franklin.house.gov/about
https://en.wikipedia.org/wiki/Scott_Franklin_(politician)
https://x.com/repfranklin
https://www.linkedin.com/in/cscottfranklin/
https://www.congress.gov/member/c-franklin/F000472

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
Dec 23, 2024 • 4min

AI & Green Technology for Progress and Development | Roundtable Discussion Ft. Club de Madrid

Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de Madrid, where over 130 leaders from 40+ countries gathered to explore the future of international cooperation and multilateralism.
Dec 17, 2024 • 39min

The Fight for Fairness and Transparency in AI Systems with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice

Artificial Intelligence has immense potential, but it also carries risks, particularly when it comes to civil liberties. In this episode, I speak with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice at NYU Law. Together, we explore how AI can be regulated to ensure fairness, accountability and civil rights, especially in the context of national security and law enforcement.

Key Takeaways:
(01:53) AI in national security, law enforcement and immigration contexts.
(05:00) The dangers of AI in government decisions, from immigration to surveillance.
(09:09) Long-standing issues with AI, including biased training data in facial recognition.
(12:55) The complexities of regulating AI-generated media, such as deepfakes, while protecting free speech.
(17:00) The need for transparency in AI systems and the importance of scrutinizing outputs.
(20:25) How marginalized communities are disproportionately affected by AI.
(23:30) Companies developing AI must embed civil rights principles into their products.
(26:45) Creating unbiased AI systems is a challenge, but necessary to avoid harm.
(29:58) The need for a dedicated regulatory body to oversee AI, especially in national security.
(34:00) AI’s potential impact on jobs and why policymakers need to prepare for labor disruption.

Resources Mentioned:
Faiza Patel - https://www.linkedin.com/in/faiza-patel-5a042816/
Brennan Center for Justice - https://www.brennancenter.org/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
AI Bill of Rights - https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Brennan Center - Faiza Patel - https://www.brennancenter.org/experts/faiza-patel
National Security Carve-Outs Undermine AI Regulations - https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations
Senate AI Hearings Highlight Increased Need for Regulation - https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation
The Perils and Promise of AI Regulation - https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation
Advances in AI Increase Risks of Government Social Media Monitoring - https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
Dec 12, 2024 • 28min

AI and Society: Balancing Innovation, Governance, and Democracy in a Rapidly Changing World

In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, exploring the profound implications of artificial intelligence (AI) for governance, society, and global relations at the Imperial Springs International Forum 2024 in Madrid, Spain. From AI’s potential to revolutionise education and development to concerns about its effects on democracy and societal values, this conversation delves deep into the opportunities and challenges AI presents.

Resources:
https://x.com/boristadic58
https://clubmadrid.org/who/members/tadic-boris/
https://en.wikipedia.org/wiki/Boris_Tadi%C4%87
Dec 11, 2024 • 26min

Shaping Tunisia’s Future: Technology, Education, and AI Governance in the Arab World

In this episode, Tunisia’s former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe, and the Middle East, Tunisia is positioned to become a key player in the global technological revolution, particularly in artificial intelligence.

Resources:
https://www.linkedin.com/in/mehdi-jomaa-60a8333b/
https://x.com/Mehdi_Jomaa
https://www.facebook.com/M.mehdi.jomaa
https://clubmadrid.org/who/members/mehdi-jomaa/
Dec 3, 2024 • 48min

The Future of Open-Source AI and Its Global Implications with Professor S. Alex Yang, Professor of Management Science and Operations, London Business School

The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by Professor S. Alex Yang, Professor of Management Science and Operations at London Business School, to explore the complexities of regulating AI, the challenges of international collaboration, and the potential existential risks posed by AI development. With his extensive experience in AI and risk management, Professor Yang provides unique insights into the future of AI governance.

Key Takeaways:
(02:12) Professor Yang’s early AI experiences and his value chain research.
(06:57) The biggest risks from AI, including existential risk and job displacement.
(11:42) The debate on AI nationalism and the preservation of cultural heritage.
(16:28) How China’s chip-making capacity could reshape AI competition.
(21:13) Open-source versus closed-source AI models and the risks involved.
(25:58) Why monitoring monopolies in AI is crucial for innovation.
(30:44) How content creators can benefit from AI and how copyright law is evolving.
(35:29) The importance of fair use standards for AI-generated content.
(40:14) Data aggregation and its future role in AI development.
(45:00) Professor Yang’s final thoughts on the need for agile, principle-based AI regulation.

Resources Mentioned:
Professor S. Alex Yang - https://www.linkedin.com/in/songayang/
London Business School | LinkedIn - https://www.linkedin.com/school/london-business-school/
London Business School | Website - https://www.london.edu/
WorldCoin - https://worldcoin.org/
The Case for Regulating Generative AI Through Common Law - https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02
Generative AI and Copyright: A Dynamic Perspective - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
Nov 19, 2024 • 41min

Overcoming the Cultural Clash Between AI Innovation and Data Privacy with Norman Sadeh, Professor of Computer Science, Co-Founder and Co-Director, Privacy Engineering Program, Carnegie Mellon University

AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by Professor Norman Sadeh, a Computer Science Professor at Carnegie Mellon University and Co-Founder and Co-Director of its Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.

Key Takeaways:
(02:09) How Professor Sadeh’s work in AI and privacy began.
(05:30) The role of privacy engineers in AI governance.
(08:45) Why AI governance must integrate with existing company structures.
(12:10) The challenges of data ownership and consent in AI applications.
(15:20) Privacy implications of foundational models in AI.
(18:30) The limitations of current regulations like GDPR in addressing AI concerns.
(22:00) How user expectations shape the principles of AI governance.
(26:15) The growing debate around the need for specialized AI regulations.
(30:40) The role of transparency in AI governance for building trust.
(35:50) The potential impact of open-source AI models on security and privacy.

Resources Mentioned:
Professor Norman Sadeh - https://www.linkedin.com/in/normansadeh/
Carnegie Mellon University | LinkedIn - https://www.linkedin.com/school/carnegie-mellon-university/
Carnegie Mellon University | Website - https://www.cmu.edu/
EU AI Act - https://artificialintelligenceact.eu/
General Data Protection Regulation (GDPR) - https://gdpr-info.eu/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
Nov 7, 2024 • 28min

Championing Diversity, AI Skills, and Youth Empowerment: Reshaping Education and the Future of Work

In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.

Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.
Nov 7, 2024 • 18min

Democratizing AI: The Role of Governments and Ethical Insights in Shaping Policy

In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.

Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics & Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.
Nov 7, 2024 • 21min

AI Compliance Challenges: Navigating the European AI Act and Regulatory Frameworks

In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.

Our guest, Sean Musch, Founder and CEO of AI & Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.
