Google's Kent Walker discusses the need for transparency and the legal risks of AI in this entertaining podcast. They explore how regulation from Washington could harm AI innovation and the importance of striking a balance between innovation and regulation.
The need for guardrails on AI use in critical infrastructure and rules to detect and warn about deceptive deepfakes.
Google's focus on watermarking and provenance technologies to identify AI-generated content and the importance of a cautious approach to AI regulation.
Deep dives
AI Priorities and Guardrails
Senate Commerce Chair Maria Cantwell emphasizes the need for guardrails on AI use in critical infrastructure, such as banks, hospitals, and transportation systems. She also advocates for rules to detect and warn people about deceptive deepfakes and for an AI workforce training initiative.
Google's Approach to AI Transparency
Kent Walker, President of Global Affairs and Chief Legal Officer at Google, discusses how Google is working on watermarking and provenance technologies to identify AI-generated content on platforms like Google Search and YouTube. He highlights the use of AI to improve content quality and detect misinformation on YouTube, while acknowledging the need to get better at identifying and blocking deceptive or disinformation-based AI-generated content.
Regulation and Collaboration
Walker emphasizes the importance of a cautious approach to AI regulation, with a focus on public-private partnership. He points to the potential for collaboration with Europe through the Transatlantic Trade and Technology Council. Walker also calls for case-by-case regulation, along with academic collaborations and workforce training initiatives, to maximize benefits and minimize risks in the AI space.
Google’s Kent Walker was among the tech executives who signed last month’s White House AI safety pledge. On POLITICO Tech, he tells Steven Overly why he wants to see it become a global model, and why some of AI’s legal risks are still unclear.