How to make AI safe, according to the tech giants, with Rebecca Finlay, CEO of PAI
Oct 30, 2023
Rebecca Finlay, CEO of PAI, discusses the guidelines developed by the Partnership on AI to address the risks associated with advanced AI systems. Topics include societal risks, PAI's synthetic media framework, non-regulatory levers for the responsible use of technology, public input into AI guidance, and the idea of a moratorium on advanced AI development.
The Partnership on AI aims to address the risks and societal impact of artificial intelligence by bringing together tech giants, academia, the media, and civil society.
PAI's guidelines for AI development focus on understanding and addressing both current risks, such as bias and lack of transparency, and future risks, such as superintelligence and technological unemployment.
Deep dives
The Partnership on AI and its guidelines
The Partnership on AI (PAI) is an organization that brings together tech giants, academia, the media, and civil society to address the risks and societal impact of artificial intelligence (AI). PAI was launched in 2016, and by 2019 more than a hundred organizations, including Amazon, Facebook, Google, Microsoft, IBM, Apple, and Baidu, had joined. Our guest, Rebecca Finlay, discusses how PAI developed a set of guidelines for companies building advanced AI systems. These guidelines focus on understanding and addressing both current and future risks associated with AI, with the goal of establishing them as the basis for global self-regulation of the AI industry.
The spectrum of AI risks
Rebecca Finlay highlights the importance of understanding the different perspectives and voices surrounding AI risks. PAI aims to define safety in the context of societal well-being by considering both the current and the potential future risks associated with large AI models. These include short-term risks such as bias, lack of transparency, and malicious use, as well as long-term risks like superintelligence and technological unemployment. PAI's guidelines provide a taxonomy of risks, emphasizing the need for third-party oversight, responsible practices, and transparency about the data sets used to train AI systems.
Non-regulatory levers and future direction
Rebecca Finlay discusses various non-regulatory levers that can incentivize responsible AI practices, including government procurement systems with responsibility codes and influence exerted through dialogues and summits, alongside legislative instruments such as GDPR. She acknowledges the need for industry cooperation, public consultation, and continuous iteration and adaptation of the guidelines. While the question of transparency about the data sets behind models from companies like OpenAI and Google DeepMind is not fully resolved, PAI encourages greater transparency and third-party inspection to uncover potential risks and ensure responsible AI development.
The Partnership on AI was launched back in September 2016, during an earlier flurry of interest in AI, as a forum for the tech giants to meet leaders from academia, the media, and what used to be called pressure groups and are now called civil society. By 2019 more than 100 of those organisations had joined.
The founding tech giants were Amazon, Facebook, Google, DeepMind, Microsoft, and IBM. Apple joined a year later, and Baidu followed in 2018.
Our guest in this episode is Rebecca Finlay, who joined the PAI board in early 2020 and was appointed CEO in October 2021. Rebecca is a Canadian who started her career in banking, and then led marketing and policy development groups in a number of Canadian healthcare and scientific research organisations.
In the run-up to the Bletchley Park Global Summit on AI, the Partnership on AI has launched a set of guidelines to help the companies that are developing advanced AI systems and making them available to you and me. Rebecca will be addressing the delegates at Bletchley, and no doubt hoping that the summit will establish the PAI guidelines as the basis for global self-regulation of the AI industry.