Spotlight on AI: What Would It Take For This to Go Well?
Sep 12, 2023
Silicon Valley AI researchers discuss the future of AI and their plan if things go wrong. Highlights include concerns about AI in social media, the dangers of leaked language models, the increasing power and accessibility of AI, skepticism about voluntary AI commitments in the US, the urgency of global AI chip monitoring, and the importance of focusing on what it would take for AI to go well.
The podcast emphasizes the need to address the potentially harmful impact of uncontrolled AI system scaling and the dangers of leaked open source models.
The episode highlights the importance of implementing compute governance to regulate chip distribution and mitigate the risks of AI proliferation.
Deep dives
Concerning developments in the space
The podcast episode discusses some concerning developments in the field of AI. It highlights the release of The AI Dilemma presentation, which sheds light on the unsafe arms race among major companies to deploy AI systems. The episode emphasizes the need to address these concerns and prevent the potentially harmful impact of uncontrolled AI system scaling. It also touches on the dangers of leaked open source models such as LLaMA, which can be used for malicious purposes.
The need for compute governance
The podcast explores the importance of compute governance in managing the risks associated with AI development. It emphasizes the need to control and monitor the flow of advanced chips used for training powerful AI systems. The discussion highlights the limited window of opportunity for implementing such controls and the potential need for international cooperation to regulate chip distribution. The episode offers insight into the challenges and the urgency of implementing compute governance to mitigate the risks of AI proliferation.
Progress in addressing AI risks
Despite the challenges, the podcast highlights some positive developments in addressing AI risks. It mentions the voluntary commitments made by major AI companies to invest in safety research and secure practices. The episode also mentions the AI Insight Forums organized by Senator Chuck Schumer, which aim to foster learning and dialogue on AI risks among experts and policymakers. These initiatives indicate that awareness and concern about AI risks are growing, and efforts are being made to navigate the path towards a safer future.
Pathways to a safer world
The podcast recounts a dedicated three-hour workshop session that walked through a step-by-step pathway towards achieving compute governance for AI systems. The episode describes the workshop's exploration of plausible paths that could lead to a safer world. It emphasizes the importance of collective action, including legislative measures, industry cooperation, and cultural shifts in technology usage. The discussion highlights the need for comprehensive planning and the involvement of diverse stakeholders to navigate the challenges and risks associated with AI development.
Where do the top Silicon Valley AI researchers really think AI is headed? Do they have a plan if things go wrong? In this episode, Tristan Harris and Aza Raskin reflect on the last several months of highlighting AI risk, and share their insider takes on a high-level workshop run by CHT in Silicon Valley.
NOTE: Tristan refers to journalist Maria Ressa and mentions that she received 80 hate messages per hour at one point. She actually received more than 90 messages an hour.
This week will feature a series of public hearings on artificial intelligence. But all eyes will be on the closed-door gathering convened by Senate Majority Leader Chuck Schumer.
Vice President Kamala Harris met with the heads of Google, Microsoft, Anthropic, and OpenAI as the Biden administration rolled out initiatives meant to ensure that AI improves lives without putting people's rights and safety at risk.