In AI We Trust?

Elizabeth Kelly (AISI): How will the US AI Safety Institute lead the US and globe in AI safety?

Oct 31, 2024
Elizabeth Kelly, Director of the U.S. Artificial Intelligence Safety Institute (AISI), discusses pivotal AI safety initiatives and the recent National Security Memorandum on AI. She highlights the significance of the Biden administration's Executive Order on AI, aimed at both fostering innovation and protecting consumers. Kelly shares insights on AISI's role in fostering collaboration between industry and government, its partnerships with major AI firms, and the upcoming inaugural AI safety summit. The conversation emphasizes the importance of multidisciplinary collaboration to mitigate AI risks and promote safe practices.
INSIGHT

AISI's Mission

  • The U.S. AI Safety Institute (AISI) advances AI safety across various risks, including national security and individual rights.
  • Housed within the Department of Commerce, AISI develops tests, evaluations, and guidance to accelerate safe AI innovation.
ANECDOTE

Kelly's Role in AI Policy

  • Elizabeth Kelly helped lead the Biden administration's efforts on AI, including the AI executive order.
  • She focused on promoting competition, protecting privacy, supporting workers and consumers, and engaging with allies on AI governance.
INSIGHT

National Security Memorandum on AI

  • The National Security Memorandum (NSM) on AI addresses AI's implications for national security and foreign policy.
  • It designates AISI as the U.S. government's primary point of contact with industry on AI safety.