

AI Safety Fundamentals
BlueDot Impact
Listen to resources from the AI Safety Fundamentals courses! https://aisafetyfundamentals.com/
Episodes

May 13, 2023 • 8min
Choking off China’s Access to the Future of AI
This episode examines the Biden administration's new export controls on AI and semiconductor technologies, their impact on US-China relations, and the criticism the policy has drawn. It also covers the US government's efforts to control key technologies in the global semiconductor supply chain, the effects of the new controls on China's AI and semiconductor industries, and the widening divide between the US and China in the field of AI.

May 13, 2023 • 31min
A Tour of Emerging Cryptographic Technologies
This episode surveys the impact of emerging cryptographic technologies, including blockchain-based systems and techniques for computing on confidential data, discussing their historical significance, likely future impact, political implications, and challenges. Topics include public key cryptography, digital signatures, hash functions, timestamping, blockchains and cryptocurrencies, zero-knowledge proofs, smart contracts, homomorphic encryption, and secure multi-party computation, as well as the evolution of encryption in messaging services.

May 13, 2023 • 32min
What Does It Take to Catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring
This episode explores the importance of enforcing rules on the development of advanced machine learning systems. It discusses the risks arising from advances in ML chips, proposes a system design for verifying compliance with neural network training rules via compute monitoring, and considers measures for keeping machine learning training secure.

May 13, 2023 • 36min
Historical Case Studies of Technology Governance and International Agreements
This episode discusses historical case studies that can inform AI governance: the governance of nuclear technology and how AI differs from it, proposals for international control of nuclear energy, efforts to reduce the risk of nuclear proliferation, the impact of general-purpose technologies on military affairs, and the outcomes of the Montreal and Kyoto Protocols.

May 13, 2023 • 10min
12 Tentative Ideas for US AI Policy
A discussion on 12 US AI policy ideas to improve outcomes, covering topics such as governance, advanced AI regulation, harm tracking, and emergency shutdown mechanisms.

May 13, 2023 • 42min
International Institutions for Advanced AI
Lewis Ho, an expert in international institutions, discusses the importance of international collaborations in ensuring the benefits of advanced AI systems and managing the risks they pose. The podcast explores the need for international governance, challenges of advanced AI, standard setting, and the concept of a Frontier AI Collaborative. It also highlights the importance of providing underserved societies with advanced AI systems through international efforts.

May 13, 2023 • 1h 15min
Let’s Think About Slowing Down AI
This episode explores the debate over slowing down AI progress to prevent future risks, questions the idea of technological inevitability, and discusses the risks and rationality of continued AI development. It also considers the ethical challenges of technological advancement, the dilemma of building AI weapons, and the strategic implications and complexities of slowing AI progress for a secure future.

May 13, 2023 • 18min
What AI Companies Can Do Today to Help With the Most Important Century
The podcast delves into practical actions for major AI companies, including alignment research, security standards, and governance preparation. It discusses the importance of ethical AI practices, responsible development, and balancing caution with financial success. It also explores the role of governments in AI regulation and the challenges of navigating ethical dilemmas in the industry.

May 13, 2023 • 7min
LP Announcement by OpenAI
OpenAI LP is a new entity focused on the safe development of artificial general intelligence. The announcement explains how the LP prioritizes its mission over profits, describes its governance structure, and explores AGI's potential impact on society. The episode also covers talent acquisition, advances in supercomputing, and the importance of mitigating risks from AGI development.

May 13, 2023 • 3min
OpenAI Charter
Exploring the OpenAI Charter's refined strategy for safe and beneficial AGI deployment, built around its principles of broadly distributed benefits, long-term safety, technical leadership, and cooperative orientation.