
AI Summer
Tim Lee and Dean Ball interview leading experts about the future of AI technology and policy. www.aisummer.org
Latest episodes

Jan 20, 2025 • 58min
Sam Hammond on getting government ready for AI
Sam Hammond is a senior economist at the Foundation for American Innovation, a right-leaning tech policy think tank based in Washington, DC. Hammond is a Trump supporter who expects AI to improve rapidly over the next few years, and he believes that progress will have profound implications for public policy. In this interview, Hammond explains how he'd like to see the Trump administration tackle the new policy challenges he expects AI to create over the next four years.

Here are some of the key points Hammond made during the conversation:

* Rapid progress in verifiable domains: In areas with clear verifiers, such as math, chemistry, or coding, AI will see rapid progress and be essentially solved in the short term. "For any kind of subdomain that you can construct a verifier for, there'll be very rapid progress."
* Slower progress on open-ended problems: Progress in open-ended areas, where verification is harder, will be more challenging, and reinforcement learning will need to be applied to improve models' autonomous abilities. "I think we're just scratching the surface of applying reinforcement learning techniques into these models."
* The democratization of AI: As AI capabilities become widely accessible, institutions will face unprecedented challenges. With open-source tools and AI agents in the hands of individuals, the volume and complexity of economic and social activity will grow exponentially. "When capabilities get demonstrated, we should start to brace for impact for those capabilities to be widely distributed."
* The risk of societal overload: If institutions fail to adapt, AI could overwhelm core functions such as tax collection, regulatory enforcement, and legal systems. The resulting systemic failure could undermine government effectiveness and societal stability. "Core functions of government could simply become overwhelmed by the pace of change."
* The need for deregulation: Deregulating and streamlining government processes will be necessary to adapt institutions to the rapid changes brought by AI. Traditional regulatory frameworks are incompatible with the pace and scale of AI's impact. "We need a kind of regulatory jubilee. Removing a regulation takes as much time as it does to add a regulation."
* Securing models and labs: There needs to be a deeper focus on securing AI models and increasing security at AI labs, especially as capabilities become tempting targets for other nations. "As we get closer to these kind of capabilities, they're going to be very tempting for other nation state actors to try to steal. And right now the labs are more or less wide open."
* The need for export controls and better security: To maintain a technological edge, tighter export controls and advanced monitoring systems are required to prevent adversaries from acquiring sensitive technologies and resources. Investments in technology for secure supply chain management are critical. "Anything that can deny or delay the development of China's ecosystem is imperative."

Jan 16, 2025 • 1h 13min
Ajeya Cotra on AI safety and the future of humanity
Ajeya Cotra, a Senior Program Manager at Open Philanthropy, focuses on AI safety and capabilities forecasting. She discusses the heated debate between 'doomers' and skeptics regarding AI risks. Cotra also envisions how AI personal assistants may revolutionize daily tasks and the workforce by 2027. The conversation touches on the transformative potential of AI in the 2030s, with advancements in various sectors and the philosophical implications of our digital future. Plus, they explore innovative energy concepts and their technological limits.

Jan 14, 2025 • 1h 1min
Nathan Lambert on the rise of "thinking" language models
Nathan Lambert, a research scientist and author of the AI newsletter Interconnects, dives into the fascinating world of language model evolution. He breaks down the shift from pre-training to innovative post-training techniques, emphasizing the complexities of instruction tuning and diverse data usage. Lambert discusses the advancements in reinforcement learning that enhance reasoning capabilities and the balance between scaling models and innovative techniques. He also touches on ethical considerations and the quest for artificial general intelligence amidst the growing field of AI.

Jan 9, 2025 • 1h 5min
Jon Askonas on AI policy in the Trump era
Jon Askonas, an Assistant Professor of Politics and a senior fellow, delves into the evolving intersection of technology and the Republican Party. He discusses how Trump's second term may shift AI policy priorities away from existential risks towards competition, especially concerning China. The conversation highlights the rise of a tech-oriented faction within conservatism, tensions in AI regulation, and the challenges of balancing innovation with safety. Jon also critiques the AI safety community's early missteps in influencing policy discussions post-ChatGPT.