
AI Summer

Latest episodes

Apr 7, 2025 • 1h 2min

Charles Yang on AI and Science

In this engaging conversation, Charles Yang, a former Department of Energy staffer and the mind behind the Rough Drafts newsletter, discusses AI's transformative potential in science. He dives into how AI can revolutionize materials science and biology, emphasizing the development of self-driving labs that automate experiments. The talk also highlights the complexities of integrating AI with quantum computing and the need for robust experimental databases. Yang shares insights on the challenges of making scientific research more efficient and reproducible.
Mar 19, 2025 • 53min

James Grimmelmann on the copyright threat to AI companies

James Grimmelmann, a Cornell law professor and copyright expert, discusses the complex legal landscape of AI and copyright. He explores the fine line between fair use and infringement, referencing pivotal cases like Google Books. Grimmelmann highlights concerns about generative AI's ability to reproduce copyrighted material, emphasizing the potential impact on copyright holders. The conversation also covers the slow-moving legislative response and suggests future rulings could favor large companies negotiating licensing deals, reshaping the tech industry.
Feb 27, 2025 • 1h 1min

Andrew Lee on running an AI email startup

Andrew Lee, co-founder of Shortwave and founder of Firebase, shares insights on transforming traditional email into an AI-powered experience. He discusses how Shortwave employs large language models for advanced inbox organization and message drafting. Andrew explores the balance between open and closed models, the vital role of user feedback, and emerging AI security challenges. He also speculates on the future of email management, emphasizing user customization and productivity enhancements through innovative AI solutions.
Feb 21, 2025 • 1h 3min

Dean and Tim on Deep Research and the Paris Summit

Dean shares insights from the AI Action Summit in Paris, where Vice President Vance addressed AI regulation. The conversation dives into Europe's mindset on AI, highlighting tensions with American tech dominance and differing regulatory priorities. They also explore OpenAI's new deep research agent, which promises a revolutionary approach to policy research and dynamic investigation. The challenges of AI's cognitive limitations and the evolving roles for think tank professionals in the age of AI are also discussed, emphasizing the importance of human perspectives.
Feb 12, 2025 • 52min

Kashmir Hill on falling in love with ChatGPT

Kashmir Hill, a New York Times reporter known for her insights on technology's social impacts, dives into the intriguing world of AI companionship. She discusses how users develop emotional and even romantic connections with chatbots like ChatGPT, raising ethical questions about safety—especially for younger users. Hill explores the humor and implications of personalized AI interactions, particularly in the context of adult content. She also addresses the challenges posed by facial recognition technology and its profound effect on privacy and civil liberties.
Feb 5, 2025 • 1h 3min

Sophia Tung on riding a self-driving taxi in China

Sophia Tung, an entrepreneur and YouTuber, shares her riveting experiences with self-driving taxis in China and the U.S. She contrasts her ride in Baidu's Apollo Go with her experiences in Waymo, revealing significant issues in the former. Sophia navigates the complexities of China's autonomous vehicle landscape, discussing cultural differences, safety concerns, and the rapid yet challenging implementation of technology. The conversation also dives into the broader implications of the AI race between China and the U.S., highlighting unique advancements and persistent challenges.
Jan 29, 2025 • 57min

Dean and Tim on DeepSeek and AI progress

Dean and Tim dive into DeepSeek’s R1 release and its significance for the AI field. They analyze the implications of export controls on AI competition, particularly between the U.S. and China. The conversation highlights the innovations in hardware and the economic ramifications of AI advancements. Despite the hype around new models, skepticism remains regarding their ability to handle complex reasoning. The duo reflects on the balance between automation, productivity gains, and the potential societal impacts.
Jan 27, 2025 • 1h 19min

Nathan Labenz on the future of AI scaling

Nathan Labenz, host of the Cognitive Revolution podcast and an AI scout, joins to discuss the recent slowdown in AI scaling. He notes that while technology adoption has lagged, significant advancements still occur in model capabilities. Labenz anticipates continued rapid progress, maintaining that we're still on the steep part of the scaling curve. The conversation also highlights AI's potential to discover new scientific concepts, emphasizing the need for a deeper understanding of scaling laws and the complexities within AI organizations.
Jan 23, 2025 • 1h 6min

Lennart Heim on the AI diffusion rule

Lennart Heim, an information scientist at the RAND Corporation specializing in AI governance, delves into the Biden administration’s diffusion framework aimed at regulating advanced AI. He discusses the geopolitical implications of this framework and its potential legacy under different administrations. The conversation highlights the complexities surrounding AI export controls, particularly concerning national security and competition with China. The impact on major companies like NVIDIA and AMD, along with the ethical concerns surrounding global AI distribution, are also key topics.
Jan 20, 2025 • 58min

Sam Hammond on getting government ready for AI

Sam Hammond is a senior economist at the Foundation for American Innovation, a right-leaning tech policy think tank based in Washington, DC. Hammond is a Trump supporter who expects AI to improve rapidly in the next few years, and he believes that will have profound implications for public policy. In this interview, Hammond explains how he’d like to see the Trump administration tackle the new policy challenges he expects AI to create over the next four years.

Here are some of the key points Hammond made during the conversation:

* Rapid progress in verifiable domains: In areas with clear verifiers, like math, chemistry, or coding, AI will see rapid progress and be essentially solved in the short term. "For any kind of subdomain that you can construct a verifier for, there'll be very rapid progress."

* Slower progress on open-ended problems: Progress in open-ended areas, where verification is harder, will be more challenging, and reinforcement learning will need to be applied to improve autonomous abilities. "I think we're just scratching the surface of applying reinforcement learning techniques into these models."

* The democratization of AI: As AI capabilities become widely accessible, institutions will face unprecedented challenges. With open-source tools and AI agents in the hands of individuals, the volume and complexity of economic and social activity will grow exponentially. "When capabilities get demonstrated, we should start to brace for impact for those capabilities to be widely distributed."

* The risk of societal overload: If institutions fail to adapt, AI could overwhelm core functions such as tax collection, regulatory enforcement, and legal systems. The resulting systemic failure could undermine government effectiveness and societal stability. "Core functions of government could simply become overwhelmed by the pace of change."

* The need for deregulation: Deregulating and streamlining government processes are necessary to adapt institutions to the rapid changes brought by AI. Traditional regulatory frameworks are incompatible with the pace and scale of AI’s impact. "We need a kind of regulatory jubilee. Removing a regulation takes as much time as it does to add a regulation."

* Securing models and labs: There needs to be a deeper focus on securing AI models and increasing security at AI labs, especially as capabilities become tempting targets for other nations. "As we get closer to these kind of capabilities, they're going to be very tempting for other nation state actors to try to steal. And right now the labs are more or less wide open."

* The need for export controls and better security: To maintain a technological edge, tighter export controls and advanced monitoring systems are required to prevent adversaries from acquiring sensitive technologies and resources. Investments in technology for secure supply chain management are critical. "Anything that can deny or delay the development of China’s ecosystem is imperative."

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
