
The Retort AI Podcast
Distilling the major events and challenges in the world of artificial intelligence and machine learning, from Thomas Krendl Gilbert and Nathan Lambert. www.retortai.com
Latest episodes

Jan 26, 2024 • 57min
Tom's Story: to get through grad school, become a sperm whale
Tom, a sociology-turned-AI Ph.D. at Berkeley, shares his journey through grad school, discussing topics such as the current state of the field, striving for impact, educational backgrounds among physicists, understanding existentialism, childhood exposure to AI, reaching out in academia, the origins of AI ethics, and the evolution of the LessWrong community.

Jan 19, 2024 • 37min
Non-profits need to be businesses too
This podcast discusses the challenges of ML compute and evaluation, including AI2's transition to industry, the need for minimum viable resources, and government trust in non-profits. It examines the debate over a regulatory body for algorithms, exploring clear communication standards and evaluation tasks, as well as the evolution of AI, uncomfortable choices, and the need for new approaches to bring truth back to public discourse.

Jan 12, 2024 • 57min
How the US could lose (and win!) against China in AI, with Jordan Schneider of ChinaTalk
In this podcast, Tom and Nate are joined by Jordan Schneider of ChinaTalk. They discuss the impact of AI development in China on the US ecosystem, the competition between OpenAI and China, the influence of the free-spirited 60s on Silicon Valley tech development, and the importance of openness in AI development. They also explore the growth of the NeurIPS conference, the role of courts in shaping laws, and offer book recommendations.

Jan 5, 2024 • 47min
AI is literally the culture war, figuratively speaking
The podcast discusses the importance of efficient GPU usage in AI companies and worries about the broader culture war. It explores controversies in AI ethics, the lawsuit between the New York Times and OpenAI, and the future of language models. The hosts also touch on the resignation of the Harvard University president and the influence of communication methods on learning.

Dec 22, 2023 • 51min
What I wish someone had told me
The podcast explores the importance of optimism, personal connections, and audacious ideas. They discuss the stifled energy in the AI field and the need for civil discourse. They also touch on the potential government bailout of Twitter and anticipate positive developments in the upcoming year. The hosts recommend watching Godzilla and praise Fei-Fei Li's book on machine learning. They conclude with a poetic reflection on the industry and AGI.

Dec 15, 2023 • 49min
Everyone wants fair benchmarks, but do you even lift?
In this episode, they cover a wide range of topics including open-source AI research, the hype around new AI models, the relation between power levels in Dragon Ball Z and benchmarking, the impact of Twitter on academic culture, the future of Hugging Face, groundbreaking experiments on fluids and gases under pressure, and the significance of size in Godzilla movies.

Dec 8, 2023 • 42min
Cybernetics, Feedback, and Reinventionism in CS
In this podcast, the speakers discuss the historical context of cybernetics and its connection to computer science. They explore the impact of machine learning models on society and the challenges of implementing real-time feedback in AI experiments. The conversation also delves into the unknowns and risks in computer science ethics, particularly regarding reinforcement learning.

Nov 24, 2023 • 44min
Q* and OpenAI's Strange Loop: We Pecan't Even
The podcast discusses recent events at OpenAI, including the CEO's departure and the disbandment of the responsible AI team. They also talk about the confusion caused by a government diagram and speculate on the meaning of the AI breakthrough called Q*. The hosts emphasize the need for responsibility in AI discussions and highlight the cultural reflection of company logos.

Nov 10, 2023 • 48min
OpenAI: Developers, Hegemons, and Origins
OpenAI embraces its role as a consumer technology company in its first developer keynote. Topics include comparisons to Steve Jobs' product announcements, OpenAI's prioritizations and its API for GPT-4, differences between engineering and AI development values, concerns about DeepMind's growth, OpenAI's approach to safety and the concerns surrounding its policies, the concept of hegemony in keynotes, a comparison of Steve Jobs' ideology to OpenAI's approach, contrasting narratives of control, and career advice.

Nov 3, 2023 • 53min
Executive Orders, Safety Summits, and Open Letters, Oh My!
We discuss all the big regulation steps in AI this week, from the Biden Administration's Executive Order to the UK AI Safety Summit. Links: the Executive Order, the Mozilla Open Letter, the Slaughterbots video, and the UK AI Safety Summit graph/meme.