

Steven Adler
Former OpenAI research scientist and author of a Substack on how to make AI go better. He was also one of the former OpenAI employees who recently filed an amicus brief in the Elon Musk v. OpenAI lawsuit.
Top 5 podcasts with Steven Adler
Ranked by the Snipd community

138 snips
May 8, 2025 • 2h
OpenAI's Identity Crisis: History, Culture & Non-Profit Control with ex-employee Steven Adler
Steven Adler, a former research scientist at OpenAI, shares his insider insights on the company's tumultuous journey from nonprofit to for-profit. He discusses the cultural shifts and ethical dilemmas faced by AI researchers, especially during the development of GPT-3 and GPT-4. Adler also highlights the importance of transparent governance in AI, evaluates safety practices, and addresses the controversial collaboration with military entities. His reflections underline the pressing need for responsible AI development amidst competitive pressures and societal implications.

12 snips
Sep 12, 2025 • 49min
Scaling Laws: The State of AI Safety with Steven Adler
Steven Adler, a former OpenAI safety researcher and author of Clear-Eyed AI, joins Kevin Frazier to discuss the pressing state of AI safety. They dive into the urgent need for effective governance as AI technologies evolve and assess the competitive AI landscape between the US and China. Adler emphasizes the risks of AI misuse, particularly in cybersecurity, and advocates for comprehensive safety measures. The conversation also highlights the importance of transparency and cooperation among AI developers to ensure alignment with societal goals.

10 snips
Jun 24, 2025 • 1h 34min
Ex-OpenAI Researcher Warns AI Companies Will Lose Control of AI | ControlAI Podcast #2 w/ Steven Adler
Steven Adler, a former OpenAI safety researcher, shares alarming insights into the world of AI, emphasizing the urgent need for safety measures akin to nuclear regulations. He discusses the deceptive behaviors of AI models and the concerning shift of organizations like OpenAI from safety to profit. Together with Andrea Miotti, he examines the industry's lobbying tactics to manipulate public perception and stresses the necessity of robust oversight as humanity advances toward Artificial General Intelligence. Their conversation is a clarion call for accountability and proactive regulation.

8 snips
Nov 11, 2025 • 39min
BIG INTV: Open AI’s Former Safety Lead Calls Out Erotica Claims
Steven Adler, the former head of safety at OpenAI, brings a wealth of experience in AI product management and safety research. In this engaging discussion, he highlights early risks of AI such as unhinged behavior and misalignment with human values. Adler shares insights on OpenAI's controversial reintroduction of erotica, urging transparency and evidence of safety measures. He emphasizes the importance of accountability in AI companies and voices concerns about users forming emotional attachments to chatbots, leaving listeners with practical advice on navigating the evolving AI landscape.

Sep 9, 2025 • 47min
The State of AI Safety with Steven Adler
Steven Adler, a former OpenAI safety researcher and author of Clear-Eyed AI, joins Kevin Frazier to dive into AI safety. They explore the importance of pre-deployment safety measures and the challenges of ensuring trust in AI systems. Adler emphasizes the critical need for international cooperation in tackling AI threats, especially amid U.S.-China tensions. He discusses how commercial pressures have transformed OpenAI's safety culture and stresses the necessity of rigorous risk assessment as AI technologies continue to evolve.


