

Haize Labs: How a 23-Year-Old is Making AI Safer and Smarter - CEO And Founder Leonard Tang
The AI revolution is here, but how do we ensure it’s safe and reliable? Meet Leonard Tang, the 23-year-old founder and CEO of Haize Labs, who’s tackling one of AI’s biggest challenges: building trust in the systems that are shaping our future.
In this episode of Thinking on Paper, Leonard explains how Haize Labs is creating a "robustness and safety layer" for AI models like ChatGPT and Claude, exposing vulnerabilities and ensuring they behave predictably in real-world scenarios.
Here’s what you’ll learn:
- How Haize Labs works with OpenAI, Anthropic, and others to test and strengthen AI systems.
- The hidden failure modes and risks that could jeopardize AI’s reliability.
- Why AI-specific codes of conduct are critical for industries like healthcare, finance, and education.
- Leonard’s vision for AI as a tool to merge cultures and align with human needs.
Key Quotes from the Show (in brief):
- "We rigorously test AI to uncover vulnerabilities before deployment."
- "AI is a technology of language, and it will empower us to merge cultures."
- "Legacy industries underestimate AI, while Silicon Valley overhypes it."
If you’re curious about how AI can be made safer, more mature, and tightly aligned with its use cases, this episode is a must-listen.
Hit play, subscribe, and join us as we explore the critical intersection of safety, innovation, and the future of AI.
--
TIMESTAMPS
(00:00) - Disruptors and Curious Minds
(01:07) - Our Sponsor: Conviction
(01:50) - Introducing Leonard Tang: AI CEO and Founder
(03:37) - The Importance of AI Safety: What’s at Stake in AI Development?
(06:21) - Using Mathematics and Modeling to Understand Human Behaviour in AI
(08:12) - Why Are Technologists So Often Musicians?
(11:06) - Language, Culture, and AI
(17:05) - Common Misconceptions About AI: What People Get Wrong
(19:20) - The Dartmouth Conference: Birth of AI and Its Lasting Impact
(19:55) - Claude and ChatGPT Pre-training: What Do The Models Go Through?
(25:20) - An Alan Watts AI Model for Enhanced Understanding
(28:33) - Claude vs ChatGPT: Comparing AI Models and Performance
(31:44) - AI Jailbreak Detection
(33:25) - How Dreamlike Images Enhance AI Safety and Trustworthiness
(38:20) - Top-Down vs Bottom-Up AI Development: Approaches to Building Safer AI
(42:55) - Protecting Artists, Intellectual Property, and Art in the Age of AI
(48:20) - Developing an AI Code of Conduct for Ethical AI Usage
(49:45) - A Message for Veteran AI Stars
(52:35) - Restructuring Education for Critical Thinking in the Age of AI
(54:16) - Book Club Live
--
Quotes from the show:
- "We need to rigorously test AI models to discover all their vulnerabilities, failure modes, and gotchas before they get deployed in production."
- "AI is a technology of language, and inevitably, it will empower us to merge cultures."
- "We’re trying to get AI to be a little more mature, a little more sophisticated, and just more reliable."
- "What we’re interested in is enforcing an AI code of conduct for specific applications, making AI systems tightly aligned with the needs of their use cases."
- "People in legacy industries are underestimating AI’s potential, while Silicon Valley is often overhyping it."
--
🔗 More:
Visit Haize Labs: https://haizelabs.com/
Visit Thinking On Paper: https://www.thinkingonpaper.xyz/