
Kevin Wang

Chief Product Officer at Braze - scaled from Day 0 to public markets as employee #5

Top 3 podcasts with Kevin Wang

Ranked by the Snipd community
Oct 18, 2023 • 50min

#22 - Kevin Wang (Chief Product Officer, Braze) - Scaling from Employee 5 to Public Company Exec, 0 to 1 Anecdotes and Advice for Early Stage Startups, Product Advice from early to scale

Kevin Wang, Chief Product Officer at Braze, discusses his journey from employee #5 to public company executive. He shares anecdotes and advice for early-stage startups, including insights on building products for enterprises, acquiring customers, and hiring. Wang also covers Braze's end-to-end platform for enterprise customers, the importance of customer input in product development, the value of removing features, and how strong customer relationships shaped the product's distinctive capabilities.
Sep 19, 2024 • 5min

RLC 2024 - Posters and Hallways 4

David Abel from DeepMind dives into the 'Three Dogmas of Reinforcement Learning,' offering fresh insights on foundational principles. Kevin Wang from Brown discusses variable-depth search methods for Monte Carlo Tree Search that improve efficiency. Ashwin Kumar from Washington University addresses fairness in resource allocation, highlighting its ethical implications. Finally, Prabhat Nagarajan from UAlberta delves into value overestimation, revealing its impact on decision-making in RL. The conversation touches on pivotal advancements and open challenges in the field.
Apr 1, 2024 • 25min

Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 Small

Kevin Wang, a researcher in mechanistic interpretability, discusses reverse-engineering how GPT-2 Small performs indirect object identification. The episode explores the circuit of 26 attention heads grouped into 7 classes, the reliability of such explanations, and the feasibility of understanding large ML models. It delves into attention head behaviors, model architecture, and mathematical criteria for evaluating mechanistic explanations of language models.