
Anastasios Angelopoulos

Sixth-year PhD student at UC Berkeley. He focuses on theoretical statistics and model evaluation and leads the Chatbot Arena project.

Top 3 podcasts with Anastasios Angelopoulos

Ranked by the Snipd community
77 snips
Jun 6, 2025 • 3h 32min

Trump-Musk Fallout Recap, Circle IPO Post-Game with CEO | Dylan Patel, Jeremy Allaire, Kat Cole, Dana Settle, Anastasios Angelopoulos, Patrick Blumenthal, Blake Scholl

Dylan Patel, Chief Analyst at SemiAnalysis, breaks down the latest in AI chip technology and global supply chains. Jeremy Allaire, CEO of Circle, shares insights on the IPO's impact on digital currencies and trust in finance. Kat Cole recounts her journey from hostess to CEO, while Patrick Blumenthal gives a firsthand look at NYC Tech Week and discusses drone warfare in Ukraine. Anastasios Angelopoulos introduces LMArena, a platform that evaluates AI models through crowdsourced human preferences, promising a new era of user-centric AI experiences.
43 snips
May 30, 2025 • 1h 42min

Beyond Leaderboards: LMArena’s Mission to Make AI Reliable

Anastasios N. Angelopoulos, an AI researcher at UC Berkeley, joins LMArena cofounders Wei-Lin Chiang and Ion Stoica to delve into innovative AI evaluation methods. They discuss the transition from static benchmarks to dynamic user feedback for better model reliability, emphasizing fresh data and community engagement as essential to AI development. The conversation highlights personalized leaderboards, the challenges of real-time testing, and the importance of scaling the platform to meet diverse user needs and preferences, all while fostering an inclusive approach to AI.
29 snips
Nov 1, 2024 • 41min

In the Arena: How LMSys changed LLM Benchmarking Forever

Anastasios Angelopoulos and Wei-Lin Chiang, both PhD students at UC Berkeley, lead the Chatbot Arena—a pioneering platform for AI evaluation. They discuss the evolution of crowdsourced benchmarking and the philosophical challenges of measuring AI intelligence. Emphasizing the limitations of static benchmarks, they advocate for user-driven assessments. The duo also tackles human biases in evaluations and the significance of community engagement, showcasing innovative strategies in AI red teaming and collaboration, all aimed at refining how language models are compared.
