
ICLR 2024 — Best Papers & Talks (Benchmarks, Reasoning & Agents) — ft. Graham Neubig, Aman Sanger, Moritz Hardt

Latent Space: The AI Engineer Podcast

CHAPTER

Understanding Cardinal vs. Ordinal Benchmarks in Language Model Evaluation

This chapter explores the differences between cardinal and ordinal benchmarks in language model evaluation, focusing on Stanford's HELM benchmark. It contrasts HELM's approach with the OpenLLM leaderboard, while crediting the community's contributions to advances in benchmarking.
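The cardinal vs. ordinal distinction can be sketched numerically. In the toy example below (all model names, tasks, and scores are invented for illustration), cardinal aggregation averages raw scores, while ordinal aggregation averages per-task ranks — an approach that only loosely mirrors rank-style summaries such as HELM's mean win rate. The two can disagree when one model's lead on a single task is large but its ranking across tasks is worse:

```python
# Toy sketch (not from the episode) contrasting cardinal vs. ordinal
# aggregation of benchmark results. All scores are made up.
scores = {  # task -> {model: raw score, higher is better}
    "qa":        {"model_a": 0.95, "model_b": 0.78, "model_c": 0.60},
    "summarize": {"model_a": 0.40, "model_b": 0.55, "model_c": 0.52},
}
models = ["model_a", "model_b", "model_c"]

# Cardinal: average raw scores directly (score magnitudes matter).
cardinal = {m: sum(t[m] for t in scores.values()) / len(scores) for m in models}

# Ordinal: convert each task's scores to ranks (1 = best), then average
# the ranks (only relative order within each task matters).
def ranks(task_scores):
    ordered = sorted(task_scores, key=task_scores.get, reverse=True)
    return {m: ordered.index(m) + 1 for m in task_scores}

ordinal = {m: 0.0 for m in models}
for task_scores in scores.values():
    for m, r in ranks(task_scores).items():
        ordinal[m] += r / len(scores)

# model_a wins on the cardinal average (its big qa lead dominates),
# but model_b wins ordinally (better average rank across tasks).
```

The divergence shows why the aggregation choice matters: cardinal averages reward large margins on individual tasks, whereas ordinal summaries reward consistency of relative position.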
