The chapter explores how benchmarking paradigms in AI research have evolved with the introduction of large language and multimodal models. It discusses the creation of multitask benchmarks such as SuperGLUE, MMLU, BIG-bench, and HELM, which aim to evaluate new models more comprehensively than single-task benchmarks. It also examines the challenges and benefits of dynamic benchmarks and argues that sound scientific foundations in benchmarking practices are essential for driving scientific progress in AI research.
