#73 - YASAMAN RAZEGHI & Prof. SAMEER SINGH - NLP benchmarks

Decoding the Limits of Language Models

This chapter explores the performance and limitations of pre-trained large language models, particularly on numerical reasoning tasks. It emphasizes the impact of pre-training data on model accuracy and the distinction between memorization and genuine reasoning. The discussion calls for greater transparency in training corpora and advocates for a more nuanced understanding of these models' capabilities and how they are evaluated.
