
#73 - YASAMAN RAZEGHI & Prof. SAMEER SINGH - NLP benchmarks

Machine Learning Street Talk (MLST)

CHAPTER

Decoding the Limits of Language Models

This chapter explores the performance and limitations of pre-trained large language models, particularly on numerical reasoning tasks. It emphasizes how the frequency of terms in the pre-training data affects model accuracy, and the distinction between memorization and genuine reasoning. The discussion calls for greater transparency about training corpora and advocates a more nuanced understanding of these models' capabilities and how they are evaluated.
