Common Sense Reasoning in NLP with Vered Shwartz - #461

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Evaluating Pre-trained Language Models for Knowledge Capture

This chapter discusses the use of various pre-trained language models, especially GPT-2, in the research experiments. It highlights how performance differs across model sizes, what that implies for how much knowledge the models capture, and reflects on how the quality of generated sentences is evaluated.
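A minimal sketch of the kind of probing this discussion refers to, not the guest's actual setup: scoring cloze-style completions with GPT-2 checkpoints of different sizes to see how much factual or commonsense knowledge each captures. The model names, prompt, and candidate continuations are illustrative assumptions; only the Hugging Face `transformers` API calls shown are real.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def continuation_log_prob(model, tokenizer, prompt, continuation):
    """Sum of log-probabilities the model assigns to `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    score = 0.0
    # Score only the continuation tokens; each is predicted from the preceding position.
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        score += log_probs[0, pos - 1, token_id].item()
    return score

# Compare two model sizes on an illustrative knowledge probe.
for model_name in ["gpt2", "gpt2-medium"]:
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name).eval()
    prompt = "The capital of France is"
    for candidate in [" Paris", " Rome"]:
        lp = continuation_log_prob(model, tokenizer, prompt, candidate)
        print(f"{model_name}: log P({candidate!r} | prompt) = {lp:.2f}")
```

If a larger checkpoint consistently ranks the correct continuation higher, that is one (rough) signal it has captured more of the relevant knowledge; judging the quality of free-form generated sentences, as the episode notes, is a separate and harder evaluation problem.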
