
Yejin Choi: teaching AI common sense and morality

The Robot Brains Podcast

CHAPTER

The Curious Case of Neural Text Degeneration (2019)

We found that if you try to look for the argmax, the highest-probability sequence out of your neural language model, then you get degenerate text. So we've tried a lot of application scenarios, even including machine translation, and we could demonstrate that across the board, you can improve the performance right away. In some cases, even using NeuroLogic decoding on top of unsupervised, off-the-shelf GPT-2 can do better than a supervised model based on beam search. This is a really unexpected empirical result about how well these algorithms work.
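
To make the degeneration concrete, here is a minimal sketch (not from the episode) of the phenomenon the speaker describes, assuming the Hugging Face transformers library and the off-the-shelf gpt2 checkpoint: decoding with pure argmax (greedy) search tends to collapse into repetitive, degenerate text. The prompt string is an arbitrary illustration.

```python
# Sketch: greedy (argmax) decoding from off-the-shelf GPT-2 often degenerates
# into repetition, the failure mode discussed in the 2019 paper.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The meaning of life is"  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: at each step, take the single most likely next token.
greedy_ids = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=False,  # pure argmax at every step
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(greedy_ids[0], skip_special_tokens=True))
# Typically loops, e.g. "... to be a man. It is to be a man. It is to be ..."
# Passing do_sample=True with top_p=0.95 instead switches to nucleus sampling,
# the remedy proposed in the same 2019 paper.
```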

