
NLP Highlights
**The podcast is currently on hiatus. For more active NLP content, check out the Holistic Intelligence Podcast linked below.**
Welcome to the NLP Highlights podcast, where we invite researchers to talk about their work in various areas of natural language processing. All views expressed belong to the hosts and guests, and do not represent their employers.
Latest episodes

Feb 29, 2024 • 42min
Are LLMs safe?
Exploring the safety of Large Language Models (LLMs), with insights on model optimization, customization challenges, quality filters, student newspaper content analysis, biases in data curation, adaptive pre-training, model merging inefficiencies, and decentralized training frameworks for enhanced performance.

Jan 8, 2024 • 23min
"Imaginative AI" with Mohamed Elhoseiny
Dr. Mohamed Elhoseiny, a luminary in computer vision, discusses topics like enabling species recognition using natural language descriptions, harnessing the power of imagination in AI, AI models in therapy for mental health, and using videos for research and imagination in autonomous driving.

Dec 28, 2023 • 49min
142 - Science Of Science, with Kyle Lo
Our first guest in this new format is Kyle Lo, the most senior lead scientist on the Semantic Scholar team at the Allen Institute for AI (AI2), who kindly agreed to share his perspective on the Science of Science (SciSci) on our podcast. SciSci is concerned with studying how people do science, and includes developing methods and tools to help people both consume and produce science. Kyle has made several critical contributions that enabled a lot of SciSci work over the past 5+ years, ranging from novel NLP methods (e.g., SciBERT, https://lnkd.in/gTP_tYiF) to open data collections (e.g., S2ORC, https://lnkd.in/g4J6tXCG) to toolkits for manipulating scientific documents (e.g., PaperMage, https://lnkd.in/gwU7k6mJ, which just received a Best Paper Award 🏆 at EMNLP 2023).
Kyle Lo's homepage: https://kyleclo.github.io/

Jun 29, 2023 • 30min
141 - Building an open source LM, with Iz Beltagy and Dirk Groeneveld
In this special episode of NLP Highlights, we discussed building and open-sourcing language models. What is the usual recipe for building large language models? What does it mean to open-source them? What new research questions can we answer by open-sourcing them? We focused in particular on the ongoing Open Language Model (OLMo) project at AI2, and invited Iz Beltagy and Dirk Groeneveld, the research and engineering leads of the OLMo project, to chat.
Blog post announcing OLMo: https://blog.allenai.org/announcing-ai2-olmo-an-open-language-model-made-by-scientists-for-scientists-ab761e4e9b76
Organizations interested in partnership can express their interest here: https://share.hsforms.com/1blFWEWJ2SsysSXFUEJsxuA3ioxm
You can find Iz at twitter.com/i_beltagy and Dirk at twitter.com/mechanicaldirk

Jun 6, 2023 • 51min
140 - Generative AI and Copyright, with Chris Callison-Burch
In this special episode, we chatted with Chris Callison-Burch about his testimony at the recent U.S. Congressional hearing on the interoperability of AI and copyright law. We started by asking Chris about the purpose and structure of the hearing. Then we talked about the ongoing discussion of how copyright law applies to content generated by AI systems, the potential risks generative AI poses to artists, and Chris' take on all of this. We end the episode with a recording of Chris' opening statement at the hearing.

Mar 24, 2023 • 45min
139 - Coherent Long Story Generation, with Kevin Yang
How can we generate coherent long stories from language models? Ensuring that the generated story has long-range consistency and conforms to a high-level plan is typically challenging. In this episode, Kevin Yang describes their system, which prompts language models to first generate an outline and then iteratively generate the story while following the outline, reranking and editing the outputs for coherence (a rough sketch of this loop follows the paper list below). We also discussed the challenges involved in evaluating long generated texts.
Kevin Yang is a PhD student at UC Berkeley.
Kevin's webpage: https://people.eecs.berkeley.edu/~yangk/
Papers discussed in this episode:
1. Re3: Generating Longer Stories With Recursive Reprompting and Revision (https://www.semanticscholar.org/paper/Re3%3A-Generating-Longer-Stories-With-Recursive-and-Yang-Peng/2aab6ca1a8dae3f3db6d248231ac3fa4e222b30a)
2. DOC: Improving Long Story Coherence With Detailed Outline Control (https://www.semanticscholar.org/paper/DOC%3A-Improving-Long-Story-Coherence-With-Detailed-Yang-Klein/ef6c768f23f86c4aa59f7e859ca6ffc1392966ca)
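To make the high-level recipe concrete, here is a hedged sketch of the outline-then-generate-and-rerank loop described above. The `generate`, `score_coherence`, and `edit_for_consistency` helpers are hypothetical stand-ins for language model calls and scoring modules; this is an illustration of the idea, not the released Re3 or DOC implementation.

```python
# Illustrative sketch of outline-guided long-story generation.
# The three callables below are hypothetical placeholders for LM calls
# and scoring/editing modules; this is not the actual Re3/DOC code.
from typing import Callable, List


def generate_long_story(premise: str,
                        generate: Callable[[str], str],
                        score_coherence: Callable[[str, str], float],
                        edit_for_consistency: Callable[[str, str], str],
                        num_candidates: int = 4) -> str:
    # 1. Prompt the language model for a high-level outline first.
    outline = generate(f"Write a numbered outline for a story about: {premise}")
    outline_items: List[str] = [line for line in outline.splitlines() if line.strip()]

    story = ""
    for item in outline_items:
        # 2. Draft several candidate continuations for the current outline item.
        prompt = (f"Premise: {premise}\nOutline item: {item}\n"
                  f"Story so far: {story}\nContinue the story:")
        candidates = [generate(prompt) for _ in range(num_candidates)]

        # 3. Rerank candidates by coherence with the story written so far.
        best = max(candidates, key=lambda c: score_coherence(story, c))

        # 4. Edit the chosen continuation for consistency before appending it.
        story += edit_for_consistency(story, best) + "\n"

    return story
```

The key design choice discussed in the episode is that planning (the outline) and drafting are separate model calls, so coherence can be encouraged both globally through the outline and locally through reranking and editing each continuation.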

Jan 20, 2023 • 48min
138 - Compositional Generalization in Neural Networks, with Najoung Kim
Compositional generalization refers to the capability of models to generalize to out-of-distribution instances by composing information obtained from the training data. In this episode, we chatted with Najoung Kim about how to explicitly evaluate specific kinds of compositional generalization in neural network models of language. Najoung described COGS, a dataset she built for this purpose, some recent results in the space, and why we should be careful about interpreting those results given the current practice of pretraining models on lots of unlabeled text.
Najoung's webpage: https://najoungkim.github.io/
Papers we discussed:
1. COGS: A Compositional Generalization Challenge Based on Semantic Interpretation (Kim et al., 2020): https://www.semanticscholar.org/paper/b20ddcbd239f3fa9acc603736ac2e4416302d074
2. Compositional Generalization Requires Compositional Parsers (Weissenhorn et al., 2022): https://www.semanticscholar.org/paper/557ebd17b7c7ac4e09bd167d7b8909b8d74d1153
3. Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models (Kim et al., 2022): https://www.semanticscholar.org/paper/8969ea3d254e149aebcfd1ffc8f46910d7cb160e
Note that we referred to the final paper by an earlier name in the discussion.

Jan 13, 2023 • 36min
137 - Nearest Neighbor Language Modeling and Machine Translation, with Urvashi Khandelwal
We invited Urvashi Khandelwal, a research scientist at Google Brain, to talk about nearest neighbor language and machine translation models. These models interpolate parametric (conditional) language models with non-parametric distributions over the nearest neighbors in a datastore built from relevant data. Not only have these models been shown to outperform the usual parametric language models, they also have important implications for memorization and generalization in language models (a toy sketch of the interpolation follows the paper list below).
Urvashi's webpage: https://urvashik.github.io
Papers discussed:
1) Generalization through memorization: Nearest Neighbor Language Models (https://www.semanticscholar.org/paper/7be8c119dbe065c52125ee7716601751f3116844)
2) Nearest Neighbor Machine Translation (https://www.semanticscholar.org/paper/20d51f8e449b59c7e140f7a7eec9ab4d4d6f80ea)
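As a rough illustration of the interpolation idea mentioned above, here is a minimal sketch of mixing a kNN distribution over retrieved next tokens with a parametric language model's distribution. The datastore contents, distance computation, and parameter values are made-up stand-ins rather than the released implementation, which builds the datastore from the trained model's hidden states and uses approximate nearest-neighbor search at a much larger scale.

```python
# Minimal sketch of kNN-LM style interpolation (illustrative only; the
# datastore, distances, and lambda below are stand-ins, not the real system).
import numpy as np


def knn_lm_probs(context_vec, p_lm, datastore_keys, datastore_values,
                 vocab_size, k=4, lam=0.25, temperature=1.0):
    """Interpolate a parametric LM distribution with a kNN distribution.

    context_vec      : representation of the current context, shape (d,)
    p_lm             : parametric next-token distribution, shape (vocab_size,)
    datastore_keys   : stored context representations, shape (N, d)
    datastore_values : next-token ids observed after each stored context, shape (N,)
    """
    # Retrieve the k nearest stored contexts by L2 distance.
    dists = np.linalg.norm(datastore_keys - context_vec, axis=1)
    nearest = np.argsort(dists)[:k]

    # Convert negative distances into weights and aggregate them per token.
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, datastore_values[nearest]):
        p_knn[tok] += w

    # Interpolate: p(y|x) = lam * p_kNN(y|x) + (1 - lam) * p_LM(y|x).
    return lam * p_knn + (1 - lam) * p_lm
```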

May 19, 2022 • 1h 2min
136 - Including Signed Languages in NLP, with Kayo Yin and Malihe Alikhani
In this episode, we talk with Kayo Yin, an incoming PhD student at Berkeley, and Malihe Alikhani, an assistant professor at the University of Pittsburgh, about opportunities for the NLP community to contribute to Sign Language Processing (SLP). We talked about the history of and misconceptions about sign languages, high-level similarities and differences between spoken and signed languages, distinct linguistic features of signed languages, representations, computational resources, SLP tasks, and suggestions for better design and implementation of SLP models.

Mar 2, 2022 • 37min
135 - PhD Application Series: After Submitting Applications
This episode is the third in our current series on PhD applications.
We talk about what the PhD application process looks like after applications are submitted. We start with a general overview of the timeline, then talk about how to approach interviews and conversations with faculty, and finish by discussing the different factors to consider in deciding between programs.
The guests for this episode are Rada Mihalcea (Professor at the University of Michigan), Aishwarya Kamath (PhD student at NYU), and Sanjay Subramanian (PhD student at UC Berkeley).
Homepages:
- Aishwarya Kamath: https://ashkamath.github.io/
- Sanjay Subramanian: https://sanjayss34.github.io/
- Rada Mihalcea: https://web.eecs.umich.edu/~mihalcea/
The hosts for this episode are Alexis Ross and Nishant Subramani.