

LinkedIn Recommender System: Predictive ML vs LLMs
Aug 12, 2025
Arpita Vats, a leading AI researcher specializing in Natural Language Processing and Recommender Systems, dives into the role of LLMs in recommendation systems. She discusses how these models can outpace traditional methods by interpreting user behavior more naturally, weighing benefits like reduced manual feature engineering against challenges such as inference latency and cost. The conversation also explores the evolving landscape of personalized recommendations, from travel recommendations to the nuances of algorithm visibility in social networking.
AI Snips
LLMs Reduce Feature Engineering Burden
- LLMs can absorb many tiny behavioral signals so you don't have to hand-engineer every feature.
- That reduces manual clustering and explicit feature construction in recommendation pipelines (see the sketch below).
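A minimal sketch of this idea, assuming a sentence-transformers encoder as a stand-in for the LLM; the event log and model name are illustrative, not details from the episode:

```python
# Sketch: an LLM-style encoder absorbs raw behavioral events directly,
# replacing hand-engineered feature columns and manual clustering.
from sentence_transformers import SentenceTransformer  # assumed stand-in encoder

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

# Tiny behavioral signals serialized as text, instead of hand-built
# columns like "clicked_3_ml_posts_last_week".
user_events = [
    "liked a post about distributed training",
    "commented on a job posting for ML engineers",
    "followed a page about recommender systems",
]

# One dense embedding summarizes the many small signals; a downstream
# ranker consumes it as a single feature vector.
user_embedding = encoder.encode(" ; ".join(user_events))
print(user_embedding.shape)  # e.g. (384,) for this encoder
```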
Mitigate LLM Latency With Distillation Or Offline Use
- Use lightweight LLMs or distill large models into student models to avoid inference latency in feeds.
- Alternatively, run LLMs offline to generate features and use fast traditional models for online ranking; a distillation sketch follows below.
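One way this could look in practice: a small student ranker learns to mimic an expensive teacher's relevance scores, so only the student runs in the latency-critical feed path. A minimal PyTorch sketch; the feature dimension, MSE objective, and teacher stand-in are assumptions, not details from the episode:

```python
import torch
import torch.nn as nn

DIM = 128  # assumed (user, item) feature dimension

# Fast student: a tiny MLP cheap enough to serve online in the feed.
student = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def teacher_score(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for expensive LLM scoring, run offline in batch."""
    return x.sum(dim=-1, keepdim=True)

for step in range(100):
    feats = torch.randn(256, DIM)        # batch of candidate features
    with torch.no_grad():
        targets = teacher_score(feats)   # teacher labels, computed offline
    loss = nn.functional.mse_loss(student(feats), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At serving time only the distilled student scores candidates, keeping
# LLM-quality signal without LLM inference latency.
```

The same offline/online split applies to the feature-generation variant: the LLM writes features to a store in batch, and the fast ranker reads them at request time.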
Eval Criteria Stay The Same
- Evaluation metrics remain user-action driven: did the user like, comment on, or otherwise engage with the recommended items?
- LLMs change model internals but not the external success criteria for recommendations, as the sketch below illustrates.
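Concretely, the success criteria can be computed from logged user actions regardless of what produced the ranking. A minimal sketch; the log format and metric names are illustrative assumptions:

```python
# Hypothetical impression log: each served item records the user actions it got.
impressions = [
    {"item": "post_1", "liked": True,  "commented": False},
    {"item": "post_2", "liked": False, "commented": False},
    {"item": "post_3", "liked": True,  "commented": True},
]

def engagement_rate(logs, action):
    """Fraction of served items that received the given user action."""
    return sum(log[action] for log in logs) / len(logs)

# The same metric applies whether an LLM or a classic model did the ranking.
print("like rate:   ", engagement_rate(impressions, "liked"))      # ~0.67
print("comment rate:", engagement_rate(impressions, "commented"))  # ~0.33
```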