
OpenAI GPT-3: Language Models are Few-Shot Learners

Machine Learning Street Talk (MLST)

00:00

Transformers Unpacked: Limits and Innovations

This chapter examines the trade-offs between bidirectional and autoregressive models in NLP, focusing on the transformer architecture's limitations with long input sequences. It discusses how redundant information in training data affects model training and introduces concepts like positional encodings, which shape how transformers process ordered input. The chapter also speculates on future architectural advances, particularly memory modules and improvements in explainability for greater model transparency.
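The positional encodings mentioned in the summary can be illustrated with the sinusoidal scheme from the original Transformer; this is a generic sketch for intuition, not necessarily the exact variant discussed in the episode:

```python
import math

def sinusoidal_positional_encoding(seq_len, d_model):
    """Return a seq_len x d_model matrix of sinusoidal position encodings.

    Even dimensions use sine, odd dimensions use cosine, with wavelengths
    that grow geometrically across dimensions so every position gets a
    distinct pattern.
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

# The model adds these vectors to token embeddings so that attention,
# which is otherwise order-agnostic, can distinguish token positions.
pe = sinusoidal_positional_encoding(seq_len=4, d_model=8)
```

Because the encoding is deterministic in position, it extends to sequence lengths unseen during training, though in practice long-context performance is still a limitation, as the chapter notes.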
