
OpenAI GPT-3: Language Models are Few-Shot Learners

Machine Learning Street Talk (MLST)

CHAPTER

Transformers Unpacked: Limits and Innovations

This chapter examines the trade-offs between bidirectional and auto-regressive language models, focusing on the transformer architecture's limitations with long input sequences. It discusses how redundant information affects model training and introduces concepts such as positional encodings, which determine how transformers represent token order. The chapter closes by speculating on future architectural advances, particularly memory modules and explainability techniques for greater model transparency.
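As background on the positional encodings mentioned above (this is standard material from the original Transformer paper, not a transcript of the episode), a minimal NumPy sketch of the sinusoidal scheme looks like this; the function name and dimensions are illustrative:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings as in "Attention Is All You Need".

    Each position gets a d_model-dimensional vector; even indices use sine,
    odd indices use cosine, with geometrically spaced frequencies.
    """
    positions = np.arange(seq_len)[:, None]       # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]      # shape (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                  # even dimensions
    pe[:, 1::2] = np.cos(angles)                  # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(seq_len=128, d_model=64)
print(pe.shape)  # (128, 64)
```

Because the encodings are added to token embeddings rather than learned per position, the same formula extends to sequence lengths not seen during training, which is one reason the episode's discussion of long-input limitations touches on how position is represented.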

