
Jay Alammar
Well-known AI educator, applied NLP practitioner at co:here, and author of the popular blog post “The Illustrated Transformer.”
Top 5 podcasts with Jay Alammar
Ranked by the Snipd community

66 snips
Dec 30, 2023 • 2h 42min
NeurIPS 2023 Recap — Top Startups
In this dynamic discussion, Jonathan Frankle, Chief Scientist at MosaicML, shares insights on their $1.3 billion acquisition by Databricks. Lin Qiao, CEO of Fireworks AI, talks about optimizing PyTorch for inference. Aman Sanger from Cursor reveals innovative memory strategies for AI coding. Aravind Srinivas discusses the impressive growth of Perplexity AI, hitting 1 million installs, while Jeremy Howard emphasizes the need for accessible AI. Together, they explore the vibrant AI startup landscape showcased at NeurIPS 2023, reflecting on innovation, collaboration, and the future of technology.

42 snips
Aug 11, 2024 • 57min
Jay Alammar on LLMs, RAG, and AI Engineering
Jay Alammar, a prominent AI educator and researcher at Cohere, dives into the latest on large language models (LLMs) and retrieval augmented generation (RAG). He explores how RAG enhances data interactions, helping reduce hallucination in AI outputs. Jay also addresses the challenges of implementing AI in enterprises, emphasizing the importance of education for developers. The conversation highlights semantic search innovations and the future of AI architectures, offering insights on effective deployment strategies and the need for continuous learning in this rapidly evolving field.

41 snips
Feb 22, 2023 • 38min
Applied NLP solutions & AI education
Jay Alammar, a prominent AI educator and applied NLP practitioner at co:here, shares insights on effective NLP solutions and the role of public writing in mastering machine learning. He discusses the balance between the reliability of advanced NLP models and their accessible integration into real-world applications. The conversation also covers the exciting advancements in multimodal models that merge text and image generation. Alammar emphasizes the importance of educational resources to foster innovation in AI.

7 snips
Jun 27, 2023 • 1h 13min
Building LLM Apps & the Challenges that come with it. The What's AI Podcast Episode 16: Jay Alammar
My interview with Jay Alammar, widely known in the AI and NLP field, mainly through his excellent blog posts on Transformers and attention.
►Watch on YouTube: https://youtu.be/TO0IV9e2MMQ
►LLM University: https://docs.cohere.com/docs/llmu
►Jay's blog: http://jalammar.github.io/illustrated-transformer/
►Twitter: https://twitter.com/JayAlammar, https://twitter.com/Whats_AI
►My Newsletter (A new AI application explained weekly to your emails!): https://www.louisbouchard.ai/newsletter/
►Support me on Patreon: https://www.patreon.com/whatsai
►Join Our AI Discord: https://discord.gg/learnaitogether
How to start in AI/ML - A Complete Guide:
►https://www.louisbouchard.ai/learnai/
Become a member of the YouTube community, support my work and get a cool Discord role :
https://www.youtube.com/channel/UCUzGQrN-lyyc0BWTYoJM_Sg/join
Chapters:
0:00 Hey! Tap the Thumbs Up button and Subscribe. You'll learn a lot of cool stuff, I promise.
00:43 Introduction of Jay Alammar
04:00 Why Jay got into AI 8 years ago
08:07 Why teach after learning
16:12 What is a Transformer?
21:03 What the blocks are made of and how they work
26:27 Training steps of LLM explained in simple words
39:31 Re-rank Systems
41:47 How to know whether your problem can be solved by an LLM
45:31 Chatbot on private or proprietary data
47:10 Challenges with AI Apps
50:51 The requirement to create AI apps
56:11 Mitigating model hallucination on your side
59:05 How does ChatGPT work with any language you type in?
01:02:56 AI evolution in the next few years
01:05:38 Jay wants AI to be able to do this
01:08:10 AI apps used by Jay
01:10:10 Projects of Jay

Sep 11, 2025 • 38min
Beyond the Chatbot: What Actually Works in Enterprise AI
Jay Alammar, Director and Engineering Fellow at Cohere and co-author of "Hands-On Large Language Models," delves into enterprise AI. He discusses the challenges in understanding large language models and the adoption of GraphRAG, emphasizing the gap between vendor enthusiasm and real-world application. Alammar highlights the balance between self-directed and collaborative learning in AI, and the critical role of evaluation processes for effective AI development. He also explores the potential of smaller AI models, showcasing their efficiency in addressing specific tasks.