Koyena Pal, a graduate student at Northeastern University and lead author of the Future Lens paper, joins Professor David Bau and Eric Todd to unpack the mysteries of large language models. Koyena explains how the hidden states of mid-sized models encode information about tokens several positions ahead, while Eric introduces the intriguing concept of Function Vectors, compact representations that drive in-context learning. The trio discusses advances in interpretability, the significance of hidden states, and the complexities of newer AI architectures, offering fascinating insights into how these models process and predict information.