The Utility of Interpretability — Emmanuel Amiesen

Latent Space: The AI Engineer Podcast

Navigating the Complexities of Language Models

This chapter explores the inner workings of large language models (LLMs), with a focus on their interpretability in tasks such as medical diagnosis. It discusses adversarial training, multilingual representations, and the implications of bias in language processing. The conversation also turns to multimodality, examining how different forms of data are integrated and questioning how effective current methods really are.
