

Deep Papers
Arize AI
Deep Papers is a podcast series featuring deep dives on today’s most important AI papers and research. Hosted by Arize AI founders and engineers, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.
Episodes

Feb 2, 2024 • 44min
Phi-2 Model
The podcast delves into the Phi-2 model, showcasing its superior performance compared to larger models on various benchmarks, especially in coding and math tasks. Despite its smaller size, Phi-2 outperforms Google's Gemini Nano 2 model. The discussion also covers the benefits of small language models over large ones, including trainability with less data and easier fine-tuning for specific tasks.
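As a taste of why small models are appealing, here is a minimal inference sketch using the published microsoft/phi-2 checkpoint with Hugging Face transformers; the generation settings are illustrative assumptions, not recommendations from the episode.

    # Minimal sketch: running Phi-2 locally. At 2.7B parameters it fits in
    # half precision on a single consumer GPU. Settings are illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
    model = AutoModelForCausalLM.from_pretrained(
        "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
    )

    prompt = "def fibonacci(n):"  # coding is one of Phi-2's strong suits
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))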

Dec 27, 2023 • 48min
A Deep Dive Into Generative AI's Newest Models: Gemini vs Mistral (Mixtral-8x7B) – Part I
ML Solutions Architect Dat Ngo and Product Manager Aman Khan discuss two new models: Gemini and Mixtral-8x7B. They cover the background and context of Mixtral, its performance compared to Llama and GPT-3.5, and its instruction-tuned variant. Part II will explore Gemini, developed by DeepMind and Google Research.
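For context on what makes Mixtral-8x7B distinctive, here is a hedged sketch of top-2 mixture-of-experts routing, the mechanism at the heart of its architecture; dimensions and module names are illustrative assumptions, not Mixtral's actual implementation.

    # Hedged sketch of top-2 expert routing in a sparse mixture-of-experts
    # layer. Each token is processed by only its 2 highest-scoring experts,
    # so compute per token stays far below the full parameter count.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Top2MoELayer(nn.Module):
        def __init__(self, dim: int = 512, num_experts: int = 8):
            super().__init__()
            self.router = nn.Linear(dim, num_experts)  # scores experts per token
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
                for _ in range(num_experts)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (tokens, dim). Pick each token's top-2 experts and mix their
            # outputs with softmax-normalized router weights.
            weights, idx = self.router(x).topk(2, dim=-1)
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for slot in range(2):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out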

Dec 18, 2023 • 45min
How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings
We’re thrilled to be joined by Shuaichen Chang, LLM researcher and the author of this week’s paper, to discuss his findings. Shuaichen’s research investigates the impact of prompt construction on the performance of large language models (LLMs) in the text-to-SQL task, focusing on zero-shot, single-domain, and cross-domain settings. Shuaichen and his team explore various strategies for prompt construction, evaluating the influence of database schema, content representation, and prompt length on LLMs’ effectiveness. The findings emphasize the importance of careful prompt construction, highlighting the crucial role of table relationships and content, the effectiveness of in-domain demonstration examples, and the significance of prompt length in cross-domain scenarios.

Read the blog and watch the discussion: https://arize.com/blog/how-to-prompt-llms-for-text-to-sql-paper-reading/

Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
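To make the prompt-construction variables concrete, here is a hedged sketch of a zero-shot text-to-SQL prompt that includes the schema (with table relationships) and a few rows of content; the format and the toy schema are assumptions for illustration, not the paper's templates.

    # Hedged sketch: a zero-shot text-to-SQL prompt combining schema DDL
    # (which carries table relationships via foreign keys) with sample rows.
    def build_text_to_sql_prompt(schema_ddl: str, sample_rows: str, question: str) -> str:
        return (
            "Given the database schema below, write a SQL query that answers the question.\n\n"
            f"{schema_ddl}\n\n"
            f"-- Sample rows:\n{sample_rows}\n\n"
            f"-- Question: {question}\n"
            "SELECT"
        )

    schema = (
        "CREATE TABLE singer (singer_id INT PRIMARY KEY, name TEXT, country TEXT);\n"
        "CREATE TABLE concert (concert_id INT PRIMARY KEY, "
        "singer_id INT REFERENCES singer(singer_id), year INT);"
    )
    rows = "singer: (1, 'Adele', 'UK'), (2, 'Shakira', 'Colombia')"
    print(build_text_to_sql_prompt(schema, rows, "How many concerts were held in 2014?"))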

Nov 30, 2023 • 41min
The Geometry of Truth: Emergent Linear Structure in LLM Representation of True/False Datasets
In this episode, Samuel Marks, a Postdoctoral Research Associate at Northeastern University, discusses his paper on the linear structure of true/false datasets in LLM representations. The conversation explores how language models can linearly represent truth or falsehood, introduces a new probing technique called mass-mean probing, and analyzes how truth comes to be embedded in LLM representations. It also covers the paper's limitations and directions for future research.
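To make the technique concrete, here is a hedged sketch of mass-mean probing on synthetic activations: the probe direction is simply the difference between the mean activation over true statements and the mean over false ones. The data here is synthetic, not from a real model.

    # Hedged sketch of mass-mean probing: probe = mean(true) - mean(false),
    # then classify by the sign of the projection onto that direction.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 128
    truth_dir = rng.normal(size=d)
    acts_true = rng.normal(size=(200, d)) + truth_dir   # activations on true statements
    acts_false = rng.normal(size=(200, d)) - truth_dir  # activations on false statements

    probe = acts_true.mean(axis=0) - acts_false.mean(axis=0)  # mass-mean direction

    def predict_true(activation: np.ndarray) -> bool:
        return float(activation @ probe) > 0

    accuracy = np.mean([predict_true(a) for a in acts_true] +
                       [not predict_true(a) for a in acts_false])
    print(f"probe accuracy on synthetic data: {accuracy:.2f}")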

Nov 20, 2023 • 45min
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning
In this paper read, we discuss “Towards Monosemanticity: Decomposing Language Models Into Understandable Components,” a paper from Anthropic that addresses the challenge of understanding the inner workings of neural networks, drawing parallels with the complexity of human brain function. It explores the concept of “features” (patterns of neuron activations), providing a more interpretable way to dissect neural networks. By decomposing a layer of neurons into thousands of features, this approach uncovers hidden model properties that are not evident when examining individual neurons. These features are demonstrated to be more interpretable and consistent, offering the potential to steer model behavior and improve AI safety.

Find the transcript and more here: https://arize.com/blog/decomposing-language-models-with-dictionary-learning-paper-reading/

Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
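As a rough illustration of the dictionary-learning setup, here is a hedged sketch of a sparse autoencoder that decomposes activations into an overcomplete set of features under an L1 sparsity penalty; the sizes, coefficients, and synthetic data are assumptions, not Anthropic's configuration.

    # Hedged sketch: a sparse autoencoder learns an overcomplete dictionary
    # (more features than neurons); the L1 term pushes each activation to be
    # explained by only a few features, making them easier to interpret.
    import torch
    import torch.nn as nn

    d_neurons, d_features = 512, 4096  # overcomplete dictionary

    encoder = nn.Linear(d_neurons, d_features)
    decoder = nn.Linear(d_features, d_neurons, bias=False)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    activations = torch.randn(1024, d_neurons)  # stand-in for recorded MLP activations

    for step in range(100):
        features = torch.relu(encoder(activations))  # sparse feature activations
        recon = decoder(features)                    # reconstruct the neuron basis
        loss = ((recon - activations) ** 2).mean() + 1e-3 * features.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()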

Oct 18, 2023 • 44min
RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models
We discuss RankVicuna, the first fully open-source LLM capable of performing high-quality listwise reranking in a zero-shot setting. While researchers have successfully applied LLMs such as ChatGPT to reranking in an information retrieval context, such work has mostly been built on proprietary models hidden behind opaque API endpoints. That approach yields experimental results that are neither reproducible nor deterministic, threatening the veracity of outcomes that build on such shaky foundations. RankVicuna provides access to a fully open-source LLM and associated code infrastructure capable of performing high-quality reranking.

Find the transcript and more here: https://arize.com/blog/rankvicuna-paper-reading/

Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
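To illustrate what “listwise” means here, the sketch below builds a single prompt containing all candidate passages and asks the model for an ordering; the instruction wording is an assumption in the general style of listwise rerankers, not RankVicuna's exact template.

    # Hedged sketch of a listwise reranking prompt: unlike pointwise scoring,
    # the model sees every candidate at once and emits a full ranking.
    def build_listwise_prompt(query: str, passages: list[str]) -> str:
        numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
        return (
            f"I will provide you with {len(passages)} passages, each marked "
            f"with a numerical identifier [].\n"
            f"Rank the passages by relevance to the query: {query}\n\n"
            f"{numbered}\n\n"
            "Output the ranking as identifiers, most relevant first, e.g. [2] > [1] > [3]."
        )

    print(build_listwise_prompt(
        "effects of caffeine on sleep",
        ["Caffeine blocks adenosine receptors...",
         "The history of coffee cultivation...",
         "A study of caffeine intake and sleep latency..."],
    ))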

Oct 17, 2023 • 36min
Explaining Grokking Through Circuit Efficiency
The podcast explores the concept of grokking and its relationship to network performance. Topics include circuits as modules, modular addition and generalization, balancing cross-entropy loss and weight decay in deep learning models, circuit efficiency and its role in performance, the impact of grokking on model strength, and the relationship between circuit efficiency and generalization.
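As a pointer to the mechanism under discussion, here is a minimal sketch of the loss balance the circuit-efficiency argument turns on: cross-entropy rewards confident correct logits while weight decay charges for the parameter norm used to produce them, so a more "efficient" generalizing circuit can eventually win out. The model and data are toy stand-ins, not the paper's setup.

    # Hedged sketch: training objective = cross-entropy + weight decay.
    # Weight decay penalizes parameter norm, which is what makes circuit
    # "efficiency" (logits per unit of norm) matter at all.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
    ce = nn.CrossEntropyLoss()

    x, y = torch.randn(128, 64), torch.randint(0, 10, (128,))
    for step in range(100):
        loss = ce(model(x), y)  # AdamW applies the weight-decay term itself
        opt.zero_grad()
        loss.backward()
        opt.step()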

Sep 29, 2023 • 42min
Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior
In this episode, we discuss the paper “Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior.” The episode is led by SallyAnn Delucia (ML Solutions Engineer, Arize AI) and Amber Roberts (ML Solutions Engineer, Arize AI). The research they discuss highlights that while LLMs have strong generalization capabilities, they struggle to predict and optimize communication for a desired receiver behavior. We explore whether this might be because of a lack of “behavior tokens” in LLM training corpora and how Large Content and Behavior Models (LCBMs) might help solve this issue.

Find the transcript and more here: https://arize.com/blog/large-content-and-behavior-models-paper-reading/

Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
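As a purely hypothetical illustration of the “behavior token” idea, the snippet below appends receiver-behavior annotations to a piece of content as it might appear in a training corpus; the token format is invented for illustration and is not the paper's.

    # Hypothetical illustration: augmenting training text with tokens that
    # record the receiver behavior the content produced. Format is invented.
    def add_behavior_tokens(content: str, likes: int, replays: int) -> str:
        return f"{content} <behavior> likes={likes} replays={replays} </behavior>"

    print(add_behavior_tokens("Five tips for faster model evals.", likes=1200, replays=340))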

Aug 30, 2023 • 44min
Skeleton of Thought: LLMs Can Do Parallel Decoding
In this paper reading, we explore the “Skeleton-of-Thought” (SoT) approach, aimed at reducing large language model latency while enhancing answer quality. This episode is led by Aparna Dhinakaran (Chief Product Officer, Arize AI) and SallyAnn Delucia (ML Solutions Engineer, Arize AI), with two of the paper’s authors: Xuefei Ning, Postdoctoral Researcher at Tsinghua University, and Zinan Lin, Senior Researcher at Microsoft Research. SoT guides LLMs to construct an answer skeleton before elaborating on its points in parallel, achieving speed-ups of up to 2.39x across 11 models. Don’t miss the opportunity to delve into this human-inspired optimization strategy and its implications for efficient, high-quality language generation.

Full transcript and more here: https://arize.com/blog/skeleton-of-thought-llms-can-do-parallel-decoding-paper-reading/

Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
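Here is a hedged sketch of the SoT control flow: elicit a short skeleton first, then expand each point concurrently and stitch the results. The ask_llm function is a hypothetical stand-in for whatever completion API you use, and the prompts are illustrative rather than the paper's.

    # Hedged sketch of Skeleton-of-Thought: skeleton first, then parallel
    # expansion of each point. ask_llm is a hypothetical stand-in.
    from concurrent.futures import ThreadPoolExecutor

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your LLM client here")

    def skeleton_of_thought(question: str) -> str:
        skeleton = ask_llm(
            f"Give a concise skeleton (3-5 short numbered points) for answering:\n{question}"
        )
        points = [p for p in skeleton.splitlines() if p.strip()]
        with ThreadPoolExecutor() as pool:  # expand all points concurrently
            expansions = list(pool.map(
                lambda p: ask_llm(
                    f"Question: {question}\nExpand this point in 1-2 sentences: {p}"
                ),
                points,
            ))
        return "\n".join(expansions)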

Jul 31, 2023 • 30min
Llama 2: Open Foundation and Fine-Tuned Chat Models
This episode is led by Aparna Dhinakaran (Chief Product Officer, Arize AI) and Michael Schiff (Chief Technology Officer, Arize AI), as they discuss the paper “Llama 2: Open Foundation and Fine-Tuned Chat Models.” The paper introduces Llama 2, a collection of pretrained and fine-tuned large language models ranging from 7 billion to 70 billion parameters. The fine-tuned variant, Llama 2-Chat, is designed for dialogue use cases and shows strong performance on various benchmarks. In human evaluations for helpfulness and safety, Llama 2-Chat emerges as a promising alternative to closed-source models. We dig into the approach to fine-tuning and safety improvements that aims to foster responsible development in this rapidly evolving field.

Full transcript and more here: https://arize.com/blog/llama-2-open-foundation-and-fine-tuned-chat-models-paper-reading/

Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
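For readers who want to try Llama 2-Chat directly, here is a minimal sketch using Hugging Face transformers; the meta-llama/Llama-2-7b-chat-hf checkpoint is real but gated behind Meta's license, and the generation settings are illustrative.

    # Minimal sketch: chatting with Llama 2-Chat via transformers. The
    # checkpoint requires accepting Meta's license on the Hugging Face Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Explain RLHF in two sentences."}]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))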