Join Shreya Shankar, a UC Berkeley researcher specializing in human-centered data management systems, as she navigates the world of large language models (LLMs). Discover her insights on the shift from traditional machine learning to LLMs and why most ML problems turn out to be data quality issues rather than algorithm issues. Shreya shares SPADE, her framework for improving AI evaluations, and emphasizes the need for human oversight in AI development. Plus, explore the future of low-code tools and the concept of 'Habsburg AI' in recursive processes.
01:15:10
ANECDOTE
AI-Generated Music
Shreya Shankar's interest in AI was sparked by an internship at Google.
She observed AI-generated music and was inspired to take more AI classes.
INSIGHT
ML Engineering Reality
Shreya's industry experience revealed that most ML work involves data engineering.
Training models was a small part of her role as an ML engineer.
ADVICE
Data Flywheels
Continuously evolve your LLM application based on production data.
Label production data, correlate it with human judgment, and use it to improve prompts.
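The data-flywheel advice above can be sketched as a simple feedback loop: sample production outputs, label them, and measure how well an automated LLM judge agrees with those human labels before trusting it to drive prompt improvements. This is a minimal toy illustration; the function and variable names here are hypothetical, not part of any tool discussed in the episode.

```python
# Minimal sketch of one turn of a "data flywheel" for an LLM app.
# All names are hypothetical illustrations, not a real API.

def flywheel_iteration(production_samples, human_labels, llm_judge):
    """Judge each production sample, then report how often the
    automated judge agrees with the human labels. Low agreement
    signals the judge prompt (or the app itself) needs revision
    before automated evals can be trusted."""
    agreements = []
    for sample, human_label in zip(production_samples, human_labels):
        judge_label = llm_judge(sample)  # e.g. "pass" / "fail"
        agreements.append(judge_label == human_label)
    return sum(agreements) / len(agreements)

# Usage with a trivial stand-in judge:
samples = ["output A", "output B", "output C"]
labels = ["pass", "fail", "pass"]
judge = lambda s: "fail" if "B" in s else "pass"
print(flywheel_iteration(samples, labels, judge))  # 1.0
```

In practice the judge would be an LLM call and the labels would come from periodic human review of sampled production traffic; the agreement rate tells you when to iterate on prompts versus when the automated evaluation can run unattended.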
Hugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya's work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook.
In this episode, we dive deep into the world of LLMs and the critical challenges of building reliable AI pipelines. We'll explore:
The fascinating journey from classic machine learning to the current LLM revolution
Why Shreya believes most ML problems are actually data management issues
The concept of "data flywheels" for LLM applications and how to implement them
The intriguing world of evaluating AI systems: who validates the validators?
Shreya's work on SPADE and EvalGen, innovative tools for synthesizing data quality assertions and aligning LLM evaluations with human preferences
The importance of human-in-the-loop processes in AI development
The future of low-code and no-code tools in the AI landscape
We'll also touch on the potential pitfalls of over-relying on LLMs, the concept of "Habsburg AI," and how to avoid disappearing up our own proverbial arseholes in the world of recursive AI processes.
Whether you're a seasoned AI practitioner, a curious data scientist, or someone interested in the human side of AI development, this conversation offers valuable insights into building more robust, reliable, and human-centered AI systems.