

Optimizing Retrieval Agents with Shirley Wu - Weaviate Podcast #115!
Feb 19, 2025
Shirley Wu, a PhD student at Stanford University, discusses AI agents and retrieval systems, drawing on her work on the Avatar Optimizer and the STaRK Benchmark. She describes how the Avatar Optimizer improves LLM tool usage through contrastive reasoning and iterative feedback. The discussion also covers the STaRK Benchmark's role in evaluating retrieval systems, challenges such as unifying textual and relational data and multi-vector embeddings, and the future of human-centered language models across applications.
AI Snips
Agent Tool Failure
- Shirley Wu's team built an agent for a specific task, but it failed to use tools properly.
- This motivated them to investigate AI agents further, leading to the Avatar Optimizer.
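As a rough illustration of the contrastive reasoning and iterative feedback mentioned above, here is a minimal sketch of such an optimizer loop: the agent's tool-use instructions are revised by contrasting queries it handled well against queries it failed on. The `llm()` and `run_agent()` placeholders, the batch size, and the prompt wording are assumptions for illustration, not the AvaTaR implementation.

```python
# Rough sketch of a contrastive-reasoning optimization loop (illustrative only).
# Idea: run the tool-using agent on a batch of queries, split the results into
# successes and failures, ask an LLM to contrast the two groups, and adopt the
# revised instructions it proposes. Repeat.

import random

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (not a real API)."""
    raise NotImplementedError

def run_agent(instructions: str, query: str) -> str:
    """Placeholder: run the tool-using agent with the current instructions."""
    raise NotImplementedError

def score(answer: str, reference: str) -> float:
    """Toy metric (exact match); a real setup would use task-specific evaluation."""
    return 1.0 if answer.strip() == reference.strip() else 0.0

def optimize(instructions: str, train_set: list[tuple[str, str]], steps: int = 10) -> str:
    for _ in range(steps):
        batch = random.sample(train_set, k=min(8, len(train_set)))
        results = [(q, ref, run_agent(instructions, q)) for q, ref in batch]
        good = [r for r in results if score(r[2], r[1]) == 1.0]
        bad = [r for r in results if score(r[2], r[1]) < 1.0]
        if not bad:
            continue  # nothing to contrast against in this batch
        # Contrastive reasoning: explain why the good traces worked and the bad
        # ones did not, then rewrite the tool-use instructions accordingly.
        instructions = llm(
            "Successful tool-use traces:\n"
            f"{good}\n\nFailed traces:\n{bad}\n\n"
            "Contrast the two groups and rewrite these agent instructions "
            f"to fix the failures:\n{instructions}"
        )
    return instructions
```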
Data Model Evolution
- Traditional machine learning emphasized clean, normalized data tables.
- Modern AI models benefit from interconnected data, reflecting the real world's relational nature.
Unifying Retrieval Methods
- Traditional relational retrieval excels at structured queries but struggles with semantic understanding.
- Textual retrieval with embeddings captures semantic meaning but can suffer from information loss and imprecision.
- Combining the two covers queries that mix structured constraints with semantic matching, as the sketch below illustrates.
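A minimal sketch of what such a combination could look like, assuming a toy in-memory store: a hard attribute filter handles the relational part of a query, and embedding similarity ranks the survivors. The `Record` schema, the `embed()` placeholder, and `hybrid_search()` are illustrative assumptions, not the approach discussed in the episode or any specific Weaviate/STaRK API.

```python
# Rough sketch of combining relational filtering with embedding search
# (illustrative only). A hard attribute filter handles the structured part of
# a query precisely; cosine similarity over embeddings ranks the remaining
# candidates so the semantic part is still captured.

from dataclasses import dataclass
import numpy as np

@dataclass
class Record:
    text: str
    attrs: dict           # structured fields, e.g. {"brand": "Acme", "year": 2023}
    vector: np.ndarray    # precomputed embedding of `text`

def embed(text: str) -> np.ndarray:
    """Placeholder for any text-embedding model."""
    raise NotImplementedError

def hybrid_search(records: list[Record], query: str, filters: dict, k: int = 5) -> list[Record]:
    # Relational step: exact attribute matching, no embedding imprecision.
    candidates = [r for r in records
                  if all(r.attrs.get(key) == val for key, val in filters.items())]
    # Textual step: rank the filtered candidates by semantic similarity.
    q = embed(query)
    def cosine(r: Record) -> float:
        return float(np.dot(r.vector, q) / (np.linalg.norm(r.vector) * np.linalg.norm(q)))
    return sorted(candidates, key=cosine, reverse=True)[:k]
```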