In this episode of Neural Search Talks, we chat with Aamir Shakir from Mixed Bread AI, who shares his insights on starting a company that aims to make search smarter with AI. He details their approach to overcoming challenges in building embedding models, touching on the significance of data diversity, novel loss functions, and the future of multilingual and multimodal capabilities. We also hear about their journey, the ups and downs, and what they're excited about for the future.
Timestamps:
0:00 Introduction
0:25 How did mixedbread.ai start?
2:16 The story behind the company name and its "bakers"
4:25 What makes Berlin a great pool for AI talent
6:12 Building as a GPU-poor team
7:05 The recipe behind mxbai-embed-large-v1
9:56 The AnglE objective for embedding models
15:00 Going beyond Matryoshka with mxbai-embed-2d-large-v1
17:45 Supporting binary embeddings & quantization
19:07 Collecting large-scale data is key for robust embedding models
21:50 The importance of multilingual and multimodal models for IR
24:07 Where will mixedbread.ai be in 12 months?
26:46 Outro