
How AI Is Built
Real engineers. Real deployments. Zero hype. We interview the top engineers who actually put AI in production. Learn what the best engineers have figured out through years of experience. Hosted by Nicolay Gerold, CEO of Aisbach and CTO at Proxdeal and Multiply Content.
Latest episodes

Nov 15, 2024 • 54min
#031 BM25 As The Workhorse Of Search; Vectors Are Its Visionary Cousin
David Tippett, a search engineer at GitHub with expertise in BM25 and OpenSearch, delves into the efficiency of BM25 versus vector search for information retrieval. He explains how BM25 refines search by factoring in user expectations and adapting to diverse queries. The conversation highlights the challenges of vector search at scale, particularly with GitHub's massive dataset. David emphasizes that understanding user intent is crucial for optimizing search results, as it surpasses merely chasing cutting-edge technology.
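As background for the episode, the kind of scoring BM25 performs can be sketched in a few lines. This is a minimal illustration of the standard BM25 formula with common default parameters (k1=1.5, b=0.75), not code from the episode or from GitHub's search stack:

```python
import math

def bm25_score(query_terms, doc_terms, doc_freqs, n_docs, avg_doc_len,
               k1=1.5, b=0.75):
    """Score one document against a query with the BM25 ranking function."""
    score = 0.0
    doc_len = len(doc_terms)
    for term in query_terms:
        df = doc_freqs.get(term, 0)  # number of documents containing the term
        if df == 0:
            continue
        tf = doc_terms.count(term)   # term frequency in this document
        # Rare terms get a higher inverse document frequency weight
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
        # tf saturates via k1; b normalizes for document length
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + doc_len / avg_doc_len))
    return score
```

The two knobs are exactly what make BM25 a workhorse: k1 caps how much repeated terms help, and b discounts long documents, so scores stay comparable across a heterogeneous corpus.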

Nov 7, 2024 • 36min
#030 Vector Search at Scale, Why One Size Doesn't Fit All
Join Charles Xie, founder and CEO of Zilliz and pioneer behind the Milvus vector database, as he unpacks the complexities of scaling vector search systems. He discusses why vector search slows down at scale and introduces a multi-tier storage strategy that optimizes performance. Charles reveals innovative solutions like real-time search buffers and GPU acceleration to handle massive queries efficiently. He also dives into the future of search technology, including self-learning indices and hybrid search methods that promise to elevate data retrieval.

Oct 31, 2024 • 55min
#029 Search Systems at Scale, Avoiding Local Maxima and Other Engineering Lessons
Stuart Cam and Russ Cam, seasoned search infrastructure experts from Elastic and Canva, dive into the complexities of modern search systems. They discuss the integration of traditional text search with vector capabilities for better outcomes. The conversation emphasizes the importance of systematic relevancy testing and avoiding local maxima traps, where improving one query can harm others. They also explore the critical balance needed between performance, cost, and indexing strategies, including practical insights into architecting effective search pipelines.

Oct 25, 2024 • 49min
#028 Training Multi-Modal AI, Inside the Jina CLIP Embedding Model
Today we are talking to Michael Günther, a senior machine learning scientist at Jina, about his work on Jina CLIP.

Some key points:
- Uni-modal embeddings convert a single type of input (text, images, audio) into vectors
- Multimodal embeddings learn a joint embedding space that can handle multiple types of input, enabling cross-modal search (e.g., searching images with text)
- Multimodal models can potentially learn richer representations of the world, including concepts that are difficult or impossible to put into words

Types of text-image models:

1. CLIP-like models
- Separate vision and text transformer models
- Each tower maps inputs to a shared vector space
- Optimized for efficient retrieval

2. Vision-language models
- Process image patches as tokens
- Use a transformer architecture to combine image and text information
- Better suited for complex document matching

3. Hybrid models
- Combine separate encoders with additional transformer components
- Allow for more complex interactions between modalities
- Example: Google's Magic Lens model

Training insights from Jina CLIP:

Key learnings:
- Freezing the text encoder during training can significantly hinder performance
- Short image captions limit the model's ability to learn rich text representations
- Large batch sizes are crucial for training embedding models effectively

Three-stage training approach:
- Stage 1: Training on image captions and text pairs
- Stage 2: Adding longer image captions
- Stage 3: Including triplet data with hard negatives

Practical considerations:

Similarity scales:
- Different modalities can produce different similarity value scales
- Important to consider when combining multiple embedding types
- Can affect threshold-based filtering

Model selection:
- Evaluate models based on relevant benchmarks
- Consider the domain similarity between training data and intended use case
- Assess computational requirements and efficiency needs

Future directions:

Areas for development:
- More comprehensive benchmarks for multimodal tasks
- Better support for semi-structured data
- Improved handling of non-photographic images

Upcoming developments at Jina AI:
- Multilingual support for Jina ColBERT
- New version of text embedding models
- Focus on complex multimodal search applications

Practical applications (e-commerce):
- Product search and recommendations
- Combined text-image embeddings for better results
- Synthetic data generation for fine-tuning

Fine-tuning strategies:
- Using click data and query logs
- Generative pseudo-labeling for creating training data
- Domain-specific adaptations

Key takeaways for engineers:
- Be aware of similarity value scales and their implications
- Establish quantitative evaluation metrics before optimization
- Consider model limitations (e.g., image resolution, text length)
- Use performance optimizations like flash attention and activation checkpointing
- Universal embedding models might not be optimal for specific use cases

Michael Günther:
LinkedIn
X (Twitter)

Jina AI:
New Multilingual Embedding Model

Nicolay Gerold:
LinkedIn
X (Twitter)

00:00 Introduction to Uni-modal and Multimodal Embeddings
00:16 Exploring Multimodal Embeddings and Their Applications
01:06 Training Multimodal Embedding Models
02:21 Challenges and Solutions in Embedding Models
07:29 Advanced Techniques and Future Directions
29:19 Understanding Model Interference in Search Specialization
30:17 Fine-Tuning Jina CLIP for E-Commerce
32:18 Synthetic Data Generation and Pseudo-Labeling
33:36 Challenges and Learnings in Embedding Models
40:52 Future Directions and Takeaways
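The warning above about similarity value scales can be made concrete with a small sketch. This is a generic illustration (not Jina's API): before combining scores from different modalities, e.g. text-text and text-image similarities, rescale each list so one modality's naturally higher cosine values don't dominate the blend:

```python
def min_max_normalize(scores):
    """Rescale a list of similarity scores to [0, 1] so that scores from
    different modalities become comparable before mixing."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def combine(text_scores, image_scores, w_text=0.5, w_image=0.5):
    """Blend per-candidate scores from two modalities after normalization."""
    t = min_max_normalize(text_scores)
    i = min_max_normalize(image_scores)
    return [w_text * a + w_image * b for a, b in zip(t, i)]
```

The same reasoning applies to threshold-based filtering: a fixed cutoff tuned on one modality's raw scores will silently over- or under-filter the other.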

Oct 23, 2024 • 45min
#027 Building the database for AI, Multi-modal AI, Multi-modal Storage
Chang She, CEO of LanceDB and co-creator of the Pandas library, shares insights on building LanceDB for AI data management. He discusses how LanceDB tackles data bottlenecks and speeds up machine learning experiments with unstructured data. The conversation dives into the decision to use Rust for enhanced performance, achieving up to 1,000 times faster results than Parquet. Chang also explores multimodal AI's challenges, future applications of LanceDB in recommendation systems, and the vision for more composable data infrastructures.

Oct 10, 2024 • 47min
#026 Embedding Numbers, Categories, Locations, Images, Text, and The World
Mór Kapronczay, Head of ML at Superlinked, unpacks the nuances of embeddings beyond just text. He emphasizes that traditional text embeddings fall short, especially with complex data. Mór introduces multi-modal embeddings that integrate various data types, improving search relevance and user experiences. He also discusses challenges in embedding numerical data, suggesting innovative methods like logarithmic transformations. The conversation delves into balancing speed and accuracy in vector searches, highlighting the dynamic nature of real-time data prioritization.
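One idea from the episode, log transforms for skewed numerical attributes, can be sketched briefly. This is a hypothetical illustration, not Superlinked's API: prices or follower counts span orders of magnitude, so mapping them through a log before scaling to [0, 1] makes the step from 10 to 100 count as much as the step from 1,000 to 10,000:

```python
import math

def log_scale_embed(value, min_val, max_val):
    """Map a positive, heavily skewed number onto [0, 1] via a log
    transform, so relative (not absolute) differences drive similarity."""
    lv = math.log1p(value)
    lo, hi = math.log1p(min_val), math.log1p(max_val)
    return (lv - lo) / (hi - lo)
```

The resulting scalar can then be concatenated (with a weight) alongside text or image vectors in a multi-modal embedding.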

Oct 4, 2024 • 59min
#025 Data Models to Remove Ambiguity from AI and Search
Today we have Jessica Talisman with us, who is working as an Information Architect at Adobe. She is (in my opinion) the expert on taxonomies and ontologies.

That's what you will learn today in this episode of How AI Is Built: taxonomies, ontologies, knowledge graphs. Everyone is talking about them; no one knows how to build them.

But before we look into that, what are they good for in search?

Imagine a large corpus of academic papers. When a user searches for "machine learning in healthcare", the system can:
- Recognize "machine learning" as a subcategory of "artificial intelligence"
- Identify "healthcare" as a broad field with subfields like "diagnostics" and "patient care"

We can use these to expand the query or narrow it down. We can return results that include papers on "neural networks for medical imaging" or "predictive analytics in patient outcomes", even if these exact phrases weren't in the search query. We can also filter down and remove papers not tagged with AI that might just mention it in a side note.

So we are building the plumbing, the necessary infrastructure for tagging, categorization, query expansion and relaxation, and filtering.

So how can we build them?

1️⃣ Start with industry standards
- Leverage established taxonomies (e.g., Google, GS1, IAB)
- Audit them for relevance to your project
- Use them as a foundation, not a final solution

2️⃣ Customize and fill gaps
- Adapt industry taxonomies to your specific domain
- Create a "coverage model" for your unique needs
- Mine internal docs to identify domain-specific concepts

3️⃣ Follow ontology best practices
- Use clear, unique primary labels for each concept
- Include definitions to avoid ambiguity
- Provide context for each taxonomy node

Jessica Talisman:
LinkedIn

Nicolay Gerold:
LinkedIn
X (Twitter)

00:00 Introduction to Taxonomies and Knowledge Graphs
02:03 Building the Foundation: Metadata to Knowledge Graphs
04:35 Industry Taxonomies and Coverage Models
06:32 Clustering and Labeling Techniques
11:00 Evaluating and Maintaining Taxonomies
31:41 Exploring Taxonomy Granularity
32:18 Differentiating Taxonomies for Experts and Users
33:35 Mapping and Equivalency in Taxonomies
34:02 Best Practices and Examples of Taxonomies
40:50 Building Multilingual Taxonomies
44:33 Creative Applications of Taxonomies
48:54 Overrated and Underappreciated Technologies
53:00 The Importance of Human Involvement in AI
53:57 Connecting with the Speaker
55:05 Final Thoughts and Takeaways
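The query-expansion idea from the academic-papers example can be sketched in a few lines. This is a toy illustration with a hand-built taxonomy (hypothetical data, not Jessica's tooling): each concept maps to its narrower child concepts, and expanding a query means walking down the tree:

```python
# A toy taxonomy: each concept maps to its narrower (child) concepts.
TAXONOMY = {
    "artificial intelligence": ["machine learning"],
    "machine learning": ["neural networks", "predictive analytics"],
    "healthcare": ["diagnostics", "patient care", "medical imaging"],
}

def expand(term, taxonomy):
    """Return the term plus all narrower concepts, walking the tree."""
    terms = {term}
    for child in taxonomy.get(term, []):
        terms |= expand(child, taxonomy)
    return terms

def expand_query(query_terms, taxonomy):
    """Expand every query term so documents tagged only with narrower
    concepts (e.g. "neural networks") still match "machine learning"."""
    expanded = set()
    for term in query_terms:
        expanded |= expand(term, taxonomy)
    return expanded
```

The same structure, read in the other direction (child to parent), supports the filtering use case: drop papers whose tags never roll up to "artificial intelligence".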

Sep 27, 2024 • 55min
#024 How ColPali is Changing Information Retrieval
Jo Bergum, Chief Scientist at Vespa, dives into the game-changing technology of ColPali, which revolutionizes document processing by merging late interaction scoring and visual language models. He discusses how ColPali effectively handles messy data, allowing for seamless searches across complex formats like PDFs and HTML. By eliminating the need for extensive text extraction, ColPali enhances both efficiency and user experience. Its applications span multiple domains, promising significant advancements in information retrieval technology.

Sep 26, 2024 • 42min
#023 The Power of Rerankers in Modern Search
Aamir Shakir, founder of mixedbread.ai, is an expert in crafting advanced embedding and reranking models for search applications. He discusses the transformative power of rerankers in retrieval systems, emphasizing their role in enhancing search relevance and performance without complete overhauls. Aamir highlights the benefits of late interaction models like ColBERT for better interpretability and shares creative applications of rerankers beyond traditional use. He also navigates future challenges in multimodal data management and the exciting possibilities of compound models for unified search.
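The core pattern Aamir describes, adding a reranker without overhauling the system, is retrieve-then-rerank: a cheap first stage fetches candidates, and a more expensive model rescores only those. A minimal sketch (the `cross_encoder_score` callable is a stand-in for any reranking model; the token-overlap scorer below is purely illustrative):

```python
def rerank(query, candidates, cross_encoder_score, top_k=3):
    """Two-stage retrieval: a fast first stage produced `candidates`;
    a slower scoring model rescores only those, not the whole corpus."""
    scored = [(cross_encoder_score(query, doc), doc) for doc in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

def overlap_score(query, doc):
    """Toy stand-in for a cross-encoder: fraction of query tokens in doc."""
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / len(q)
```

Because the reranker only ever sees the candidate list, it can be dropped into an existing BM25 or vector pipeline with no change to indexing, which is exactly the "no complete overhaul" appeal.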

Sep 19, 2024 • 46min
#022 The Limits of Embeddings, Out-of-Domain Data, Long Context, Finetuning (and How We're Fixing It)
Join Nils Reimers, a prominent researcher in dense embeddings and the driving force behind foundational search models at Cohere. He dives into the intriguing limitations of text embeddings, such as their struggles with long documents and out-of-domain data. Reimers shares insights on the necessity of fine-tuning to adapt models effectively. He also discusses innovative approaches like re-ranking to enhance search relevance, and the bright future of embeddings as new research avenues are explored. Don't miss this deep dive into the cutting edge of AI!