

How AI Is Built
Nicolay Gerold
Real engineers. Real deployments. Zero hype. We interview the top engineers who actually put AI in production and share what they have figured out through years of experience. Hosted by Nicolay Gerold, CEO of Aisbach and CTO at Proxdeal and Multiply Content.
Episodes

Dec 19, 2024 • 48min
#036 How AI Can Start Teaching Itself - Synthetic Data Deep Dive
Adrien Morisot, an ML engineer at Cohere, discusses the transformative use of synthetic data in AI training. He explores the prevalent practice of using synthetic data in large language models, emphasizing model distillation techniques. Morisot shares his early challenges in generative models, breakthroughs driven by customer needs, and the importance of diverse output data. He also highlights the critical role of rigorous validation in preventing feedback loops and the potential for synthetic data to enhance specialized AI applications across various fields.
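The distillation-plus-validation loop described in this episode can be sketched in a few lines. This is a toy illustration, not Cohere's pipeline: `teacher_generate` is a hypothetical stand-in for a prompt to a large teacher model, and the groundedness check is deliberately simplistic.

```python
def teacher_generate(seed_doc, n=3):
    """Hypothetical stand-in for a teacher-model call.
    In practice this would prompt an LLM to write Q/A pairs about seed_doc."""
    return [(f"Question {i} about: {seed_doc}", f"Answer {i} grounded in: {seed_doc}")
            for i in range(n)]

def validate(pair, seen_questions):
    """Rigorous validation keeps feedback loops out of the training set:
    drop duplicate questions and pairs whose answer isn't tied to the source."""
    question, answer = pair
    if question in seen_questions:   # de-duplicate to keep outputs diverse
        return False
    return "grounded in" in answer   # toy groundedness check

def build_synthetic_set(docs):
    """Generate candidate pairs from each document, keeping only validated ones."""
    dataset, seen = [], set()
    for doc in docs:
        for pair in teacher_generate(doc):
            if validate(pair, seen):
                seen.add(pair[0])
                dataset.append(pair)
    return dataset

data = build_synthetic_set(["BM25 ranking", "vector search"])
```

The key design point, per the episode, is that the validation gate matters more than the generator: without it, the student model trains on its teacher's mistakes.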

Dec 13, 2024 • 45min
#035 A Search System That Learns As You Use It (Agentic RAG)
Stephen Batifol, an expert in Agentic RAG and advanced search technology, dives into the future of search systems. He discusses how modern retrieval-augmented generation (RAG) systems smartly match queries to the most suitable tools, utilizing a mix of methods. Batifol emphasizes the importance of metadata and modular design in creating effective search workflows. The conversation touches on adaptive AI capabilities for query refinement and the significance of user feedback in improving system accuracy. He also addresses the challenges of ambiguity in user queries, highlighting the need for innovative filtering techniques.
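The query-to-tool matching Batifol describes can be made concrete with a small router. This is a minimal sketch under assumptions: the `TOOLS` registry, its keyword metadata, and the tool names are all hypothetical; a production system would use an LLM or classifier rather than token overlap.

```python
# Hypothetical tool registry: each retrieval tool carries metadata
# describing the kinds of queries it serves (the modular design
# discussed in the episode).
TOOLS = {
    "sql_lookup":     {"keywords": {"revenue", "count", "average", "total"}},
    "vector_search":  {"keywords": {"similar", "like", "related"}},
    "keyword_search": {"keywords": set()},  # lexical fallback
}

def route(query: str) -> str:
    """Match a query to the most suitable tool by metadata overlap;
    fall back to plain keyword search when nothing matches."""
    tokens = set(query.lower().split())
    best, best_overlap = "keyword_search", 0
    for name, meta in TOOLS.items():
        overlap = len(tokens & meta["keywords"])
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best
```

User feedback would then adjust the metadata over time, which is what makes such a system "learn as you use it."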

Dec 5, 2024 • 47min
#034 Rethinking Search Inside Postgres, From Lexemes to BM25
Philippe Noël, Founder and CEO of ParadeDB, dives into the revolutionary shift in search technology with his open-source PostgreSQL extension. He discusses how ParadeDB eliminates the need for separate search clusters by enabling search directly within databases, simplifying architecture and enhancing cost-efficiency. The conversation explores BM25 indexing, maintaining data normalization, and the advantages of ACID compliance with search. Philippe also reveals successful use cases, including Alibaba Cloud’s implementation, and practical insights for optimizing large-scale search applications.

Nov 28, 2024 • 51min
#033 RAG's Biggest Problems & How to Fix It (ft. Synthetic Data)
Saahil Ognawala, Head of Product at Jina AI and expert in RAG systems, dives deep into the complexities of retrieval augmented generation. He reveals why RAG systems often falter in production and how strategic testing and synthetic data can enhance performance. The conversation covers the vital role of user intent, evaluation metrics, and the balancing act between real and synthetic data. Saahil also emphasizes the importance of continuous user feedback and the need for robust evaluation frameworks to fine-tune AI models effectively.
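One evaluation metric of the kind the episode argues for is recall@k over a labelled test set. A minimal sketch; the evaluation records below are made-up placeholders for a synthetic test set paired with real retriever output.

```python
def recall_at_k(retrieved, relevant, k=5):
    """Fraction of relevant documents that appear in the top-k results."""
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / len(relevant) if relevant else 0.0

# Hypothetical eval records: what the retriever returned vs. which
# documents a (synthetic or human) labeller marked relevant.
eval_set = [
    {"retrieved": ["d1", "d7", "d3"], "relevant": {"d1", "d3"}},
    {"retrieved": ["d9", "d2", "d4"], "relevant": {"d5", "d2"}},
]
mean_recall = sum(recall_at_k(e["retrieved"], e["relevant"], k=3)
                  for e in eval_set) / len(eval_set)
```

Establishing a number like `mean_recall` before touching the system is what makes "did the change help?" answerable.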

Nov 21, 2024 • 47min
#032 Improving Documentation Quality for RAG Systems
Max Buckley, a Google expert in LLM experimentation, dives into the hidden dangers of poor documentation in RAG systems. He explains how even one ambiguous sentence can skew an entire knowledge base. Max emphasizes the challenge of identifying such "documentation poisons" and discusses the importance of multiple feedback loops for quality control. He highlights unique linguistic ecosystems in large organizations and shares insights on enhancing documentation clarity and consistency to improve AI outputs.

Nov 15, 2024 • 54min
#031 BM25 As The Workhorse Of Search; Vectors Are Its Visionary Cousin
David Tippett, a search engineer at GitHub with expertise in BM25 and OpenSearch, delves into the efficiency of BM25 versus vector search for information retrieval. He explains how BM25 refines search by factoring in user expectations and adapting to diverse queries. The conversation highlights the challenges of vector search at scale, particularly with GitHub's massive dataset. David emphasizes that understanding user intent is crucial for optimizing search results, as it surpasses merely chasing cutting-edge technology.
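For reference, the textbook BM25 formula underlying the discussion can be written in a few lines. This is the standard Okapi BM25, not GitHub's production implementation; documents are pre-tokenised lists of terms.

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Okapi BM25 over tokenised documents: term frequency saturates via k1,
    and b penalises documents longer than the corpus average length."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)   # document frequency
        if df == 0:
            continue
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
        tf = doc.count(term)
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [["the", "cat", "sat"], ["the", "dog", "ran", "far", "away"]]
```

The length normalisation term (`b`) is one reason BM25 matches user expectations so well: a term match in a short document counts for more than the same match buried in a long one.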

Nov 7, 2024 • 36min
#030 Vector Search at Scale, Why One Size Doesn't Fit All
Join Charles Xie, founder and CEO of Zilliz and pioneer behind the Milvus vector database, as he unpacks the complexities of scaling vector search systems. He discusses why vector search slows down at scale and introduces a multi-tier storage strategy that optimizes performance. Charles reveals innovative solutions like real-time search buffers and GPU acceleration to handle massive queries efficiently. He also dives into the future of search technology, including self-learning indices and hybrid search methods that promise to elevate data retrieval.
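The multi-tier idea, with a small buffer for fresh writes that is searchable immediately alongside a larger cold tier, can be sketched as a toy class. This is an illustration of the concept only, not Milvus's architecture; the eviction policy and brute-force distance scan are deliberate simplifications.

```python
class TieredVectorStore:
    """Toy two-tier layout: a small in-memory buffer for recent writes
    (searchable immediately, like a real-time buffer) and a larger
    'cold' dict standing in for disk or object storage."""
    def __init__(self, hot_capacity=2):
        self.hot, self.cold = {}, {}
        self.hot_capacity = hot_capacity

    def insert(self, key, vec):
        self.hot[key] = vec
        if len(self.hot) > self.hot_capacity:      # spill oldest to cold tier
            oldest = next(iter(self.hot))
            self.cold[oldest] = self.hot.pop(oldest)

    def search(self, query, top_k=1):
        """Query both tiers and merge by squared L2 distance (brute force)."""
        def dist(v):
            return sum((a - b) ** 2 for a, b in zip(query, v))
        candidates = {**self.cold, **self.hot}
        return sorted(candidates, key=lambda k: dist(candidates[k]))[:top_k]
```

A real system replaces the brute-force scan with tier-appropriate indexes (and GPU kernels for the hot path), but the merge-across-tiers shape stays the same.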

Oct 31, 2024 • 55min
#029 Search Systems at Scale, Avoiding Local Maxima and Other Engineering Lessons
Stuart Cam and Russ Cam, seasoned search infrastructure experts from Elastic and Canva, dive into the complexities of modern search systems. They discuss the integration of traditional text search with vector capabilities for better outcomes. The conversation emphasizes the importance of systematic relevancy testing and avoiding local maxima traps, where improving one query can harm others. They also explore the critical balance needed between performance, cost, and indexing strategies, including practical insights into architecting effective search pipelines.
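One common way to integrate text and vector results, not necessarily the exact pipeline the guests use, is reciprocal rank fusion, which combines ranked lists without tuning score weights. A minimal sketch with made-up document ids:

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: each list contributes 1/(k + rank + 1)
    per document, so agreement across retrievers dominates raw scores."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranked   = ["d1", "d2", "d3"]   # lexical results
vector_ranked = ["d3", "d1", "d4"]   # vector results
fused = rrf([bm25_ranked, vector_ranked])
```

Because fusion ignores the incomparable raw scores of the two systems, it also sidesteps one class of local-maxima trap: tuning one scorer cannot silently drown out the other.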

Oct 25, 2024 • 49min
#028 Training Multi-Modal AI, Inside the Jina CLIP Embedding Model
Today we are talking to Michael Günther, a senior machine learning scientist at Jina, about his work on Jina CLIP.

Some key points:
- Uni-modal embeddings convert a single type of input (text, images, audio) into vectors
- Multimodal embeddings learn a joint embedding space that can handle multiple types of input, enabling cross-modal search (e.g., searching images with text)
- Multimodal models can potentially learn richer representations of the world, including concepts that are difficult or impossible to put into words

Types of text-image models:
- CLIP-like models: separate vision and text transformer models; each tower maps inputs to a shared vector space; optimized for efficient retrieval
- Vision-language models: process image patches as tokens and use a transformer architecture to combine image and text information; better suited for complex document matching
- Hybrid models: combine separate encoders with additional transformer components, allowing more complex interactions between modalities (example: Google's Magic Lens model)

Training insights from Jina CLIP:
- Freezing the text encoder during training can significantly hinder performance
- Short image captions limit the model's ability to learn rich text representations
- Large batch sizes are crucial for training embedding models effectively
- Three-stage training approach: (1) training on image captions and text pairs; (2) adding longer image captions; (3) including triplet data with hard negatives

Practical considerations:
- Similarity scales: different modalities can produce different similarity value scales, which matters when combining multiple embedding types and can affect threshold-based filtering
- Model selection: evaluate models on relevant benchmarks, consider the domain similarity between training data and the intended use case, and assess computational requirements and efficiency needs

Future directions:
- More comprehensive benchmarks for multimodal tasks
- Better support for semi-structured data
- Improved handling of non-photographic images
- Upcoming at Jina AI: multilingual support for Jina ColBERT, a new version of the text embedding models, and a focus on complex multimodal search applications

Practical applications:
- E-commerce: product search and recommendations, combined text-image embeddings for better results, synthetic data generation for fine-tuning
- Fine-tuning strategies: using click data and query logs, generative pseudo-labeling for creating training data, domain-specific adaptations

Key takeaways for engineers:
- Be aware of similarity value scales and their implications
- Establish quantitative evaluation metrics before optimizing
- Consider model limitations (e.g., image resolution, text length)
- Use performance optimizations like flash attention and activation checkpointing
- Universal embedding models might not be optimal for specific use cases

Michael Günther: LinkedIn, X (Twitter)
Jina AI: New Multilingual Embedding Model
Nicolay Gerold: LinkedIn, X (Twitter)

00:00 Introduction to Uni-modal and Multimodal Embeddings
00:16 Exploring Multimodal Embeddings and Their Applications
01:06 Training Multimodal Embedding Models
02:21 Challenges and Solutions in Embedding Models
07:29 Advanced Techniques and Future Directions
29:19 Understanding Model Interference in Search Specialization
30:17 Fine-Tuning Jina CLIP for E-Commerce
32:18 Synthetic Data Generation and Pseudo-Labeling
33:36 Challenges and Learnings in Embedding Models
40:52 Future Directions and Takeaways
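The similarity-scale point above can be illustrated with per-modality normalisation. This is one way to make scores comparable before a shared threshold, not necessarily Jina's approach, and the similarity values are made up for illustration.

```python
import statistics

def zscore(scores):
    """Rescale raw similarities so scores from different modalities become
    comparable before applying a single threshold."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores) or 1.0   # avoid divide-by-zero
    return [(s - mu) / sigma for s in scores]

# Text-to-text cosine similarities often sit on a higher scale than
# text-to-image ones, so a shared cutoff on raw values would silently
# drop every image hit.
text_sims  = [0.82, 0.91, 0.78]
image_sims = [0.31, 0.42, 0.28]
norm_text, norm_image = zscore(text_sims), zscore(image_sims)
```

After normalisation, "unusually good for its modality" means the same thing on both lists, which is what threshold-based filtering actually needs.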

Oct 23, 2024 • 45min
#027 Building the database for AI, Multi-modal AI, Multi-modal Storage
Chang She, CEO of Lens and co-creator of the Pandas library, shares insights on building LanceDB for AI data management. He discusses how LanceDB tackles data bottlenecks and speeds up machine learning experiments with unstructured data. The conversation dives into the decision to use Rust for enhanced performance, achieving up to 1,000 times faster results than Parquet. Chang also explores multimodal AI's challenges, future applications of LanceDB in recommendation systems, and the vision for more composable data infrastructures.