Weaviate Podcast

Weaviate
Oct 26, 2023 • 56min

Vibs Abhishek on Alltius AI - Weaviate Podcast #71!

Hey everyone! Thank you so much for watching the 71st Weaviate Podcast with Vibs Abhishek! Vibs is the CEO and Founder of Alltius AI, as well as a professor at the UC Irvine business school! In order to tame the somewhat chaotic emerging landscape of RAG and LLM applications, Alltius has settled on three core pillars: Knowledge, Skills, and Deployment Channels! Vibs further explained how he sees the distinction between Assistants and Agents, along with many more topics important to Enterprise deployment of RAG applications, such as reducing hallucinations and employing classifiers to route skills and knowledge sources! I learned so much from this conversation, I hope you enjoy the podcast!

Alltius KNO Plus Demo Video: https://www.loom.com/share/fcfe516b75ea4f069b1a8d6a3510fa4c?sid=5f43317f-c20b-4dd9-91d3-2cde993fd91f

Chapters
0:00 Welcome Vibs
0:22 Background
2:30 Alltius’ UI for Assistants
7:15 The Knowledge Pillar
12:05 SQL Router and Intent Management
14:10 Classifying a Pipeline / Skill
17:30 Flexibility of Zero-Shot versus Fine-Tuning
21:00 The Channels Pillar
23:00 Connecting the Warehouse / Lakehouse
24:50 Assistant versus Agent
28:30 MemGPT
31:25 Offline LLM Research
35:50 Multi-Agent Role-Playing Assistants
39:25 From Clicks to Conversations
44:10 CEO / Professor and Evolution of the Field
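As a rough illustration of the skill-routing idea discussed in this episode, here is a minimal sketch of classifying a query into a skill by embedding similarity. The skill names, descriptions, and model choice are illustrative assumptions, not Alltius' implementation.

```python
# Sketch: route a user query to a "skill" / knowledge source via embedding similarity.
# Illustration of the classifier-routing idea from the episode, not Alltius' code.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical skill descriptions; a real system would maintain richer definitions.
skills = {
    "sql_query": "Answer quantitative questions by querying structured tables.",
    "document_search": "Answer questions grounded in product documentation.",
    "ticket_triage": "Classify and route customer support tickets.",
}

def route(query: str) -> str:
    """Return the skill whose description is most similar to the query."""
    query_vec = model.encode(query, convert_to_tensor=True)
    skill_vecs = model.encode(list(skills.values()), convert_to_tensor=True)
    scores = util.cos_sim(query_vec, skill_vecs)[0]
    return list(skills.keys())[int(scores.argmax())]

print(route("How many customers churned last quarter?"))  # likely "sql_query"
```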
Oct 24, 2023 • 31min

MemGPT Explained!

Discover the innovative world of MemGPT, where operating system principles meet large language models. Explore how memory management is revolutionized to enhance conversational AI. Delve into the architecture that boosts dialogue consistency and engagement. Unpack the challenges of training long-context models and the role of efficient memory in search dynamics. Learn about the creation of synthetic textbooks as training data, showcasing the seamless interaction of language models and APIs.
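For intuition on the memory-paging idea behind MemGPT, here is a toy sketch of a bounded main context with an archival store that overflow messages get paged out to and searched later. It is a simplification for illustration only, not the MemGPT codebase or API.

```python
# Toy illustration of OS-style memory management for an LLM: a bounded "main
# context" plus an archival store. Simplified for intuition, not MemGPT itself.
from collections import deque

class ToyMemory:
    def __init__(self, max_context_messages: int = 8):
        self.main_context = deque(maxlen=max_context_messages)
        self.archival = []  # MemGPT backs this with external storage / a vector DB

    def add(self, message: str) -> None:
        if len(self.main_context) == self.main_context.maxlen:
            # Page the oldest message out of working memory before appending.
            self.archival.append(self.main_context[0])
        self.main_context.append(message)

    def search_archival(self, keyword: str) -> list:
        # MemGPT exposes search as a function the LLM can call; here it is
        # approximated with naive keyword matching.
        return [m for m in self.archival if keyword.lower() in m.lower()]

memory = ToyMemory(max_context_messages=3)
for turn in ["Hi, I'm Ada.", "I work on compilers.", "I like hiking.", "What's new?"]:
    memory.add(turn)
print(list(memory.main_context))      # the most recent 3 turns
print(memory.search_archival("ada"))  # ["Hi, I'm Ada."]
```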
Oct 18, 2023 • 55min

Kevin Cohen on Neum AI - Weaviate Podcast #70!

Hey everyone! Thank you so much for watching the 70th episode of the Weaviate Podcast with Neum AI CTO and Co-Founder Kevin Cohen! I first met Kevin when he was debugging an issue with his distributed node utilization and have since learned so much from him about how he sees the space of Data Ingestion, also commonly referred to as ETL for LLMs! There are so many interesting parts to this, from the general flow of data connectors, chunkers, and metadata extractors, to embedding inference, and the last mile of importing the vectors into a Vector DB such as Weaviate! I really loved how Kevin broke down the distributed messaging queues and system design for orchestrating data ingestion at massive scale, such as dealing with failures and optimizing the infrastructure-as-code setup. We also discussed things like new use cases with quadrillion-scale vector indexes and the role of knowledge graphs in all this! I really hope you enjoy the podcast, please check out this amazing article from Neum AI below!

https://medium.com/@neum_ai/retrieval-augmented-generation-at-scale-building-a-distributed-system-for-synchronizing-and-eaa29162521

Chapters
0:00 Check this out!
1:18 Welcome Kevin!
1:58 Founding Neum AI
6:55 Data Ingestion, End-to-End Overview
9:10 Chunking and Metadata Extraction
14:20 Embedding Cache
16:57 Distributed Messaging Queues
22:15 Embeddings Cache ELI5
25:30 Customizing Weaviate Kubernetes
38:10 Multi-Tenancy and Resource Allocation
39:20 Billion-Scale Vector Search
45:05 Knowledge Graphs
52:10 Y Combinator Experience
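As a small sketch of the ingestion flow described above (connector output, chunking, embedding inference, and the final import into Weaviate), here is what the happy path might look like with the v3 Weaviate Python client. The class name, chunk sizes, and embedding model are illustrative assumptions, and the distributed queueing, retries, and failure handling discussed in the episode are omitted.

```python
# Sketch of connector -> chunk -> embed -> import into Weaviate (v3 client).
# Class/property names and parameters are assumptions, not Neum AI's pipeline.
import weaviate
from sentence_transformers import SentenceTransformer

client = weaviate.Client("http://localhost:8080")  # assumes a local Weaviate instance
model = SentenceTransformer("all-MiniLM-L6-v2")

def chunk(text: str, size: int = 500, overlap: int = 50) -> list:
    """Naive fixed-size chunking with overlap."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

documents = [{"source": "notion/page-1", "text": "..."}]  # output of a data connector

client.batch.configure(batch_size=100)  # retries / failure handling omitted here
with client.batch as batch:
    for doc in documents:
        for piece in chunk(doc["text"]):
            batch.add_data_object(
                data_object={"text": piece, "source": doc["source"]},
                class_name="DocumentChunk",
                vector=model.encode(piece).tolist(),
            )
```

In a production pipeline like the one Kevin describes, each of these stages would sit behind a messaging queue so failed batches can be retried independently.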
Oct 4, 2023 • 1h 9min

Charles Pierse on Tactic Generate - Weaviate Podcast #69!

Hey everyone! Thank you so much for watching the 69th episode of the Weaviate Podcast featuring Charles Pierse from Tactic! Tactic has recently launched their new Tactic Generate project, an incredible UI for conducting research across multiple documents. I think there is a massive opportunity to pair these prompts and LLM workflows with User Interfaces and take more of a holistic User Experience perspective. Tactic Generate has done an incredible job of that, please take a look at the link below! I had such a fun conversation catching up with Charles (Charles was our 2nd Weaviate Podcast guest!), I hope you enjoy the podcast!

Tactic Generate: https://tactic.fyi/generative-insights/

Chapters
0:00 Tactic Generate
1:40 Welcome Charles!
2:38 Charles’ work at Tactic
4:40 LLMs comparing documents
9:10 LLM Chaining
17:30 Discovering LLM Chains
20:28 Moats in ML Products
28:48 Fine-Tuning vs. RAG
34:30 Fine-Tuning Search Models
39:45 Skepticism on RLHF
41:52 Gorilla, Integrations, and CRM
45:40 Query Routers
47:55 CRM and Tree-of-Thoughts
55:54 Graph Embeddings
1:02:20 Llama CPP / GGML
1:04:28 What are you looking forward to most in AI?
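As a minimal sketch of the multi-document LLM chaining discussed above, here is a map-reduce style flow: summarize each document against the question, then synthesize a single answer from the notes. The prompts are illustrative and `complete()` is a placeholder for whatever LLM call you use; this is not Tactic Generate's implementation.

```python
# Sketch of a map-reduce LLM chain across documents (illustrative prompts only).
def complete(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM API of choice here.")

def answer_across_documents(question: str, documents: list) -> str:
    # Map step: extract only what each document says about the question.
    notes = [
        complete(f"Summarize what this document says about: {question}\n\n{doc}")
        for doc in documents
    ]
    # Reduce step: synthesize one answer from the per-document notes.
    joined = "\n\n".join(f"Document {i + 1}: {note}" for i, note in enumerate(notes))
    return complete(f"Using these notes, answer: {question}\n\n{joined}")
```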
Sep 20, 2023 • 52min

Weights and Biases on Fine-Tuning LLMs - Weaviate Podcast #68!

Hey everyone! Thank you so much for watching the 68th episode of the Weaviate Podcast! We are super excited to welcome Morgan McGuire, Darek Kleczek, and Thomas Capelle! This was such a fun discussion, beginning with how they generally see the space of fine-tuning: why you would want to do it, the available tooling, the intersection with RAG, and more!

Check out W&B Prompts! https://wandb.ai/site/prompts
Check out the W&B Tiny Llama Report! https://wandb.ai/capecape/llamac/reports/Training-Tiny-Llamas-for-Fun-and-Science--Vmlldzo1MDM2MDg0

Chapters
0:00 Tiny Llamas!
1:53 Welcome!
2:22 LLM Fine-Tuning
5:25 Tooling for Fine-Tuning
7:55 Why Fine-Tune?
9:55 RAG vs. Fine-Tuning
12:25 Knowledge Distillation
14:40 Gorilla LLMs
18:25 Open-Source LLMs
22:48 Jonathan Frankle on W&B
23:45 Data Quality for LLM Training
25:55 W&B for Data Versioning
27:25 Curriculum Learning
29:28 GPU Rich and Data Quality
30:30 Vector DBs and Data Quality
32:50 Tuning Training with Weights & Biases
35:47 Training Reports
42:28 HF Collections and W&B Sweeps
44:50 Exciting Directions for AI
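For readers new to Weights & Biases, here is a minimal sketch of tracking a fine-tuning run: initialize a run with its config, log metrics each step, and finish. The project name, hyperparameters, and the stand-in training loop are illustrative assumptions.

```python
# Sketch: track a fine-tuning run with Weights & Biases.
import wandb

run = wandb.init(project="llm-fine-tuning", config={"lr": 2e-5, "epochs": 3})

for step in range(100):
    loss = 1.0 / (step + 1)  # stand-in for the real training loss
    wandb.log({"train/loss": loss, "step": step})

run.finish()
```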
Sep 13, 2023 • 1h 1min

Farshad Farahbakhshian and Etienne Dilocker on Weaviate and AWS - Weaviate Podcast #67!

Hey everyone! Thank you so much for watching the 67th Weaviate Podcast, announcing Weaviate on the AWS Marketplace! This was one of my favorite podcasts to date, with a deep dive into the details of running RAG applications in the cloud, our general understanding of LLM Fine-Tuning and RAG, as well as a really interesting discussion on VPCs and Hybrid SaaS! I hope you find the podcast useful, as always we are more than happy to answer any questions or discuss any ideas you have about the content presented in the podcast!

Learn more here: https://aws.amazon.com/marketplace/seller-profile?id=seller-jxgfug62rvpxs
As well as here: https://weaviate.io/developers/weaviate/installation/aws-marketplace

Chapters
0:00 Welcome Farshad
0:38 Weaviate’s Journey to AWS
2:05 Retrieval-Augmented Generation and Vector DBs
3:44 Running AI in the Cloud
9:40 Fine-Tuning LLMs vs. RAG
10:30 Skill vs. Knowledge (Lawyer Example)
14:28 Continual Learning of LLMs
16:50 Searching through multiple sources
19:58 Hybrid Search controlled by LLMs
22:10 Classes versus Filters
25:00 SQL and Vector Search
25:55 Favorite RAG Use Cases
31:55 Cloud Benchmarking
37:00 Price Performance
38:20 Tuning HNSW
42:15 Horizontal Scalability on AWS Marketplace
47:00 Privacy Requirements
54:45 Weaviate Hybrid SaaS
59:00 AWS Marketplace
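As a small illustration of the hybrid search discussed in the episode, here is a minimal query with the v3 Weaviate Python client, blending BM25 and vector scores via `alpha`. The class name and properties are illustrative assumptions.

```python
# Sketch: hybrid search in Weaviate (v3 Python client).
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumes a running Weaviate instance

results = (
    client.query
    .get("Document", ["title", "content"])
    .with_hybrid(query="fine-tuning vs RAG", alpha=0.5)  # 0 = pure BM25, 1 = pure vector
    .with_limit(5)
    .do()
)
print(results["data"]["Get"]["Document"])
```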
Sep 12, 2023 • 4min

Hybrid SaaS in Weaviate Explained!

Hey everyone! Here is a clip from our newest Weaviate Podcast with Farshad Farahbakhshian, Gen AI Specialist at AWS, and Etienne Dilocker, CTO and Co-Founder of Weaviate! This podcast announces Weaviate on the AWS Marketplace and is packed with info on running Weaviate in the cloud, such as this clip explaining how Hybrid SaaS works! I hope you find the clip useful, we are more than happy to answer any questions you have about the content in this clip!

Chapters
0:00 Quick Intro for Context
0:29 Etienne Dilocker on Hybrid SaaS
Sep 7, 2023 • 1h 5min

David Garnitz on VectorFlow - Weaviate Podcast #66!

Hey everyone! Thank you so much for watching the 66th Weaviate Podcast with David Garnitz, the creator of VectorFlow! VectorFlow (open-sourced on GitHub and linked below) is a new tool for ingesting data into Vector Databases such as Weaviate! There is quite an interesting end-to-end stack emerging at the ingestion layer: retrieving data from miscellaneous sources such as Slack, Salesforce, GitHub, Google Drive, and Notion; chunking the text (perhaps with the help of Visual Document Layout parsers like what Unstructured is imagining); optionally extracting metadata (say, the "age" of an NBA player as in the Evaporate-Code+ research); sending this data off to embedding model inference and unpacking that can of worms, from inference acceleration to load balancing; and finally, importing the vectors themselves into Weaviate! I learned so much from this conversation, I really hope you enjoy listening, and please check out VectorFlow below!

VectorFlow: https://github.com/dgarnitz/vectorflow

Chapters
0:00 VectorFlow on GitHub!
0:52 Welcome David Garnitz!
1:17 VectorFlow, Founding Vision
2:00 Billions of Vectors in Weaviate!
4:20 End-to-end data importing
6:30 Metadata Extraction in Vector Database Flows
10:15 Vectorizing 100s of millions to billions of chunks
15:58 Fine-Tuning Embedding Models
23:50 Zero-Shot Models in Metadata and Chunking
36:36 Vector + SQL
42:45 Self-Driving Databases
49:23 Generative Feedback Loop REST API
51:38 GPT Cache
55:55 Building VectorFlow
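As a rough sketch of the metadata-extraction step mentioned above (for example, pulling a player's "age" out of a passage and attaching it to each chunk), here is what a zero-shot extractor might look like. The prompt, field, and `complete()` placeholder are assumptions for illustration, loosely in the spirit of the Evaporate-style extraction discussed, not VectorFlow's implementation.

```python
# Sketch: zero-shot metadata extraction attached to chunks (illustrative only).
from typing import Optional

def complete(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM call here.")

def extract_age(passage: str) -> Optional[str]:
    answer = complete(
        "If this passage states a player's age, return just the number, "
        f"otherwise return 'none':\n\n{passage}"
    )
    return None if answer.strip().lower() == "none" else answer.strip()

def enrich_chunks(chunks: list) -> list:
    # Each enriched chunk can then be imported with its metadata as properties.
    return [{"text": text, "age": extract_age(text)} for text in chunks]
```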
Aug 31, 2023 • 1h 7min

Ofir Press on ALiBi and Self-Ask - Weaviate Podcast #65!

Hey everyone! Thank you so much for watching the Weaviate Podcast! I am SUPER excited to publish my conversation with Ofir Press! Ofir has done incredible work pioneering ALiBi attention and Self-Ask prompting, and I learned so much from speaking with him! As always, we are more than happy to answer any questions or discuss any ideas you have about the content in the podcast! + Huge congratulations on your Ph.D., Ofir!

ALiBi Attention: https://arxiv.org/abs/2108.12409
Self-Ask Prompting: https://arxiv.org/abs/2210.03350
Ofir Press on YouTube: https://www.youtube.com/@ofirpress

Chapters
0:00 Welcome Ofir Press
0:41 Large Context LLMs
12:38 Quadratic Complexity of Attention
19:12 ALiBi Attention, Visual Demo!
24:53 Recency Bias in LLMs
28:57 RAG in Long Context LLM Training
36:27 Self-Ask Prompting
46:07 Chain-of-Thought and Self-Ask
50:47 Gorilla LLMs
58:42 New Directions for New Training Data
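For readers curious what Self-Ask prompting looks like in practice, here is a minimal sketch: the model is shown a worked example in which it asks and answers follow-up questions before committing to a final answer. The one-shot example below is adapted from the paper; `complete()` is a placeholder for your LLM call, and the search-engine-augmented variant from the paper is omitted.

```python
# Sketch of Self-Ask prompting with a single worked example (adapted from the paper).
def complete(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM call here.")

SELF_ASK_EXAMPLE = """Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins.
"""

def self_ask(question: str) -> str:
    prompt = f"{SELF_ASK_EXAMPLE}\nQuestion: {question}\nAre follow up questions needed here:"
    return complete(prompt)
```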
Aug 30, 2023 • 49min

Shishir Patil and Tianjun Zhang on Gorilla - Weaviate Podcast #64!

Hey everyone! Thank you so much for watching the 64th Weaviate Podcast with Shishir Patil and Tianjun Zhang, co-authors of Gorilla: Large Language Models Connected with Massive APIs! I learned so much about Gorilla from Shishir and Tianjun, from the APIBench dataset to the continually evolving APIZoo, how the models are trained with Retrieval-Aware Training and Self-Instruct training data, how the authors think about fine-tuning LLaMA-7B models for tasks such as this, and much more! I hope you enjoy the podcast! As always, I am more than happy to answer any questions or discuss any ideas you have about the content in the podcast!

Please check out the paper here! https://arxiv.org/abs/2305.15334

Chapters
0:00 Welcome Shishir and Tianjun
0:25 Gorilla LLM Story
1:50 API Examples
7:40 The APIZoo
10:55 Gorilla vs. OpenAI Funcs
12:50 Retrieval-Aware Training
19:55 Mixing APIs, Gorilla for Integration
25:12 LLaMA-7B Fine-Tuning vs. GPT-4
29:08 Weaviate Gorilla
33:52 Gorilla and Baby Gorillas
35:40 Gorilla vs. HuggingFace
38:32 Structured Output Parsing
41:14 Reflexion Prompting for Debugging
44:00 Directions for the Future
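As a rough sketch of the retrieval-aware setup discussed for Gorilla, here is the basic idea: look up the relevant API documentation for a task and prepend it to the prompt so the model writes its call against that documentation. The toy documentation snippets, the lookup, and `complete()` are placeholder assumptions, not the Gorilla training or inference code.

```python
# Sketch of retrieval-aware prompting for API calls (illustrative, not Gorilla's code).
def complete(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM call here.")

# Toy "API Zoo"; Gorilla retrieves from real API documentation with trained retrievers.
API_DOCS = {
    "image-classification": "transformers.pipeline('image-classification', model=<checkpoint>)",
    "translation": "transformers.pipeline('translation_en_to_de', model=<checkpoint>)",
}

def generate_api_call(task_type: str, instruction: str) -> str:
    doc = API_DOCS.get(task_type, "")  # stand-in for a retrieval step
    prompt = (
        "Use this API documentation if it is helpful:\n"
        f"{doc}\n\n"
        f"Instruction: {instruction}\n"
        "Write the API call:"
    )
    return complete(prompt)
```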
