AI Engineering Podcast

Latest episodes

Nov 11, 2024 • 1h 16min

ML Infrastructure Without The Ops: Simplifying The ML Developer Experience With Runhouse

Donnie Greenberg, Co-founder and CEO of Runhouse and former product lead for PyTorch at Meta, shares insights on simplifying machine learning infrastructure. He discusses the challenges of traditional MLOps tools and presents Runhouse's serverless approach that reduces complexity in moving from development to production. Greenberg emphasizes the importance of flexible, collaborative environments and innovative fault tolerance in ML workflows. He also touches on the need for integration with existing DevOps practices to meet the evolving demands of AI and ML.
Nov 11, 2024 • 54min

Building AI Systems on Postgres: An Inside Look at pgai Vectorizer

Avthar Sewrathan, Head of AI at Timescale and expert in database infrastructure, shares insights into the innovative pgai Vectorizer toolchain. He reveals how this tool enables seamless management of AI workflows in Postgres, emphasizing the importance of keeping vector data updated. The discussion covers optimizing embedding strategies, the balance between user-friendliness and customization for developers, and the future of AI integration within databases. Avthar also touches on challenges in content moderation and semantic search, highlighting the need for continuous improvement and collaboration in the open-source community.
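The pattern Avthar describes, keeping embeddings continuously in sync with the source rows they were derived from, can be sketched without the pgai toolchain itself. The snippet below is a minimal, hand-rolled version of that synchronization loop using psycopg against a Postgres table with a pgvector column; the table names, columns, and the embed() helper are assumptions for illustration, and this is not the pgai Vectorizer API.

```python
# Minimal sketch: keep an embeddings table in sync with its source rows.
# Assumes Postgres with the pgvector extension, a documents(id, body,
# updated_at) table, a document_embeddings(document_id, embedding,
# embedded_at) table, and a hypothetical embed() helper. This illustrates
# the general idea behind a vectorizer, not the actual pgai API.
import psycopg

def embed(text: str) -> list[float]:
    raise NotImplementedError  # call your embedding model/provider here

def refresh_stale_embeddings(conn: psycopg.Connection) -> None:
    with conn.cursor() as cur:
        # Rows whose embedding is missing or older than the source row.
        cur.execute(
            """
            SELECT d.id, d.body
            FROM documents d
            LEFT JOIN document_embeddings e ON e.document_id = d.id
            WHERE e.document_id IS NULL OR e.embedded_at < d.updated_at
            """
        )
        for doc_id, body in cur.fetchall():
            vector = embed(body)
            literal = "[" + ",".join(str(x) for x in vector) + "]"  # pgvector text format
            cur.execute(
                """
                INSERT INTO document_embeddings (document_id, embedding, embedded_at)
                VALUES (%s, %s::vector, now())
                ON CONFLICT (document_id)
                DO UPDATE SET embedding = EXCLUDED.embedding, embedded_at = now()
                """,
                (doc_id, literal),
            )
    conn.commit()
```

The appeal discussed in the episode is pushing this bookkeeping into Postgres itself, so that vector data stays updated next to the source data rather than in a separate synchronization pipeline.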
Oct 28, 2024 • 58min

Running Generative AI Models In Production

Philip Kiely, an AI infrastructure expert at BaseTen, dives into the complexities of running generative AI models in production. He shares insights on the importance of selecting the right model based on product requirements and discusses key deployment strategies, including architecture and performance monitoring. Challenges like model quantization and the balance between open-source and proprietary models are explored. Philip also highlights future trends such as local inference, emphasizing the need for compliance in sectors like healthcare.
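One of the challenges Philip mentions, model quantization, is easy to illustrate in the abstract. The following is a toy, framework-agnostic sketch of symmetric int8 weight quantization in numpy; it is purely illustrative (real serving stacks use per-channel or group-wise schemes through dedicated libraries) and is not drawn from the episode or any BaseTen tooling.

```python
# Toy symmetric int8 weight quantization: rescale weights into the int8
# range and keep one scale factor per tensor to approximately reconstruct
# them. Purely illustrative; not tied to any specific serving framework.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    scale = float(np.max(np.abs(weights))) / 127.0 or 1.0  # avoid a zero scale
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(1024, 1024).astype(np.float32)
    q, scale = quantize_int8(w)
    error = float(np.mean(np.abs(w - dequantize(q, scale))))
    print(f"fp32: {w.nbytes / 1e6:.1f} MB, int8: {q.nbytes / 1e6:.1f} MB")
    print(f"mean absolute reconstruction error: {error:.5f}")
```

The printout makes the trade-off concrete: roughly a quarter of the memory for the weights in exchange for a small reconstruction error, which is the kind of cost/accuracy balance the episode discusses.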
Sep 10, 2024 • 59min

Enhancing AI Retrieval with Knowledge Graphs: A Deep Dive into GraphRAG

Philip Rathle, CTO of Neo4J and an expert in knowledge graphs, dives deep into how GraphRAG revolutionizes AI retrieval systems. He explains how this innovative method blends knowledge graphs with vector similarity for clearer, more accurate AI outputs. Rathle discusses the technical aspects of data modeling and the importance of structured data in addressing traditional retrieval challenges. The conversation also touches on real-world applications of GraphRAG across various industries, highlighting its potential to transform AI interactions.
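To make the retrieval pattern concrete, the sketch below shows one minimal GraphRAG-style query using the neo4j Python driver: a vector index finds semantically similar chunks, and a graph traversal then pulls in the entities connected to them. The index name, node labels, relationship types, credentials, and the embed() helper are all assumptions for illustration; adapt them to your own schema and a Neo4j version that supports vector indexes.

```python
# Minimal GraphRAG-style retrieval: vector search for similar chunks, then
# graph expansion so answers are grounded in connected entities rather than
# isolated text. Labels, relationship types, and the index name are
# illustrative assumptions.
from neo4j import GraphDatabase

def embed(text: str) -> list[float]:
    raise NotImplementedError  # call your embedding model here

QUERY = """
CALL db.index.vector.queryNodes('chunk_embeddings', $k, $embedding)
YIELD node AS chunk, score
MATCH (chunk)-[:MENTIONS]->(entity)
OPTIONAL MATCH (entity)-[rel]->(related)
RETURN chunk.text AS text, score,
       collect(DISTINCT entity.name) AS entities,
       collect(DISTINCT related.name) AS related_entities
ORDER BY score DESC
"""

def retrieve(question: str, k: int = 5) -> list[dict]:
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    try:
        with driver.session() as session:
            result = session.run(QUERY, k=k, embedding=embed(question))
            return [record.data() for record in result]
    finally:
        driver.close()
```

Compared to plain vector retrieval, the structured half of the query is what supplies linked, explainable context, the property Rathle points to when describing clearer and more accurate outputs.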
Sep 2, 2024 • 42min

Harnessing Generative AI for Effective Digital Advertising Campaigns

Summary
In this episode of the AI Engineering podcast Praveen Gujar, Director of Product at LinkedIn, talks about the applications of generative AI in digital advertising. He highlights the key areas of digital advertising, including audience targeting, content creation, and ROI measurement, and delves into how generative AI is revolutionizing these aspects. Praveen shares successful case studies of generative AI in digital advertising, including campaigns by Heinz, the Barbie movie, and Maggi, and discusses the potential pitfalls and risks associated with AI-powered tools. He concludes with insights into the future of generative AI in digital advertising, highlighting the importance of cultural transformation and the synergy between human creativity and AI.

Announcements
Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
Your host is Tobias Macey and today I'm interviewing Praveen Gujar about the applications of generative AI in digital advertising

Interview
Introduction
How did you get involved in machine learning?
Can you start by defining "digital advertising" for the scope of this conversation?
What are the key elements/characteristics/goals of digital advertising?
In the world before generative AI, what did a typical end-to-end advertising campaign workflow look like?
What are the stages of that workflow where generative AI is proving to be most useful?
How do the current limitations of generative AI (e.g. hallucinations, non-determinism) impact the ways in which it can be used?
What are the technological and organizational systems that need to be implemented to effectively apply generative AI in public-facing applications that are so closely tied to brand/company image?
What are the elements of user education/expectation setting that are necessary when working with marketing/advertising personnel to help avoid damage to the brands?
What are some examples of applications for generative AI in digital advertising that have gone well?
Any that have gone wrong?
What are the most interesting, innovative, or unexpected ways that you have seen generative AI used in digital advertising?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on digital advertising applications of generative AI?
When is generative AI the wrong choice?
What are your future predictions for the use of generative AI in digital advertising?

Contact Info
Website
LinkedIn

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
Generative AI
LLM == Large Language Model
DALL-E
RLHF == Reinforcement Learning from Human Feedback

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
Aug 15, 2024 • 50min

Building Scalable ML Systems on Kubernetes

Tammer Saleh, founder of SuperOrbital and an expert in scalable machine learning systems, discusses the advantages and challenges of using Kubernetes for ML workloads. He highlights the importance of model tracking and versioning within containerized environments. The conversation touches on the necessity of a unified API for collaboration across teams and the evolving imperfections of Kubernetes in stateful ML contexts. Tammer also shares insights on future innovations and best practices for teams navigating the complexities of machine learning on Kubernetes.
Jul 28, 2024 • 1h 3min

Expert Insights On Retrieval Augmented Generation And How To Build It

Matt Zeiler, founder and CEO of Clarifai, shares his expertise in retrieval augmented generation (RAG) and how the technique grew out of the limitations of large language models. He discusses how RAG addresses data freshness and hallucinations, utilizing vector databases for dynamic information access. The conversation dives into the architecture and operational challenges of integrating RAG into AI systems. Matt emphasizes the rise of user-friendly AI tools that enable non-experts to create functional prototypes. Tune in for essential insights on the future trends of AI applications and RAG's practical implementations.
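As a reference point for the architecture discussion, here is a bare-bones RAG loop in Python: embed the corpus, retrieve the closest chunks by cosine similarity, and hand them to the model as context. The embed() and generate() helpers are hypothetical stand-ins for an embedding model and an LLM, and a production system would use a vector database rather than an in-memory matrix.

```python
# Bare-bones RAG loop: embed the corpus, retrieve the most similar chunks
# for a question by cosine similarity, and pass them to the model as
# context. embed() and generate() are hypothetical stand-ins; real systems
# use a vector database instead of an in-memory matrix.
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    raise NotImplementedError  # call your embedding model here

def generate(prompt: str) -> str:
    raise NotImplementedError  # call your LLM here

def answer(question: str, chunks: list[str], top_k: int = 3) -> str:
    corpus = embed(chunks)                       # shape: (n_chunks, dim)
    query = embed([question])[0]                 # shape: (dim,)
    # Cosine similarity between the question and every chunk.
    sims = corpus @ query / (
        np.linalg.norm(corpus, axis=1) * np.linalg.norm(query) + 1e-9
    )
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[::-1][:top_k])
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

Swapping the in-memory matrix for a vector database, and re-embedding content as it changes, is what addresses the data-freshness concerns raised in the episode.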
Jul 28, 2024 • 53min

Barking Up The Wrong GPTree: Building Better AI With A Cognitive Approach

Summary
Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.

Announcements
Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human"

Interview
Introduction
How did you get involved in machine learning?
Can you start by unpacking the idea of "human-like" AI?
How does that contrast with the conception of "AGI"?
The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models?
What are the opportunities and limitations of causal modeling techniques for generalized AI models?
As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
What are the practical/architectural methods necessary to build more cognitive AI systems?
How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?
What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems?
When is cognitive AI the wrong choice?
What do you have planned for the future of cognitive AI applications at Aigo?

Contact Info
LinkedIn
Website

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
Aigo.ai
Artificial General Intelligence
Cognitive AI
Knowledge Graph
Causal Modeling
Bayesian Statistics
Thinking, Fast and Slow by Daniel Kahneman (affiliate link)
Agent-Based Modeling
Reinforcement Learning
DARPA 3 Waves of AI presentation
Why Don't We Have AGI Yet? whitepaper
Concepts Is All You Need whitepaper
Helen Keller
Stephen Hawking

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
Jul 28, 2024 • 48min

Build Your Second Brain One Piece At A Time

Summary
Generative AI promises to accelerate the productivity of human collaborators. Currently the primary way of working with these tools is through a conversational prompt, which is often cumbersome and unwieldy. In order to simplify the integration of AI capabilities into developer workflows Tsavo Knott helped create Pieces, a powerful collection of tools that complements the tools that developers already use. In this episode he explains the data collection and preparation process, the collection of model types and sizes that work together to power the experience, and how to incorporate it into your workflow to act as a second brain.

Announcements
Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
Your host is Tobias Macey and today I'm interviewing Tsavo Knott about Pieces, a personal AI toolkit to improve the efficiency of developers

Interview
Introduction
How did you get involved in machine learning?
Can you describe what Pieces is and the story behind it?
The past few months have seen an endless series of personalized AI tools launched. What are the features and focus of Pieces that might encourage someone to use it over the alternatives?
Model selections
Architecture of the Pieces application
Local vs. hybrid vs. online models
Model update/delivery process
Data preparation/serving for models in the context of the Pieces app
Application of AI to developer workflows
Types of workflows that people are building with Pieces
What are the most interesting, innovative, or unexpected ways that you have seen Pieces used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Pieces?
When is Pieces the wrong choice?
What do you have planned for the future of Pieces?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
Pieces
NPU == Neural Processing Unit
Tensor Chip
LoRA == Low Rank Adaptation
Generative Adversarial Networks
Mistral
Emacs
Vim
NeoVim
Dart
Flutter
TypeScript
Lua
Retrieval Augmented Generation
ONNX
LSTM == Long Short-Term Memory
Llama 2
GitHub Copilot
Tabnine
Podcast Episode

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
Mar 3, 2024 • 49min

Strategies For Building A Product Using LLMs At DataChat

Jignesh Patel discusses the business and technical challenges of building a product on top of large language models, and strategies for gaining visibility into the inner workings of LLMs while maintaining control and privacy of data. The episode explores the trade-offs in prompt engineering for building model context, potential applications of LLMs in information distillation, and the importance of balancing AI regulation with openness to innovation.
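One concrete example of the prompt-engineering trade-off mentioned above is fitting retrieved material into a fixed token budget. The sketch below greedily packs the highest-scoring snippets under a budget; the 4-characters-per-token heuristic and the scoring scheme are rough assumptions for illustration, not DataChat's approach.

```python
# Illustrative context-building trade-off: given candidate snippets ranked
# by relevance, greedily pack as many as fit under a token budget. The
# characters-per-token heuristic is a rough assumption; production systems
# use a real tokenizer and better ranking.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def build_context(snippets: list[tuple[float, str]], budget: int = 2000) -> str:
    chosen, used = [], 0
    for _, text in sorted(snippets, reverse=True):  # highest score first
        cost = estimate_tokens(text)
        if used + cost > budget:
            continue  # skip snippets that would blow the budget
        chosen.append(text)
        used += cost
    return "\n\n".join(chosen)

if __name__ == "__main__":
    candidates = [(0.92, "Monthly revenue by region ..."), (0.55, "Company history ...")]
    print(build_context(candidates, budget=50))
```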
