AI Engineering Podcast

Latest episodes

Sep 10, 2024 • 59min

Enhancing AI Retrieval with Knowledge Graphs: A Deep Dive into GraphRAG

Philip Rathle, CTO of Neo4j and an expert in knowledge graphs, dives deep into how GraphRAG revolutionizes AI retrieval systems. He explains how this innovative method blends knowledge graphs with vector similarity for clearer, more accurate AI outputs. Rathle discusses the technical aspects of data modeling and the importance of structured data in addressing traditional retrieval challenges. The conversation also touches on real-world applications of GraphRAG across various industries, highlighting its potential to transform AI interactions.
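The hybrid retrieval idea described in this episode can be illustrated with a small, self-contained sketch. This is not Neo4j's API or the actual GraphRAG implementation discussed in the conversation; the toy graph, embeddings, and helper functions are invented purely to show the two steps: vector similarity picks entry-point entities, then graph traversal collects structured context for the prompt.

```python
# Minimal GraphRAG-style retrieval sketch: vector search finds entry nodes,
# graph traversal gathers connected facts. All data below is made up.
from math import sqrt

# Toy knowledge graph: node -> list of (relation, neighbor)
GRAPH = {
    "Neo4j":    [("CREATED_BY", "Neo4j Inc."), ("SUPPORTS", "Cypher")],
    "Cypher":   [("QUERIES", "Property Graphs")],
    "GraphRAG": [("COMBINES", "Neo4j"), ("COMBINES", "Vector Search")],
}

# Toy embeddings (a real system would use an embedding model).
EMBEDDINGS = {
    "Neo4j":    [0.9, 0.1, 0.0],
    "Cypher":   [0.7, 0.3, 0.1],
    "GraphRAG": [0.8, 0.2, 0.4],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def vector_entry_points(query_embedding, k=2):
    """Step 1: vector similarity selects the most relevant entry nodes."""
    ranked = sorted(EMBEDDINGS, key=lambda n: cosine(query_embedding, EMBEDDINGS[n]), reverse=True)
    return ranked[:k]

def expand(node, depth=1):
    """Step 2: graph traversal pulls in structured facts around an entry node."""
    if depth == 0 or node not in GRAPH:
        return []
    facts = []
    for relation, neighbor in GRAPH[node]:
        facts.append(f"({node})-[{relation}]->({neighbor})")
        facts.extend(expand(neighbor, depth - 1))
    return facts

def retrieve(query_embedding):
    context = []
    for node in vector_entry_points(query_embedding):
        context.extend(expand(node, depth=2))
    return context  # this structured context would then be placed in the LLM prompt

if __name__ == "__main__":
    print(retrieve([0.85, 0.15, 0.2]))
```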
Sep 2, 2024 • 42min

Harnessing Generative AI for Effective Digital Advertising Campaigns

Summary
In this episode of the AI Engineering Podcast Praveen Gujar, Director of Product at LinkedIn, talks about the applications of generative AI in digital advertising. He highlights the key areas of digital advertising, including audience targeting, content creation, and ROI measurement, and delves into how generative AI is revolutionizing these aspects. Praveen shares successful case studies of generative AI in digital advertising, including campaigns by Heinz, the Barbie movie, and Maggi, and discusses the potential pitfalls and risks associated with AI-powered tools. He concludes with insights into the future of generative AI in digital advertising, highlighting the importance of cultural transformation and the synergy between human creativity and AI.

Announcements
- Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
- Your host is Tobias Macey and today I'm interviewing Praveen Gujar about the applications of generative AI in digital advertising

Interview
- Introduction
- How did you get involved in machine learning?
- Can you start by defining "digital advertising" for the scope of this conversation?
- What are the key elements/characteristics/goals of digital advertising?
- In the world before generative AI, what did a typical end-to-end advertising campaign workflow look like?
- What are the stages of that workflow where generative AI is proving to be most useful?
- How do the current limitations of generative AI (e.g. hallucinations, non-determinism) impact the ways in which it can be used?
- What are the technological and organizational systems that need to be implemented to effectively apply generative AI in public-facing applications that are so closely tied to brand/company image?
- What are the elements of user education/expectation setting that are necessary when working with marketing/advertising personnel to help avoid damage to the brands?
- What are some examples of applications for generative AI in digital advertising that have gone well?
- Any that have gone wrong?
- What are the most interesting, innovative, or unexpected ways that you have seen generative AI used in digital advertising?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on digital advertising applications of generative AI?
- When is generative AI the wrong choice?
- What are your future predictions for the use of generative AI in digital advertising?

Contact Info
- Website
- LinkedIn

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Generative AI
- LLM == Large Language Model
- Dall-E
- RLHF == Reinforcement Learning from Human Feedback

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
Aug 15, 2024 • 50min

Building Scalable ML Systems on Kubernetes

Tammer Saleh, founder of SuperOrbital and an expert in scalable machine learning systems, discusses the advantages and challenges of using Kubernetes for ML workloads. He highlights the importance of model tracking and versioning within containerized environments. The conversation touches on the necessity of a unified API for collaboration across teams and the evolving imperfections of Kubernetes in stateful ML contexts. Tammer also shares insights on future innovations and best practices for teams navigating the complexities of machine learning on Kubernetes.
Jul 28, 2024 • 1h 3min

Expert Insights On Retrieval Augmented Generation And How To Build It

Matt Zeiler, founder and CEO of Clarifai, shares his expertise in retrieval augmented generation (RAG) and its journey from large language models. He discusses how RAG addresses data freshness and hallucinations, utilizing vector databases for dynamic information access. The conversation dives into the architecture and operational challenges of integrating RAG into AI systems. Matt emphasizes the rise of user-friendly AI tools that enable non-experts to create functional prototypes. Tune in for essential insights on the future trends of AI applications and RAG's practical implementations.
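As a rough illustration of the RAG flow Matt describes, here is a minimal, self-contained sketch. The character-count "embedding", the toy documents, and the prompt assembly are all placeholders invented for this example; a production system would use a real embedding model, a vector database, and an actual LLM call.

```python
# Bare-bones RAG illustration: chunk documents, embed them, retrieve the most
# similar chunks for a question, and assemble a grounded prompt.
from math import sqrt

def embed(text: str) -> list[float]:
    # Stand-in embedding: a character-frequency vector. A real pipeline would
    # call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def build_index(documents, chunk_size=200):
    """Split documents into chunks and store (chunk, embedding) pairs."""
    index = []
    for doc in documents:
        for start in range(0, len(doc), chunk_size):
            chunk = doc[start:start + chunk_size]
            index.append((chunk, embed(chunk)))
    return index

def retrieve(index, question, k=2):
    """Return the k chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def answer(index, question):
    context = "\n".join(retrieve(index, question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return prompt  # in a real system this prompt is sent to the LLM

if __name__ == "__main__":
    docs = ["Retrieval augmented generation grounds model output in fresh data.",
            "Vector databases store embeddings for fast similarity search."]
    print(answer(build_index(docs), "How does RAG reduce hallucinations?"))
```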
Jul 28, 2024 • 53min

Barking Up The Wrong GPTree: Building Better AI With A Cognitive Approach

Summary
Artificial intelligence has dominated the headlines for several months due to the successes of large language models. This has prompted numerous debates about the possibility of, and timeline for, artificial general intelligence (AGI). Peter Voss has dedicated decades of his life to the pursuit of truly intelligent software through the approach of cognitive AI. In this episode he explains his approach to building AI in a more human-like fashion and the emphasis on learning rather than statistical prediction.

Announcements
- Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
- Your host is Tobias Macey and today I'm interviewing Peter Voss about what is involved in making your AI applications more "human"

Interview
- Introduction
- How did you get involved in machine learning?
- Can you start by unpacking the idea of "human-like" AI?
- How does that contrast with the conception of "AGI"?
- The applications and limitations of GPT/LLM models have been dominating the popular conversation around AI. How do you see that impacting the overall ecosystem of ML/AI applications and investment?
- The fundamental/foundational challenge of every AI use case is sourcing appropriate data. What are the strategies that you have found useful to acquire, evaluate, and prepare data at an appropriate scale to build high quality models?
- What are the opportunities and limitations of causal modeling techniques for generalized AI models?
- As AI systems gain more sophistication there is a challenge with establishing and maintaining trust. What are the risks involved in deploying more human-level AI systems and monitoring their reliability?
- What are the practical/architectural methods necessary to build more cognitive AI systems?
- How would you characterize the ecosystem of tools/frameworks available for creating, evolving, and maintaining these applications?
- What are the most interesting, innovative, or unexpected ways that you have seen cognitive AI applied?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on designing/developing cognitive AI systems?
- When is cognitive AI the wrong choice?
- What do you have planned for the future of cognitive AI applications at Aigo?

Contact Info
- LinkedIn
- Website

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Aigo.ai
- Artificial General Intelligence
- Cognitive AI
- Knowledge Graph
- Causal Modeling
- Bayesian Statistics
- Thinking Fast & Slow by Daniel Kahneman (affiliate link)
- Agent-Based Modeling
- Reinforcement Learning
- DARPA 3 Waves of AI presentation
- Why Don't We Have AGI Yet? whitepaper
- Concepts Is All You Need whitepaper
- Helen Keller
- Stephen Hawking

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
Jul 28, 2024 • 48min

Build Your Second Brain One Piece At A Time

Summary
Generative AI promises to accelerate the productivity of human collaborators. Currently the primary way of working with these tools is through a conversational prompt, which is often cumbersome and unwieldy. In order to simplify the integration of AI capabilities into developer workflows Tsavo Knott helped create Pieces, a powerful collection of tools that complements the tools that developers already use. In this episode he explains the data collection and preparation process, the collection of model types and sizes that work together to power the experience, and how to incorporate it into your workflow to act as a second brain.

Announcements
- Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
- Your host is Tobias Macey and today I'm interviewing Tsavo Knott about Pieces, a personal AI toolkit to improve the efficiency of developers

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what Pieces is and the story behind it?
- The past few months have seen an endless series of personalized AI tools launched. What are the features and focus of Pieces that might encourage someone to use it over the alternatives?
- Model selections
- Architecture of the Pieces application
- Local vs. hybrid vs. online models
- Model update/delivery process
- Data preparation/serving for models in the context of the Pieces app
- Application of AI to developer workflows
- Types of workflows that people are building with Pieces
- What are the most interesting, innovative, or unexpected ways that you have seen Pieces used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Pieces?
- When is Pieces the wrong choice?
- What do you have planned for the future of Pieces?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Pieces
- NPU == Neural Processing Unit
- Tensor Chip
- LoRA == Low Rank Adaptation
- Generative Adversarial Networks
- Mistral
- Emacs
- Vim
- NeoVim
- Dart
- Flutter
- Typescript
- Lua
- Retrieval Augmented Generation
- ONNX
- LSTM == Long Short-Term Memory
- Llama 2
- GitHub Copilot
- Tabnine (Podcast Episode)

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
Mar 3, 2024 • 49min

Strategies For Building A Product Using LLMs At DataChat

Jignesh Patel discusses the challenges of building a product using Large Language Models, the business and technical difficulties, and strategies for gaining visibility into the inner workings of LLMs while maintaining control and privacy of data. The episode explores the trade-offs in prompt engineering for AI model context building, potential applications of LLMs in information distillation, and the importance of balancing AI regulation and openness for innovation.
Feb 18, 2024 • 50min

Improve The Success Rate Of Your Machine Learning Projects With bizML

Summary
Machine learning is a powerful set of technologies, holding the potential to dramatically transform businesses across industries. Unfortunately, the implementation of ML projects often fails to achieve the intended goals. This failure is due to a lack of collaboration and investment across technological and organizational boundaries. To help improve the success rate of machine learning projects Eric Siegel developed the six-step bizML framework, outlining the process to ensure that everyone understands the whole process of ML deployment. In this episode he shares the principles and promise of that framework and his motivation for encapsulating it in his book "The AI Playbook".

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Eric Siegel about how the bizML approach can help improve the success rate of your ML projects

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what bizML is and the story behind it?
- What are the key aspects of this approach that are different from the "industry standard" lifecycle of an ML project?
- What are the elements of your personal experience as an ML consultant that helped you develop the tenets of bizML?
- Who are the personas that need to be involved in an ML project to increase the likelihood of success?
- Who do you find to be best suited to "own" or "lead" the process?
- What are the organizational patterns that might hinder the work of delivering on the goals of an ML initiative?
- What are some of the misconceptions about the work involved in/capabilities of an ML model that you commonly encounter?
- What is your main goal in writing your book "The AI Playbook"?
- What are the most interesting, innovative, or unexpected ways that you have seen the bizML process in action?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on ML projects and developing the bizML framework?
- When is bizML the wrong choice?
- What are the future developments in organizational and technical approaches to ML that will improve the success rate of AI projects?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- The AI Playbook: Mastering the Rare Art of Machine Learning Deployment by Eric Siegel
- Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die by Eric Siegel
- Columbia University
- Machine Learning Week Conference
- Generative AI World
- Machine Learning Leadership and Practice Course
- Rexer Analytics
- KDnuggets
- CRISP-DM
- Random Forest
- Gradient Descent

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
Feb 11, 2024 • 45min

Using Generative AI To Accelerate Feature Engineering At FeatureByte

Summary
One of the most time consuming aspects of building a machine learning model is feature engineering. Generative AI offers the possibility of accelerating the discovery and creation of feature pipelines. In this episode Colin Priest explains how FeatureByte is applying generative AI models to the challenge of building and maintaining machine learning pipelines (see the sketch after these notes for a rough illustration of the feature ideation step).

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Colin Priest about applying generative AI to the task of building and deploying AI pipelines

Interview
- Introduction
- How did you get involved in machine learning?
- Can you start by giving the 30,000 foot view of the steps involved in an AI pipeline?
  - Understand the problem
  - Feature ideation
  - Feature engineering
  - Experiment
  - Optimize
  - Productionize
- What are the stages of that process that are prone to repetition?
- What are the ways that teams typically try to automate those steps?
- What are the features of generative AI models that can be brought to bear on the design stage of an AI pipeline?
- What are the validation/verification processes that engineers need to apply to the generated suggestions?
- What are the opportunities/limitations for unit/integration style tests?
- What are the elements of developer experience that need to be addressed to make the gen AI capabilities an enhancement instead of a distraction?
- What are the interfaces through which the AI functionality can/should be exposed?
- What are the aspects of pipeline and model deployment that can benefit from generative AI functionality?
- What are the potential risk factors that need to be considered when evaluating the application of this functionality?
- What are the most interesting, innovative, or unexpected ways that you have seen generative AI used in the development and maintenance of AI pipelines?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on the application of generative AI to the ML workflow?
- When is generative AI the wrong choice?
- What do you have planned for the future of FeatureByte's AI copilot capabilities?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- FeatureByte
- Generative AI
- The Art of War
- OCR == Optical Character Recognition
- Genetic Algorithm
- Semantic Layer
- Prompt Engineering

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Support The Machine Learning Podcast
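As a purely hypothetical sketch of the feature ideation step referenced in the summary above (not FeatureByte's actual product or API), the snippet below shows the general shape of prompting a model with a table schema and a prediction target to get candidate features. The llm() stub and all names are invented for illustration, and any real workflow would validate every suggestion before generating pipeline code.

```python
# Hypothetical feature-ideation sketch: describe the schema and target in a
# prompt, ask a model for candidate features, and return them for human review.

def llm(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned suggestion so the
    # sketch runs end to end.
    return "- days_since_last_purchase\n- 90_day_purchase_count\n- average_order_value"

def propose_features(table_name, columns, target):
    schema = ", ".join(f"{name} ({dtype})" for name, dtype in columns)
    prompt = (
        f"Table `{table_name}` has columns: {schema}.\n"
        f"Suggest aggregate features useful for predicting `{target}`. "
        "List one feature per line."
    )
    suggestions = [line.strip("- ").strip() for line in llm(prompt).splitlines() if line.strip()]
    return suggestions  # each suggestion still needs human review and testing

if __name__ == "__main__":
    cols = [("customer_id", "str"), ("order_date", "date"), ("order_total", "float")]
    print(propose_features("orders", cols, "customer_churn"))
```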
Jan 28, 2024 • 43min

Learn And Automate Critical Business Workflows With 8Flow

Summary
Every business develops their own specific workflows to address their internal organizational needs. Not all of them are properly documented, or even visible. Workflow automation tools have tried to reduce the manual burden involved, but they are rigid and require substantial investment of time to discover and develop the routines. Boaz Hecht co-founded 8Flow to iteratively discover and automate pieces of workflows, bringing visibility and collaboration to the internal organizational processes that keep the business running.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Boaz Hecht about using AI to automate customer support at 8Flow

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what 8Flow is and the story behind it?
- How does 8Flow compare to RPA tools that companies are using today?
- What are the opportunities for augmenting or integrating with RPA frameworks?
- What are the key selling points for the solution that you are building? (Does AI sell? Or is it about the realized savings?)
- What are the sources of signal that you are relying on to build model features?
- Given the heterogeneity in tools and processes across customers, what are the common focal points that let you address the widest possible range of functionality?
- Can you describe how 8Flow is implemented?
- How have the design and goals evolved since you first started working on it?
- What are the model categories that are most relevant for process automation in your product?
- How have you approached the design and implementation of your MLOps workflow? (model training, deployment, monitoring, versioning, etc.)
- What are the open questions around product focus and system design that you are still grappling with?
- Given the relative recency of ML/AI as a profession and the massive growth in attention and activity, how are you addressing the challenge of obtaining and maximizing human talent?
- What are the most interesting, innovative, or unexpected ways that you have seen 8Flow used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on 8Flow?
- When is 8Flow the wrong choice?
- What do you have planned for the future of 8Flow?

Contact Info
- LinkedIn
- Personal Website

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- 8Flow
- Robotic Process Automation

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0

Support The Machine Learning Podcast
