The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Dec 18, 2023 • 30min

Edutainment for AI and AWS PartyRock with Mike Miller - #661

Mike Miller, Director of Product at AWS, leads the charge in developing engaging AI edutainment tools. He dives into AWS PartyRock, a playful, no-code generative AI app builder that makes app creation fun and accessible. The conversation highlights earlier innovations like DeepRacer, an autonomous RC car for learning reinforcement learning, and DeepLens, a deep learning-enabled camera for computer vision projects. Miller emphasizes the importance of blending education with entertainment, inviting listeners to unleash their creativity through intuitive AI-powered applications.
18 snips
Dec 14, 2023 • 38min

Data, Systems and ML for Visual Understanding with Cody Coleman - #660

Cody Coleman, co-founder and CEO of Coactive AI, discusses the innovative applications of data-centric AI in building a multimodal asset platform. He delves into active learning and core set selection, explaining how these techniques boost efficiency in machine learning. The conversation also highlights Coactive's use of multimodal embeddings for visual search and the infrastructure optimizations that support scalability. Cody shares insights and advice for entrepreneurs in the generative AI space, making complex topics accessible to all.
Dec 11, 2023 • 36min

Patterns and Middleware for LLM Applications with Kyle Roche - #659

Join Kyle Roche, the founder and CEO of Griptape and former GM at AWS, as he dives deep into middleware for generative AI. He unveils innovative patterns for LLM applications, including off-prompt data retrieval and flexible pipeline management. Roche discusses how Griptape enhances data connectivity while addressing privacy and management concerns. Tune in to learn about driving efficiencies in various industries and the impact of responsible AI solutions!
Dec 4, 2023 • 42min

AI Access and Inclusivity as a Technical Challenge with Prem Natarajan - #658

In this discussion, Prem Natarajan, Chief Scientist and Head of Enterprise AI at Capital One, tackles AI access and inclusivity as critical challenges in banking. He highlights the importance of diversity in data sets to combat biases and improve fraud detection. Prem shares insights on the use of foundation models and federated learning, emphasizing data quality and privacy preservation. He also stresses the need for collaboration between academia and industry to enhance AI impact, ultimately advocating for mission-inspired research that benefits customers and the broader community.
31 snips
Nov 28, 2023 • 43min

Building LLM-Based Applications with Azure OpenAI with Jay Emery - #657

In a captivating discussion, Jay Emery, Director of Technical Sales & Architecture at Microsoft Azure, shares insights on crafting applications using large language models. He tackles challenges organizations face, such as data privacy and performance optimization. Jay reveals innovative techniques like prompt tuning and retrieval-augmented generation to enhance LLM outputs. He also discusses unique business use cases and effective methods to manage costs while improving functionality. This conversation is packed with practical strategies for anyone interested in the AI landscape.
15 snips
Nov 20, 2023 • 41min

Visual Generative AI Ecosystem Challenges with Richard Zhang - #656

In this discussion, Richard Zhang, a Senior Research Scientist at Adobe Research specializing in visual generative AI, tackles significant challenges in the AI ecosystem. He dives into the creation of effective perceptual metrics, explaining how LPIPS helps align machine evaluations of image similarity with human judgments. Zhang also addresses the pressing need for detection tools to combat fake visuals and the complexities of data attribution in generative art. His insights underscore the delicate balance between creator autonomy and consumer trust in this rapidly evolving field.
Nov 13, 2023 • 39min

Deploying Edge and Embedded AI Systems with Heather Gorr - #655

Heather Gorr, Principal MATLAB Product Marketing Manager at MathWorks, dives into the fascinating world of deploying AI models for embedded systems. She emphasizes crucial factors like data preparation, device constraints, and latency requirements for successful implementation. Heather shares insights on MLOps techniques to enhance deployment speed, while tailoring AI solutions for industries such as automotive and oil & gas. Anecdotes of real-world AI applications illustrate the importance of rigorous validation processes and interdisciplinary collaboration in ensuring safety and reliability.
40 snips
Nov 6, 2023 • 48min

AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio - #654

Yoshua Bengio, a leading AI safety researcher from Université de Montréal, joins the conversation to discuss the dire risks posed by advanced AI technologies. He highlights the potential for AI to manipulate, spread disinformation, and concentrate power, raising alarm over its impact on democracy. The discussion dives into the complexities of AI safety, agency, and the troubling distinction between mimicking emotion and true sentience. Bengio advocates for robust safety measures, regulatory frameworks, and an urgent need to align AI developments with human values.
20 snips
Oct 30, 2023 • 44min

Delivering AI Systems in Highly Regulated Environments with Miriam Friedel - #653

Miriam Friedel, Senior Director of ML Engineering at Capital One, shares her insights on deploying AI tools in regulated environments. She discusses creating a culture of collaboration and the importance of standardized tooling. Miriam highlights strategies like using open-source tools for compliance and speed, and dives into the challenges of maintaining consistency across large organizations. Her thoughts on building a 'unicorn' team and making smart build vs. buy decisions for MLOps offer a fresh perspective on the future of enterprise AI.
78 snips
Oct 23, 2023 • 40min

Mental Models for Advanced ChatGPT Prompting with Riley Goodside - #652

Riley Goodside, a staff prompt engineer at Scale AI, shares insights on mastering prompt engineering for large language models. He dives into the limitations and capabilities of LLMs, emphasizing the intricacies of autoregressive inference. Goodside discusses the effectiveness of zero-shot vs. k-shot prompting and the crucial role of Reinforcement Learning from Human Feedback. He highlights how effective prompting acts as a scaffolding structure to achieve desired AI responses, blending technical skill with strategic thinking.
