
Practical AI
Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).
The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
Latest episodes

Jul 6, 2023 • 42min
Cambrian explosion of generative models
The hosts dive into the explosive growth of generative models like Stable Diffusion XL and OpenChat. They discuss the transition from proprietary systems to open source, highlighting potential shifts in business strategies. Concerns arise about cybersecurity and cultural impacts amid this tech wave. They also explore accessibility challenges and the need for ethical frameworks in AI development. Overall, it’s a captivating blend of innovation, humor, and caution as they navigate the future of AI.

Jun 28, 2023 • 45min
Automated cartography using AI
Gabriel Ortiz, Principal Geospatial Information Officer in Cantabria, Spain, discusses revolutionary applications of AI in geospatial analysis. He shares insights on automating cartography, how deep learning enhances aerial surveys, and the innovative use of technologies like LiDAR for environmental monitoring. Gabriel highlights AI's role in identifying invasive species and crowd management at beaches. His vision for the future blends human expertise with AI to overcome traditional cartography challenges, envisioning an exciting horizon for GIS professionals.

Jun 21, 2023 • 47min
From ML to AI to Generative AI
The hosts explore how generative AI is reshaping the landscape of machine learning. They discuss the evolution of AI terminology and the key differences between supervised learning and generative models. Insights are shared on the time-saving capabilities of generative AI, supported by a real-life example of creating a presentation. The conversation also navigates the ethical risks of generative technologies, reflecting on humanity's changing identity and the delicate balance between advancement and intention.

Jun 14, 2023 • 60min
AI trends: a Latent Space crossover
Shawn Wang, a writer and editor at Latent Space, joins Alessio Fanelli, CTO at Decibel Partners, for an insightful discussion. They delve into open access LLMs and the evolution of model control techniques, alongside the emerging field of LLMOps. The conversation highlights the importance of prompt engineering and shares reflections from past episodes, including grassroots AI efforts in Africa. Expect a blend of practical insights and challenges that shape the future of AI technology in a rapidly changing landscape.

Jun 6, 2023 • 42min
Accidentally building SOTA AI
Kate Bradley Chernis, CEO of Lately.ai, shares her unique journey from radio DJ to innovator in generative AI. She discusses how their platform captures individual voices for tailored social media content. The conversation highlights the balance between authenticity and AI in storytelling and marketing. Kate emphasizes the power of kindness in engagement strategies, detailing how small gestures can deepen connections. Also, she explores the future of AI in voice learning and sentiment analysis, showcasing the platform's potential for evolving marketing landscapes.

May 31, 2023 • 50min
Controlled and compliant AI applications
The discussion delves into the challenges of integrating large language models with a focus on compliance and legal concerns. It highlights the dangers, including hallucinations and security vulnerabilities, that make corporate professionals wary. The conversation features Prediction Guard, a solution that ensures consistent and safe AI outputs. They also explore the future of open access AI models and the balance between utility and regulatory compliance in an evolving tech landscape. Expect insights on making AI reliable while navigating its complexities!

May 23, 2023 • 45min
Data augmentation with LlamaIndex
Jerry Liu, co-founder of LlamaIndex, dives into the captivating world of integrating private data with large language models. He reveals how LlamaIndex streamlines data ingestion and enhances query efficiency through innovative indexing techniques. The conversation covers prompt engineering intricacies and the transition from traditional querying to natural language processing. Plus, Jerry discusses evaluating model outputs and the exciting future of automated query interfaces, showcasing how LlamaIndex is set to revolutionize data interactions in AI applications.

May 16, 2023 • 27min
Creating instruction tuned models
Erin Mikail Staples, a developer community advocate at Label Studio, shares her insights on creating instruction-tuned large language models. She emphasizes the importance of harnessing human feedback to refine AI outputs and demonstrates how context shapes model behavior. The discussion also highlights the critical role of open data and ethical labeling in enhancing machine learning accuracy. Erin's passion for improving AI accessibility and transparency makes this conversation a must-listen for anyone interested in custom generative models.

May 11, 2023 • 39min
The last mile of AI app development
Travis Fischer, Founder and CEO of a stealth AI startup and known for projects like ChatGPTBot, delves into the nuances of AI app development. He explores the hierarchy of complexity from basic prompting to advanced fine-tuning. The conversation highlights critical trade-offs in using language models regarding quality and cost, while emphasizing the rise of TypeScript in AI development. Travis shares insights on the importance of structured approaches for tackling complex problems and the evolving landscape of AI tools that foster innovation.

May 2, 2023 • 38min
Large models on CPUs
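For context on the sparsity idea raised in this episode, here is a minimal, illustrative sketch of unstructured magnitude pruning, the simplest technique for zeroing out redundant parameters. This is not code from the episode or from Neural Magic's tooling; the function name and the plain-NumPy approach are assumptions for illustration only.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries of `weights` until
    roughly `sparsity` fraction of them are zero (unstructured
    magnitude pruning). Returns a new array; the input is unchanged."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Example: prune a random 64x64 weight matrix to ~90% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_sparse = magnitude_prune(w, sparsity=0.9)
print(float(np.mean(w_sparse == 0)))
```

Sparse matrices like `w_sparse` can then be stored and multiplied far more cheaply, which is the basis for the CPU-inference speedups discussed in the episode.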
Mark Kurtz, Director of Machine Learning at Neural Magic, dives into the world of model optimization and CPU inference. He reveals that up to 90% of model parameters may be redundant, slowing down processes and inflating costs. The discussion covers the merits of leveraging CPUs over GPUs for large models and the revolutionary impact of sparsity, significantly reducing model size without losing performance. Mark also touches on the exciting future of generative AI and the promise of making advanced AI more accessible through collaborative efforts.