MLOps.community

Trust at Scale: Security and Governance for Open Source Models // Hudson Buzby // #338

Sep 9, 2025
Hudson Buzby, a Solutions Architect at JFrog, specializes in MLOps and large-scale AI deployments. In this discussion, he digs into the challenges of bringing open-source large language models into enterprise settings, with an emphasis on governance and compliance. The conversation covers the contrasting security needs of startups and established companies, the role of structure in MLOps amid rapid AI advances, and why resource management and security matter as the AI landscape keeps shifting.
AI Snips
INSIGHT

Generative AI Needs Production Rigor

  • Generative AI projects are being treated as production systems, but without traditional production practices.
  • Organizations must raise their scrutiny to the level of other engineering services or face major failures.
INSIGHT

Open Source LLMs Are Inevitable But Complex

  • Enterprises adopted managed LLMs quickly but are now looking to open source for cost, control, and privacy.
  • They want an open-source presence but must manage licensing, governance, and security trade-offs.
ANECDOTE

JFrog Finds Exploits On Hugging Face

  • JFrog scans Hugging Face models multiple times a day and sees vulnerabilities spike as models proliferate.
  • Vulnerabilities are growing faster than the models themselves, and attackers deliver exploits through typosquatted (misspelled) model names and malicious fine-tunes; see the sketch below.
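To make the typosquatting and malicious-fine-tune risk concrete, here is a minimal Python sketch of a pre-download guard, assuming an internal allowlist of vetted repo IDs pinned to reviewed commit SHAs. The allowlist entries, the placeholder SHA, and the download_vetted() helper are hypothetical illustrations, not JFrog's actual scanning pipeline.

```python
# Minimal sketch, assuming an internal allowlist of vetted Hugging Face repos.
# Not JFrog's scanner; the repo IDs, the placeholder SHA, and download_vetted()
# are hypothetical, illustrating the two attack paths mentioned above:
# typosquatted repo names and malicious fine-tunes pushed as later revisions.
from huggingface_hub import model_info, snapshot_download

# Exact repo IDs mapped to a commit SHA that security review signed off on.
VETTED_MODELS = {
    "meta-llama/Llama-3.1-8B-Instruct": "<pinned-commit-sha>",  # placeholder
}

def download_vetted(repo_id: str) -> str:
    """Download a model only if its exact repo ID is vetted, pinned to a reviewed revision."""
    if repo_id not in VETTED_MODELS:
        # A typosquatted name ("meta-llarna/...") fails this exact-match check.
        raise PermissionError(f"{repo_id} is not on the vetted-model allowlist")
    info = model_info(repo_id)  # confirm the repo resolves and who publishes it
    print(f"Pulling {repo_id} (author: {info.author}) at its pinned revision")
    # Pinning the revision means a malicious fine-tune or file swap pushed to
    # `main` after review is never picked up silently.
    return snapshot_download(repo_id=repo_id, revision=VETTED_MODELS[repo_id])
```

In practice an artifact registry or scanning service would enforce checks like this centrally rather than in application code, but the exact-match and pinned-revision ideas are the core of the mitigation.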