Practical AI

Model inspection and interpretation at Seldon

Jun 17, 2019
Janis Klaise, a data scientist at Seldon, discusses the challenges of model interpretation in AI and the Alibi open-source project, a library designed to explain complex machine learning models. Key topics include integrating Alibi into Seldon's platform for explainability, the use of techniques such as LIME, and the difficulties of deploying machine learning models in real-world scenarios. Janis emphasizes the importance of collaboration between engineering and data science in making AI more accessible.
ANECDOTE

Seldon's namesake

  • Chris Benson asked if Seldon was named after the psychohistorian Hari Seldon from Isaac Asimov's Foundation series.
  • Janis Klaise confirmed the connection, admitting he hadn't read the series before joining the company but did so afterward.
INSIGHT

Seldon's Mission

  • Seldon's focus on prediction in machine learning deployment aligns with Hari Seldon's predictive abilities in the Foundation series.
  • Seldon helps businesses operationalize machine learning models after data scientists finish development.
ADVICE

Simplifying Deployments

  • Use Seldon to simplify model deployment by creating a Python class with a predict function, as sketched after this list.
  • Seldon then handles REST API creation and deployment, streamlining the process.
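A minimal sketch of what such a class can look like, assuming the seldon-core Python wrapper conventions; the class name MyModel and the artifact file model.joblib are illustrative, not from the episode:

# model.py -- a model class the Seldon Python wrapper can serve
import joblib
import numpy as np

class MyModel:
    def __init__(self):
        # Load a pre-trained model artifact once when the service starts
        # (model.joblib is a placeholder path for illustration).
        self._model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # Seldon calls predict with an array of feature rows and
        # returns the model's output for each row.
        return self._model.predict(np.asarray(X))

The class can then be exposed as a REST API with the seldon-core-microservice command (for example, seldon-core-microservice MyModel) and deployed to Kubernetes via a SeldonDeployment resource; the exact invocation depends on the seldon-core version in use.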