The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Benchmarking ML with MLCommons w/ Peter Mattson - #434

Dec 7, 2020
Peter Mattson, President of MLCommons and a Staff Engineer at Google, discusses the vital role of MLPerf in standardizing machine learning benchmarks. He emphasizes the need for ethical guidelines in AI, particularly through initiatives like the People's Speech dataset, which addresses fairness and representation in machine learning. Mattson also shares insights on streamlining model sharing with MLCube and the importance of robust performance metrics as the ML landscape evolves, aiming to democratize access and innovation in the field.
INSIGHT

MLPerf's Purpose

  • MLPerf benchmarks the speed of ML systems on tasks such as model training and inference.
  • This enables apples-to-apples comparison of hardware and software solutions, driving progress.
ANECDOTE

MLPerf's Origins

  • Peter Mattson, a compiler expert, co-founded MLPerf with David Patterson.
  • They collaborated with other researchers, creating a standardized ML benchmark.
INSIGHT

MLPerf's Scope

  • MLPerf includes vision, language, recommendation, and reinforcement learning benchmarks.
  • It focuses on real-world tasks to avoid optimizing for unrealistic scenarios.