

Benchmarking ML with MLCommons w/ Peter Mattson - #434
Dec 7, 2020
Peter Mattson, President of MLCommons and a Staff Engineer at Google, discusses the vital role of MLPerf in standardizing machine learning benchmarks. He emphasizes the need for ethical guidelines in AI, particularly through initiatives like the People's Speech dataset, which addresses fairness and representation in machine learning. Mattson also shares insights on streamlining model sharing with MLCube and the importance of robust performance metrics as the ML landscape evolves, aiming to democratize access and innovation in the field.
MLPerf's Purpose
- MLPerf benchmarks the speed of ML systems on tasks such as model training time and inference throughput.
- This enables fair comparison of hardware and software solutions, driving progress.
MLPerf's Origins
- Peter Mattson, a compiler expert, co-founded MLPerf with David Patterson.
- They collaborated with other researchers to create a standardized ML benchmark.
MLPerf's Scope
- MLPerf includes vision, language, recommendation, and reinforcement learning benchmarks.
- It focuses on real-world tasks to avoid optimizing for unrealistic scenarios.