
Gradient Dissent: Conversations on AI
Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting edge of AI and learn the intricacies of bringing models into production.
Latest episodes

Mar 25, 2021 • 39min
Dominik Moritz — Building Intuitive Data Visualization Tools
Dominik shares the story and principles behind Vega and Vega-Lite, and explains how visualization and machine learning help each other.
---
Dominik is a co-author of Vega-Lite, a high-level visualization grammar for building interactive plots. He's also a professor at the Human-Computer Interaction Institute at Carnegie Mellon University and an ML researcher at Apple.
Connect with Dominik
Twitter: https://twitter.com/domoritz
GitHub: https://github.com/domoritz
Personal website: https://www.domoritz.de/
---
0:00 Sneak peek, intro
1:15 What is Vega-Lite?
5:39 The grammar of graphics
9:00 Using visualizations creatively
11:36 Vega vs Vega-Lite
16:03 ggplot2 and machine learning
18:39 Voyager and the challenges of scale
24:54 Model explainability and visualizations
31:24 Underrated topics: constraints and visualization theory
34:38 The challenge of metrics in deployment
36:54 In between aggregate statistics and individual examples
Links Discussed
Vega-Lite: https://vega.github.io/vega-lite/
Data analysis and statistics: an expository overview (Tukey and Wilk, 1966): https://dl.acm.org/doi/10.1145/1464291.1464366
Slope chart / slope graph: https://vega.github.io/vega-lite/examples/line_slope.html
Voyager: https://github.com/vega/voyager
Draco: https://github.com/uwdata/draco
Check out the transcription and discover more awesome ML projects:
http://wandb.me/gd-domink-moritz
---
Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
---
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Mar 18, 2021 • 49min
Cade Metz — The Stories Behind the Rise of AI
How Cade got access to the stories behind some of the biggest advancements in AI, and the dynamic playing out between leaders at companies like Google, Microsoft, and Facebook.
Cade Metz is a New York Times reporter covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites. His first book, "Genius Makers", tells the stories of the pioneers behind AI.
Get the book: http://bit.ly/GeniusMakers
Follow Cade on Twitter: https://twitter.com/CadeMetz/
And on Linkedin: https://www.linkedin.com/in/cademetz/
Topics discussed:
0:00 Sneak peek, intro
3:25 Audience and characters
7:18 *Spoiler alert* AGI
11:01 The book ends, but the story goes on
17:31 Overinflated claims in AI
23:12 DeepMind, OpenAI, and building AGI
29:02 Neuroscience, psychology, and outsiders
34:35 Early adopters of ML
38:34 WojNet: where is credit due?
42:45 The press covering AI
46:38 Aligning technology and need
Read the transcript and discover awesome ML projects:
http://wandb.me/cade-metz
Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Mar 11, 2021 • 56min
Dave Selinger — AI and the Next Generation of Security Systems
Learn why traditional home security systems tend to fail and how Dave’s love of tinkering and deep learning are helping him and the team at Deep Sentinel avoid those pitfalls. He also discusses the importance of combating racial bias by designing race-agnostic systems, and Deep Sentinel's approach to solving that problem.
Dave Selinger is the co-founder and CEO of Deep Sentinel, an intelligent crime prediction and prevention system that stops crime before it happens using deep learning vision techniques. Prior to founding Deep Sentinel, Dave co-founded RichRelevance, an AI recommendation company.
https://www.deepsentinel.com/
https://www.meetup.com/East-Bay-Tri-Valley-Machine-Learning-Meetup/
https://twitter.com/daveselinger
Topics covered:
0:00 Sneak peek, smart vs dumb cameras, intro
0:59 What is Deep Sentinel, how does it work?
6:00 Hardware, edge devices
10:40 OpenCV Fork, tinkering
16:18 ML meetups, climbing the AI research ladder
20:36 The challenge of safety-critical applications
27:03 New models, re-training, exhibitionists and voyeurs
31:17 How do you prove your cameras are better?
34:24 Angel investing in AI companies
38:00 Social responsibility with data
43:33 Combating bias with data systems
52:22 Biggest bottlenecks in production
Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Read the transcript and discover more awesome machine learning material here:
http://wandb.me/Dave-selinger-podcast
Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Mar 4, 2021 • 54min
Tim & Heinrich — Democratizing Reinforcement Learning Research
Since reinforcement learning requires hefty compute resources, it can be tough to keep up without a serious budget of your own. Find out how the team at Facebook AI Research (FAIR) is looking to increase access and level the playing field with the help of NetHack, an archaic rogue-like video game from the late 80s.
Links discussed:
The NetHack Learning Environment:
https://ai.facebook.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning/
Reinforcement learning, intrinsic motivation:
https://arxiv.org/abs/2002.12292
Knowledge transfer:
https://arxiv.org/abs/1910.08210
Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, at the University of Oxford.
https://twitter.com/_rockt
Heinrich Kuttler is an AI and machine learning researcher at Facebook AI Research (FAIR) and before that was a research engineer and team lead at DeepMind.
https://twitter.com/HeinrichKuttler
https://www.linkedin.com/in/heinrich-kuttler/
Topics covered:
0:00 A lack of reproducibility in RL
1:05 What is NetHack, and how did the idea come to be?
5:46 RL in Go vs. NetHack
11:04 Performance of vanilla agents, and what to optimize for
18:36 Transferring domain knowledge, source diving
22:27 Humans vs. machines: intrinsic learning
28:19 ICLR paper: exploration and RL strategies
35:48 The future of reinforcement learning
43:18 Going from supervised to reinforcement learning
45:07 Reproducibility in RL
50:05 Most underrated aspect of ML, biggest challenges
Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Feb 18, 2021 • 46min
Daphne Koller — Digital Biology and the Next Epoch of Science
From teaching at Stanford to co-founding Coursera, insitro, and Engageli, Daphne Koller reflects on the importance of education, giving back, and cross-functional research.
Daphne Koller is the founder and CEO of insitro, a company using machine learning to rethink drug discovery and development. She is a MacArthur Fellowship recipient, a member of the National Academy of Engineering, a member of the American Academy of Arts and Sciences, and was a Professor in the Department of Computer Science at Stanford University. In 2012, Daphne co-founded Coursera, one of the world's largest online education platforms. She is also a co-founder of Engageli, a digital platform designed to optimize student success.
https://www.insitro.com/
https://www.insitro.com/jobs
https://www.engageli.com/
https://www.coursera.org/
Follow Daphne on Twitter: https://twitter.com/DaphneKoller
https://www.linkedin.com/in/daphne-koller-4053a820/
Topics covered:
0:00 Giving back and intro
2:10 insitro's mission statement and Eroom's Law
3:21 The drug discovery process and how ML helps
10:05 Protein folding
15:48 From 2004 to now, what's changed?
22:09 On the availability of biology and vision datasets
26:17 Cross-functional collaboration at insitro
28:18 On teaching and founding Coursera
31:56 The origins of Engageli
36:38 Probabilistic graphical models
39:33 Most underrated topic in ML
43:43 Biggest day-to-day challenges
Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Feb 11, 2021 • 36min
Piero Molino — The Secret Behind Building Successful Open Source Projects
Piero shares the story of how Ludwig was created, as well as the ins and outs of how Ludwig works and the future of machine learning with no code.
Piero is a Staff Research Scientist in the Hazy Research group at Stanford University. He is a former founding member of Uber AI, where he created Ludwig, worked on applied projects (COTA, Graph Learning for Uber Eats, Uber’s Dialogue System), and published research on NLP, dialogue, visualization, graph learning, reinforcement learning, and computer vision.
Topics covered:
0:00 Sneak peek and intro
1:24 What is Ludwig, at a high level?
4:42 What is Ludwig doing under the hood?
7:11 No-code machine learning and data types
14:15 How Ludwig started
17:33 Model performance and underlying architecture
21:52 On Python in ML
24:44 Defaults and W&B integration
28:26 Perspective on NLP after 10 years in the field
31:49 Most underrated aspect of ML
33:30 Hardest part of deploying ML models in the real world
Learn more about Ludwig: https://ludwig-ai.github.io/ludwig-docs/
Piero's Twitter: https://twitter.com/w4nderlus7
Follow Piero on Linkedin: https://www.linkedin.com/in/pieromolino/?locale=en_US
Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Feb 5, 2021 • 49min
Rosanne Liu — Conducting Fundamental ML Research as a Nonprofit
How Rosanne is working to democratize AI research and improve diversity and fairness in the field by starting a nonprofit, after being a founding member of Uber AI Labs, doing lots of amazing research, and publishing papers at top conferences.
Rosanne is a machine learning researcher, and co-founder of ML Collective, a nonprofit organization for open collaboration and mentorship. Before that, she was a founding member of Uber AI. She has published research at NeurIPS, ICLR, ICML, Science, and other top venues. While at school she used neural networks to help discover novel materials and to optimize fuel efficiency in hybrid vehicles.
ML Collective: http://mlcollective.org/
Controlling Text Generation with Plug and Play Language Models: https://eng.uber.com/pplm/
LCA: Loss Change Allocation for Neural Network Training: https://eng.uber.com/research/lca-loss-change-allocation-for-neural-network-training/
Topics covered:
0:00 Sneak peek, Intro
1:53 The origin of ML Collective
5:31 Why a nonprofit, and who is MLC for?
14:30 LCA: Loss Change Allocation
18:20 Running an org: research vs. admin work
20:10 Advice for people trying to get published
24:15 On reading papers and the Intrinsic Dimension paper
36:25 NeurIPS: open collaboration
40:20 What is your reward function?
44:44 Underrated aspect of ML
47:22 How to get involved with MLC
Get our podcast on these other platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Jan 28, 2021 • 47min
Sean Gourley — NLP, National Defense, and Establishing Ground Truth
In this episode of Gradient Dissent, Primer CEO Sean Gourley and Lukas Biewald sit down to talk about NLP, working with vast amounts of information, and why it matters so much for national defense. They also chat about their experiences as second-time founders coming from data science backgrounds, and how that shapes the way they run their companies. We hope you enjoy this episode!
Sean Gourley is the founder and CEO of Primer, a natural language processing startup in San Francisco. Previously, he was CTO of Quid, an augmented intelligence company that he co-founded back in 2009. Prior to that, he worked on self-repairing nanocircuits at NASA Ames. Sean has a PhD in physics from Oxford, where his research as a Rhodes Scholar focused on graph theory, complex systems, and the mathematical patterns underlying modern war.
Primer: https://primer.ai/
Follow Sean on Twitter: https://twitter.com/sgourley
Topics Covered:
0:00 Sneak peek, intro
1:42 Primer's mission and purpose
4:29 The Diamond Age: how do we train machines to observe the world and help us understand it?
7:44 A self-writing Wikipedia
9:30 Being a second-time founder
11:26 Being a founder as a data scientist
15:44 Commercializing algorithms
17:54 Is GPT-3 worth the hype? The mind-blowing scale of transformers
23:00 AI safety, military and defense
29:20 Disinformation: does ML play a role?
34:55 Establishing ground truth and informational provenance
39:10 COVID misinformation, masks, division
44:07 Most underrated aspect of ML
45:09 Biggest bottlenecks in ML
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Jan 21, 2021 • 50min
Peter Wang — Anaconda, Python, and Scientific Computing
Peter Wang talks about his journey co-founding Anaconda and serving as its CEO, his perspective on the Python programming language, and its use for scientific computing.
Peter Wang has been developing commercial scientific computing and visualization software for over 15 years. He has extensive experience in software design and development across a broad range of areas, including 3D graphics, geophysics, large data simulation and visualization, financial risk modeling, and medical imaging.
Peter’s interests in the fundamentals of vector computing and interactive visualization led him to co-found Anaconda (formerly Continuum Analytics), where he leads the open-source and community innovation group.
As a creator of the PyData community and conferences, he devotes time and energy to growing the Python data science community, and to advocating for and teaching Python at conferences around the world. Peter holds a BA in Physics from Cornell University.
Follow Peter on Twitter: https://twitter.com/pwang
https://www.anaconda.com/
Intake: https://www.anaconda.com/blog/intake-...
https://pydata.org/
Scientific Data Management in the Coming Decade paper: https://arxiv.org/pdf/cs/0502008.pdf
Topics covered:
0:00 Intro: technology is not value-neutral; don't punt on ethics
1:30 What is Conda?
2:57 Peter's Story and Anaconda's beginning
6:45 Do you ever regret choosing Python?
9:39 On other programming languages
17:13 Scientific Data Management in the Coming Decade
21:48 Who are your customers?
26:24 The ML hierarchy of needs
30:02 The cybernetic era and Conway's Law
34:31 R vs. Python
42:19 Most underrated: ethics (don't punt)
46:50 Biggest bottlenecks: open source, Python
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery

Jan 14, 2021 • 1h 3min
Chris Anderson — Robocars, Drones, and WIRED Magazine
Chris shares his journey from playing in R.E.M. and getting interested in physics to leading WIRED magazine for 11 years. His fascination with robots led him to start a company that manufactures drones and to create a community democratizing self-driving cars.
Chris Anderson is the CEO of 3D Robotics, founder of the Linux Foundation Dronecode Project, and founder of the DIY Drones and DIY Robocars communities. From 2001 through 2012 he was the Editor-in-Chief of WIRED magazine. He's also the author of the New York Times bestsellers "The Long Tail," "Free," and "Makers: The New Industrial Revolution." In 2007 he was named to the Time 100, Time's list of the most influential people in the world.
Links discussed in this episode:
DIY Robocars: diyrobocars.com
Getting Started with Robocars: https://diyrobocars.com/2020/10/31/getting-started-with-robocars/
DIY Robotics Meet Up: https://www.meetup.com/DIYRobocars
Other Works
3DRobotics: https://www.3dr.com/
The Long Tail by Chris Anderson: https://www.amazon.com/Long-Tail-Future-Business-Selling/dp/1401309666/ref=sr_1_1?dchild=1&keywords=The+Long+Tail&qid=1610580178&s=books&sr=1-1
Interesting links Chris shared
OpenMV: https://openmv.io/
Intel Tracking Camera: https://www.intelrealsense.com/tracking-camera-t265/
Zumi Self-Driving Car Kit: https://www.robolink.com/zumi/
Possible Minds: Twenty-Five Ways of Looking at AI: https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997
Topics discussed:
0:00 sneak peek and intro
1:03 Battle of the R.E.M.s
3:35 A brief stint with physics
5:09 Becoming a journalist and the woes of being a modern physicist
9:25 WIRED in the aughts
12:13 Perspectives on "The Long Tail"
20:47 Getting into drones
25:08 "Take a smartphone, add wings"
28:07 How did you get to autonomous racing cars?
33:30 COVID and virtual environments
38:40 Chris's hope for Robocars
40:54 Robocar hardware, software, sensors
53:49 Path to the Singularity, regulations on drones
58:50 "the golden age of simulation"
1:00:22 Biggest challenge in deploying ML models
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon
Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices:
https://wandb.ai/gallery