

Gradient Dissent: Conversations on AI
Lukas Biewald
Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting-edge of AI and learn the intricacies of bringing models into production.
Episodes

Jan 21, 2021 • 50min
Peter Wang — Anaconda, Python, and Scientific Computing
Peter Wang talks about co-founding Anaconda and serving as its CEO, his perspective on the Python programming language, and Python's use in scientific computing.
Peter Wang has been developing commercial scientific computing and visualization software for over 15 years. He has extensive experience in software design and development across a broad range of areas, including 3D graphics, geophysics, large data simulation and visualization, financial risk modeling, and medical imaging.
Peter’s interests in the fundamentals of vector computing and interactive visualization led him to co-found Anaconda (formerly Continuum Analytics). Peter leads the open source and community innovation group.
As a creator of the PyData community and conferences, he devotes time and energy to growing the Python data science community and advocating and teaching Python at conferences around the world. Peter holds a BA in Physics from Cornell University.
Follow Peter on Twitter: https://twitter.com/pwang
https://www.anaconda.com/
Intake: https://www.anaconda.com/blog/intake-...
https://pydata.org/
Scientific Data Management in the Coming Decade paper: https://arxiv.org/pdf/cs/0502008.pdf
Topics covered:
0:00 (intro) Technology is not value neutral; Don't punt on ethics
1:30 What is Conda?
2:57 Peter's Story and Anaconda's beginning
6:45 Do you ever regret choosing Python?
9:39 On other programming languages
17:13 Scientific Data Management in the Coming Decade
21:48 Who are your customers?
26:24 The ML hierarchy of needs
30:02 The cybernetic era and Conway's Law
34:31 R vs. Python
42:19 Most underrated: Ethics - Don't Punt
46:50 biggest bottlenecks: open source, Python
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery

Jan 14, 2021 • 1h 3min
Chris Anderson — Robocars, Drones, and WIRED Magazine
Chris shares his journey from playing in R.E.M. and an early interest in physics to leading WIRED Magazine for 11 years. His fascination with robots led him to start a company that manufactures drones and to create a community democratizing self-driving cars.
Chris Anderson is the CEO of 3D Robotics, founder of the Linux Foundation Dronecode Project, and founder of the DIY Drones and DIY Robocars communities. From 2001 through 2012 he was the Editor in Chief of WIRED Magazine. He is also the author of the New York Times bestsellers The Long Tail, Free, and Makers: The New Industrial Revolution. In 2007 he was named to the Time 100 list of the most influential men and women in the world.
Links discussed in this episode:
DIY Robocars: diyrobocars.com
Getting Started with Robocars: https://diyrobocars.com/2020/10/31/getting-started-with-robocars/
DIY Robotics Meet Up: https://www.meetup.com/DIYRobocars
Other Works
3DRobotics: https://www.3dr.com/
The Long Tail by Chris Anderson: https://www.amazon.com/Long-Tail-Future-Business-Selling/dp/1401309666/ref=sr_1_1?dchild=1&keywords=The+Long+Tail&qid=1610580178&s=books&sr=1-1
Interesting links Chris shared
OpenMV: https://openmv.io/
Intel Tracking Camera: https://www.intelrealsense.com/tracking-camera-t265/
Zumi Self-Driving Car Kit: https://www.robolink.com/zumi/
Possible Minds: Twenty-Five Ways of Looking at AI: https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997
Topics discussed:
0:00 sneak peek and intro
1:03 Battle of the REM's
3:35 A brief stint with Physics
5:09 Becoming a journalist and the woes of being a modern physicist
9:25 WIRED in the aughts
12:13 perspectives on "The Long Tail"
20:47 getting into drones
25:08 "Take a smartphone, add wings"
28:07 How did you get to autonomous racing cars?
33:30 COVID and virtual environments
38:40 Chris's hope for Robocars
40:54 Robocar hardware, software, sensors
53:49 path to Singularity/ regulations on drones
58:50 "the golden age of simulation"
1:00:22 biggest challenge in deploying ML models
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on these other platforms:
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery

Dec 4, 2020 • 46min
Adrien Treuille — Building Blazingly Fast Tools That People Love
Adrien shares his journey from making games that advance science (Eterna, Foldit) to creating Streamlit, an open-source app framework enabling ML and data practitioners to easily build powerful, interactive apps in a few hours.
Adrien is co-founder and CEO of Streamlit, an open-source app framework that helps create beautiful data apps in hours in pure Python. Dr. Treuille has been a Zoox VP, Google X project lead, and Computer Science faculty at Carnegie Mellon. He has won numerous scientific awards, including the MIT TR35. Adrien has been featured in the documentaries What Will the Future Be Like by PBS/NOVA, and Lo and Behold by Werner Herzog.
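For a flavor of the kind of pure-Python data app discussed in the episode, here is a minimal sketch using Streamlit's public API; the app, file name, and data are invented for illustration and are not from the episode.

```python
# app.py -- a minimal Streamlit sketch (run with: streamlit run app.py)
import numpy as np
import pandas as pd
import streamlit as st

st.title("Random walk explorer")

# Interactive widget: the script re-runs whenever the slider moves
n_steps = st.slider("Number of steps", min_value=10, max_value=1000, value=200)

# Generate some toy data and render it as an interactive chart
walk = pd.DataFrame({"position": np.random.randn(n_steps).cumsum()})
st.line_chart(walk)
```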
https://twitter.com/myelbows
https://www.linkedin.com/in/adrien-treuille-52215718/
https://www.streamlit.io/
https://eternagame.org/
https://fold.it/
Topics covered:
0:00 sneak peek/Streamlit
0:47 intro
1:21 from aspiring guitar player to machine learning
4:16 Foldit - games that train humans
10:08 Eterna - another game and its relation to ML
16:15 Research areas as a professor at Carnegie Mellon
18:07 the origin of Streamlit
23:53 evolution of Streamlit: data science-ing a pivot
30:20 on programming languages
32:20 what’s next for Streamlit
37:34 On meditation and work/life
41:40 Underrated aspect of Machine Learning
43:07 Biggest challenge in deploying ML in the real world
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on YouTube, Apple, Spotify, and Google!
YouTube: http://wandb.me/youtube
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/google-podcasts
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work:
http://wandb.me/salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.

Nov 20, 2020 • 47min
Peter Norvig – Singularity Is in the Eye of the Beholder
We're thrilled to have Peter Norvig join us to talk about the evolution of deep learning, his industry-defining book, his work at Google, and what he thinks the future holds for machine learning research.
Peter Norvig is a Director of Research at Google Inc.; previously he directed Google's core search algorithms group. He is co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-teacher of an Artificial Intelligence class that signed up 160,000 students. Prior to his work at Google, Norvig was NASA's chief computer scientist.
Peter's website:
https://norvig.com/
Topics covered:
0:00 singularity is in the eye of the beholder
0:32 introduction
1:09 project Euler
2:42 advent of code/pytudes
4:55 new sections in the new version of his book
10:32 The Unreasonable Effectiveness of Data paper, 15 years later
14:44 what advice would you give to a young researcher?
16:03 computing power in the evolution of deep learning
19:19 what's been surprising in the development of AI?
24:21 from AlphaGo to human-like intelligence
28:46 What in AI has been surprisingly hard or easy?
32:11 synthetic data and language
35:16 singularity is in the eye of the beholder
38:43 the future of python in ML and why he used it in his book
43:00 underrated topic in ML and bottlenecks in production
Visit our podcasts homepage for transcripts and more episodes!
https://www.wandb.com/podcast
Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: https://tiny.cc/GD_Google
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
https://tiny.cc/wb-salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
https://bit.ly/wb-slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://wandb.ai/gallery

Nov 13, 2020 • 35min
Robert Nishihara — The State of Distributed Computing in ML
The story of Ray, and what led Robert from being a reinforcement learning researcher to creating open-source tools for machine learning and beyond.
Robert is currently working on Ray, a high-performance distributed execution framework for AI applications. He studied mathematics at Harvard. He’s broadly interested in applied math, machine learning, and optimization, and was a member of the Statistical AI Lab, the AMPLab/RISELab, and the Berkeley AI Research Lab at UC Berkeley.
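As a rough illustration of the programming model discussed in the episode, here is a minimal sketch of Ray's task API; the function and numbers are placeholders, not an example from the conversation.

```python
# A minimal Ray sketch: turn a Python function into a remote task
# and run several invocations in parallel on a local Ray runtime.
import ray

ray.init()  # start a local Ray runtime

@ray.remote
def square(x):
    return x * x

# .remote() returns futures immediately; ray.get() blocks for the results
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```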
robertnishihara.com
https://anyscale.com/
https://github.com/ray-project/ray
https://twitter.com/robertnishihara
https://www.linkedin.com/in/robert-nishihara-b6465444/
Topics covered:
0:00 sneak peek + intro
1:09 what is Ray?
3:07 Spark and Ray
5:48 reinforcement learning
8:15 non-ml use cases of ray
10:00 RL in the real world and common uses of Ray
13:49 Python in ML
16:38 from grad school to ML tools company
20:40 pulling product requirements in surprising directions
23:25 how to manage a large open source community
27:05 Ray Tune
29:35 where do you see bottlenecks in production?
31:39 An underrated aspect of Machine Learning
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google
Subscribe to our YouTube channel for videos of these podcasts and more Machine learning-related videos:
https://www.youtube.com/c/WeightsBiases
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wb-slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery

Oct 29, 2020 • 59min
Ines & Sofie — Building Industrial-Strength NLP Pipelines
Sofie and Ines walk us through how the new spaCy library helps build end-to-end, SOTA natural language processing workflows.
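For listeners new to the library, here is a minimal sketch of a spaCy pipeline; it assumes the small English model has been downloaded separately, and the example sentence is invented for illustration.

```python
# A minimal spaCy sketch: load a pretrained pipeline and inspect
# named entities, part-of-speech tags, and dependencies.
# Assumes: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Explosion builds spaCy, an open-source NLP library written in Python.")

for ent in doc.ents:
    print(ent.text, ent.label_)                 # named entities

for token in doc:
    print(token.text, token.pos_, token.dep_)   # POS tags and dependencies
```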
Ines Montani is the co-founder of Explosion AI, a digital studio specializing in tools for AI technology. She's a core developer of spaCy, one of the leading open-source libraries for Natural Language Processing in Python, and of Prodigy, a new data annotation tool powered by active learning. Before founding Explosion AI, she was a freelance front-end developer and strategist.
https://twitter.com/_inesmontani
Sofie Van Landeghem is a Natural Language Processing and Machine Learning engineer at Explosion.ai. She is a Software Engineer at heart, with an absurd love for quality assurance and testing, introducing proper levels of abstraction, and ensuring code robustness and modularity.
She has more than 12 years of experience in Natural Language Processing and Machine Learning, including in the pharmaceutical industry and the food industry.
https://twitter.com/oxykodit
https://spacy.io/
https://prodi.gy/
https://thinc.ai/
https://explosion.ai/
Topics covered:
0:00 Sneak peek
0:35 intro
2:29 How spaCy was started
6:11 Business model, open source
9:55 What was spaCy designed to solve?
12:23 advances in NLP and modern practices in industry
17:19 what differentiates spaCy from a more research focused NLP library?
19:28 Multi-lingual/domain specific support
23:52 spaCy V3 configuration
28:16 Thoughts on Python, Cython, and other programming languages for ML
33:45 Making things clear and reproducible
37:30 prodigy and getting good training data
44:09 most underrated aspect of ML
51:00 hardest part of putting models into production
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery

Oct 15, 2020 • 37min
Daeil Kim — The Unreasonable Effectiveness of Synthetic Data
Supercharging computer vision model performance by generating years of training data in minutes.
Daeil Kim is the co-founder and CEO of AI.Reverie (https://aireverie.com/), a startup that specializes in creating high-quality synthetic training data for computer vision algorithms. Before that, he was a senior data scientist at the New York Times, and before that he got his PhD in computer science from Brown University, focusing on machine learning and Bayesian statistics. In this episode, he talks about tools that will advance machine learning progress and about synthetic data.
https://twitter.com/daeil
Topics covered:
0:00 Diversifying content
0:23 Intro+bio
1:00 From liberal arts to synthetic data
8:48 What is synthetic data?
11:24 Real world examples of synthetic data
16:16 Understanding performance gains using synthetic data
21:32 The future of Synthetic data and AI.Reverie
23:21 The composition of people at AI.reverie and ML
28:28 The evolution of ML tools and systems that Daeil uses
33:16 Most underrated aspect of ML and common misconceptions
34:42 Biggest challenge in making synthetic data work in the real world
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery

Oct 1, 2020 • 1h 19min
Joaquin Candela — Definitions of Fairness
Joaquin chats about scaling and democratizing AI at Facebook, and about understanding fairness and algorithmic bias.
---
Joaquin Quiñonero Candela is Distinguished Tech Lead for Responsible AI at Facebook, where he aims to understand and mitigate the risks and unintended consequences of the widespread use of AI across Facebook. He was previously Director of Society and AI Lab and Director of Engineering for Applied ML. Before joining Facebook, Joaquin taught at the University of Cambridge, and worked at Microsoft Research.
Connect with Joaquin:
Personal website: https://quinonero.net/
Twitter: https://twitter.com/jquinonero
LinkedIn: https://www.linkedin.com/in/joaquin-qui%C3%B1onero-candela-440844/
---
Topics Discussed:
0:00 Intro, sneak peek
0:53 Looking back at building and scaling AI at Facebook
10:31 How do you ship a model every week?
15:36 Getting buy-in to use a system
19:36 More on ML tools
24:01 Responsible AI at Facebook
38:33 How to engage with those affected by ML decisions
41:54 Approaches to fairness
53:10 How to know things are built right
59:34 Diversity, inclusion, and AI
1:14:21 Underrated aspect of AI
1:16:43 Hardest thing when putting models into production
Transcript:
http://wandb.me/gd-joaquin-candela
Links Discussed:
Race and Gender (2019): https://arxiv.org/pdf/1908.06165.pdf
Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning (2019): https://arxiv.org/abs/1912.10389
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification (2018): http://proceedings.mlr.press/v81/buolamwini18a.html
---
Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack
Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected

Sep 29, 2020 • 51min
Richard Socher — The Challenges of Making ML Work in the Real World
Richard Socher, former Chief Scientist at Salesforce, joins us to talk about The AI Economist, language models for protein generation, and the biggest challenges of making ML work in the real world.
Richard Socher was the Chief Scientist (EVP) at Salesforce, where he led teams working on fundamental research (einstein.ai/), applied research, product incubation, CRM search, customer service automation, and a cross-product AI platform for unstructured and structured data. Previously, he was an adjunct professor in Stanford's computer science department and the founder and CEO/CTO of MetaMind (www.metamind.io/), which was acquired by Salesforce in 2016. He got his PhD from the CS Department at Stanford (www.cs.stanford.edu/) in 2014. He likes paramotoring and water adventures, traveling, and photography. More info:
- Forbes article with more about Richard's background: https://www.forbes.com/sites/gilpress/2017/05/01/emerging-artificial-intelligence-ai-leaders-richard-socher-salesforce/
- CS224n - NLP with Deep Learning (http://cs224n.stanford.edu/), the class Richard used to teach.
- TEDx talk (https://www.youtube.com/watch?v=8cmx7V4oIR8) about where AI is today and where it's going.
Research:
Google Scholar (https://scholar.google.com/citations?user=FaOcyfMAAAAJ&hl=en)
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies
arXiv (https://arxiv.org/abs/2004.13332), blog (https://blog.einstein.ai/the-ai-economist/), short video (https://www.youtube.com/watch?v=4iQUcGyQhdA), Q&A (https://salesforce.com/company/news-press/stories/2020/4/salesforce-ai-economist/), press: VentureBeat (https://venturebeat.com/2020/04/29/salesforces-ai-economist-taps-reinforcement-learning-to-generate-optimal-tax-policies/), TechCrunch (https://techcrunch.com/2020/04/29/salesforce-researchers-are-working-on-an-ai-economist-for-more-equitable-tax-policy/)
ProGen: Language Modeling for Protein Generation:
bioRxiv (https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2), blog (https://blog.einstein.ai/progen/)
Dye-sensitized solar cells under ambient light powering machine learning: towards autonomous smart sensors for the internet of things:
Chemical Science 2020, Issue 11. Paper (https://pubs.rsc.org/en/content/articlelanding/2020/sc/c9sc06145b#!divAbstract)
CTRL: A Conditional Transformer Language Model for Controllable Generation:
arXiv (https://arxiv.org/abs/1909.05858), pretrained code and fine-tuning (https://github.com/salesforce/ctrl), blog (https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/)
Genie: a generator of natural language semantic parsers for virtual assistant commands:
PLDI 2019 paper (https://almond-static.stanford.edu/papers/genie-pldi19.pdf), https://almond.stanford.edu
Topics Covered:
0:00 intro
0:42 the AI economist
7:08 the objective function and Gini Coefficient
12:13 on growing up in Eastern Germany and cultural differences
15:02 Language models for protein generation (ProGen)
27:53 CTRL: conditional transformer language model for controllable generation
37:52 Businesses vs Academia
40:00 What ML applications are important to salesforce
44:57 an underrated aspect of machine learning
48:13 Biggest challenge in making ML work in the real world
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on Soundcloud, Apple, Spotify, and Google!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google
Weights and Biases makes developer tools for deep learning.
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon
Join our community of ML practitioners:
http://bit.ly/wb-slack
Our gallery features curated machine learning reports by ML researchers.
https://app.wandb.ai/gallery

Sep 17, 2020 • 60min
Zack Chase Lipton — The Medical Machine Learning Landscape
How Zack went from musician to professor, how medical applications of machine learning are developing, and the challenges of counteracting bias in real-world applications.
Zachary Chase Lipton is an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University.
His research spans core machine learning methods and their social impact and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data.
He is the founder of the Approximately Correct (approximatelycorrect.com) blog and the creator of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks.
Zack’s blog - http://approximatelycorrect.com/
Detecting and Correcting for Label Shift with Black Box Predictors: https://arxiv.org/pdf/1802.03916.pdf
Algorithmic Fairness from a Non-Ideal Perspective: https://www.datascience.columbia.edu/data-good-zachary-lipton-lecture
Jonas Peters' lectures on causality:
https://youtu.be/zvrcyqcN9Wo
Topics covered:
0:00 Sneak peek: Is this a problem worth solving?
0:38 Intro
1:23 Zack’s journey from being a musician to a professor at CMU
4:45 Applying machine learning to medical imaging
10:14 Exploring new frontiers: the most impressive deep learning applications for healthcare
12:45 Evaluating the models – Are they ready to be deployed in hospitals for use by doctors?
19:16 Capturing the signals in evolving representations of healthcare data
27:00 How does the data we capture affect the predictions we make
30:40 Distinguishing between associations and correlations in data – Horror vs romance movies
34:20 The positive effects of augmenting datasets with counterfactually flipped data
39:25 Algorithmic fairness in the real world
41:03 What does it mean to say your model isn’t biased?
43:40 Real world implications of decisions to counteract model bias
49:10 The pragmatic approach to counteracting bias in a non-ideal world
51:24 An underrated aspect of machine learning
55:11 Why defining the problem is the biggest challenge for machine learning in the real world
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on YouTube, Soundcloud, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wandb-forum
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery