

Gradient Dissent: Conversations on AI
Lukas Biewald
Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting-edge of AI and learn the intricacies of bringing models into production.
Episodes
Mentioned books

Oct 16, 2020 • 37min
Daeil Kim — The Unreasonable Effectiveness of Synthetic Data
Supercharging computer vision model performance by generating years of training data in minutes.
Daeil Kim is the co-founder and CEO of AI.Reverie(https://aireverie.com/), a startup that specializes in creating high-quality synthetic training data for computer vision algorithms. Before that, he was a senior data scientist at The New York Times, and before that he earned his PhD in computer science from Brown University, focusing on machine learning and Bayesian statistics. In this episode, he talks about tools that will advance machine learning progress, and about synthetic data.
https://twitter.com/daeil
Topics covered:
0:00 Diversifying content
0:23 Intro+bio
1:00 From liberal arts to synthetic data
8:48 What is synthetic data?
11:24 Real world examples of synthetic data
16:16 Understanding performance gains using synthetic data
21:32 The future of synthetic data and AI.Reverie
23:21 The composition of people at AI.Reverie and ML
28:28 The evolution of ML tools and systems that Daeil uses
33:16 Most underrated aspect of ML and common misconceptions
34:42 Biggest challenge in making synthetic data work in the real world
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on Apple, Spotify, and Google!
Apple Podcasts: bit.ly/2WdrUvI
Spotify: bit.ly/2SqtadF
Google: tiny.cc/GD_Google
We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it!
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
tiny.cc/wb-salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
bit.ly/wb-slack
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
app.wandb.ai/gallery

Oct 1, 2020 • 1h 19min
Joaquin Candela — Definitions of Fairness
Joaquin chats about scaling and democratizing AI at Facebook, while understanding fairness and algorithmic bias.
---
Joaquin Quiñonero Candela is Distinguished Tech Lead for Responsible AI at Facebook, where he aims to understand and mitigate the risks and unintended consequences of the widespread use of AI across Facebook. He was previously Director of the Society and AI Lab and Director of Engineering for Applied ML. Before joining Facebook, Joaquin taught at the University of Cambridge and worked at Microsoft Research.
Connect with Joaquin:
Personal website: https://quinonero.net/
Twitter: https://twitter.com/jquinonero
LinkedIn: https://www.linkedin.com/in/joaquin-qui%C3%B1onero-candela-440844/
---
Topics Discussed:
0:00 Intro, sneak peek
0:53 Looking back at building and scaling AI at Facebook
10:31 How do you ship a model every week?
15:36 Getting buy-in to use a system
19:36 More on ML tools
24:01 Responsible AI at Facebook
38:33 How to engage with those affected by ML decisions
41:54 Approaches to fairness
53:10 How to know things are built right
59:34 Diversity, inclusion, and AI
1:14:21 Underrated aspect of AI
1:16:43 Hardest thing when putting models into production
Transcript:
http://wandb.me/gd-joaquin-candela
Links Discussed:
Race and Gender (2019): https://arxiv.org/pdf/1908.06165.pdf
Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning (2019): https://arxiv.org/abs/1912.10389
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification (2018): http://proceedings.mlr.press/v81/buolamwini18a.html
---
Get our podcast on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google Podcasts: http://wandb.me/google-podcasts
YouTube: http://wandb.me/youtube
Soundcloud: http://wandb.me/soundcloud
Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning:
http://wandb.me/slack
Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more:
https://wandb.ai/fully-connected

Sep 29, 2020 • 51min
Richard Socher — The Challenges of Making ML Work in the Real World
Richard Socher, ex-Chief Scientist at Salesforce, joins us to talk about The AI Economist, NLP for protein generation, and the biggest challenge in making ML work in the real world.
Richard Socher was the Chief Scientist (EVP) at Salesforce, where he led teams working on fundamental research(einstein.ai/), applied research, product incubation, CRM search, customer service automation, and a cross-product AI platform for unstructured and structured data. Previously, he was an adjunct professor at Stanford’s computer science department and the founder and CEO/CTO of MetaMind(www.metamind.io/), which was acquired by Salesforce in 2016. In 2014, he earned his PhD in the CS Department(www.cs.stanford.edu/) at Stanford. He likes paramotoring and water adventures, traveling, and photography. More info:
- Forbes article(https://www.forbes.com/sites/gilpress/2017/05/01/emerging-artificial-intelligence-ai-leaders-richard-socher-salesforce/) with more info about Richard's bio.
- CS224n - NLP with Deep Learning(http://cs224n.stanford.edu/), the class Richard used to teach.
- TEDx talk(https://www.youtube.com/watch?v=8cmx7V4oIR8) about where AI is today and where it's going.
Research:
Google Scholar Link(https://scholar.google.com/citations?user=FaOcyfMAAAAJ&hl=en)
The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies
Arxiv link(https://arxiv.org/abs/2004.13332), blog(https://blog.einstein.ai/the-ai-economist/), short video(https://www.youtube.com/watch?v=4iQUcGyQhdA), Q&A(https://salesforce.com/company/news-press/stories/2020/4/salesforce-ai-economist/), Press: VentureBeat(https://venturebeat.com/2020/04/29/salesforces-ai-economist-taps-reinforcement-learning-to-generate-optimal-tax-policies/), TechCrunch(https://techcrunch.com/2020/04/29/salesforce-researchers-are-working-on-an-ai-economist-for-more-equitable-tax-policy/)
ProGen: Language Modeling for Protein Generation:
bioRxiv link(https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2), blog(https://blog.einstein.ai/progen/)
Dye-sensitized solar cells under ambient light powering machine learning: towards autonomous smart sensors for the internet of things
Issue 11, Chemical Science 2020. Paper link(https://pubs.rsc.org/en/content/articlelanding/2020/sc/c9sc06145b#!divAbstract)
CTRL: A Conditional Transformer Language Model for Controllable Generation:
Arxiv link(https://arxiv.org/abs/1909.05858), code pre-trained and fine-tuning(https://github.com/salesforce/ctrl), blog(https://blog.einstein.ai/introducing-a-conditional-transformer-language-model-for-controllable-generation/)
Genie: a generator of natural language semantic parsers for virtual assistant commands:
PLDI 2019 pdf link(https://almond-static.stanford.edu/papers/genie-pldi19.pdf), https://almond.stanford.edu
Topics Covered:
0:00 Intro
0:42 The AI Economist
7:08 The objective function and Gini coefficient
12:13 On growing up in Eastern Germany and cultural differences
15:02 Language models for protein generation (ProGen)
27:53 CTRL: conditional transformer language model for controllable generation
37:52 Businesses vs academia
40:00 What ML applications are important to Salesforce
44:57 An underrated aspect of machine learning
48:13 Biggest challenge in making ML work in the real world
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on Soundcloud, Apple, Spotify, and Google!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Google: http://tiny.cc/GD_Google
Weights and Biases makes developer tools for deep learning.
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon
Join our community of ML practitioners:
http://bit.ly/wb-slack
Our gallery features curated machine learning reports by ML researchers.
https://app.wandb.ai/gallery

Sep 17, 2020 • 60min
Zack Chase Lipton — The Medical Machine Learning Landscape
How Zack went from musician to professor, how medical applications of machine learning are developing, and the challenges of counteracting bias in real-world applications.
Zachary Chase Lipton is an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University.
His research spans core machine learning methods and their social impact, and addresses diverse application areas including clinical medicine and natural language processing. His current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data.
He is the founder of the Approximately Correct (approximatelycorrect.com) blog and the creator of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks.
Zack’s blog - http://approximatelycorrect.com/
Detecting and Correcting for Label Shift with Black Box Predictors: https://arxiv.org/pdf/1802.03916.pdf
Algorithmic Fairness from a Non-Ideal Perspective: https://www.datascience.columbia.edu/data-good-zachary-lipton-lecture
Jonas Peters’ lectures on causality:
https://youtu.be/zvrcyqcN9Wo
0:00 Sneak peek: Is this a problem worth solving?
0:38 Intro
1:23 Zack’s journey from being a musician to a professor at CMU
4:45 Applying machine learning to medical imaging
10:14 Exploring new frontiers: the most impressive deep learning applications for healthcare
12:45 Evaluating the models – Are they ready to be deployed in hospitals for use by doctors?
19:16 Capturing the signals in evolving representations of healthcare data
27:00 How does the data we capture affect the predictions we make
30:40 Distinguishing between associations and correlations in data – Horror vs romance movies
34:20 The positive effects of augmenting datasets with counterfactually flipped data
39:25 Algorithmic fairness in the real world
41:03 What does it mean to say your model isn’t biased?
43:40 Real world implications of decisions to counteract model bias
49:10 The pragmatic approach to counteracting bias in a non-ideal world
51:24 An underrated aspect of machine learning
55:11 Why defining the problem is the biggest challenge for machine learning in the real world
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on YouTube, Soundcloud, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:
http://tiny.cc/wb-salon
Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning:
http://bit.ly/wandb-forum
Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
https://app.wandb.ai/gallery

Sep 9, 2020 • 44min
Anthony Goldbloom — How to Win Kaggle Competitions
Anthony Goldbloom is the founder and CEO of Kaggle. In 2011 and 2012, Forbes Magazine named Anthony one of the 30 Under 30 in technology. In 2011, Fast Company featured him as one of the innovative thinkers who are changing the future of business.
He and Lukas discuss the differences between the strategies that do well in Kaggle competitions, in academia, and in production. They also discuss his 2016 TED Talk through the lens of 2020, as well as frameworks and languages.
Topics Discussed:
0:00 Sneak Peek
0:20 Introduction
0:45 Methods used in Kaggle competitions vs mainstream academia
2:30 Feature engineering
3:55 Kaggle Competitions now vs 10 years ago
8:35 Data augmentation strategies
10:06 Overfitting in Kaggle Competitions
12:53 How to not overfit
14:11 Kaggle competitions vs the real world
18:15 Getting into ML through Kaggle
22:03 Other Kaggle products
25:48 Favorite underappreciated kernel or dataset
28:27 Python & R
32:03 Frameworks
35:15 2016 TED Talk through the lens of 2020
37:54 Reinforcement Learning
38:43 What’s the topic in ML that people don’t talk about enough?
42:02 Where are the biggest bottlenecks in deploying ML software?
Check out Kaggle: https://www.kaggle.com/
Follow Anthony on Twitter: https://twitter.com/antgoldbloom
Watch his 2016 TED Talk: https://www.ted.com/talks/anthony_goldbloom_the_jobs_we_ll_lose_to_machines_and_the_ones_we_won_t
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
Get our podcast on Soundcloud, Apple, and Spotify!
Soundcloud: https://bit.ly/2YnGjIq
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
* Blog: https://www.wandb.com/articles
* Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
* Join our community of ML practitioners working on interesting problems - https://www.wandb.com/ml-community
Host: Lukas Biewald - https://twitter.com/l2k
Producer: Lavanya Shukla - https://twitter.com/lavanyaai
Editor: Cayla Sharp - http://caylasharp.com/

Sep 2, 2020 • 35min
Suzana Ilić — Cultivating Machine Learning Communities
👩💻Today our guest is Suzana Ilić!
Suzana is a founder of Machine Learning Tokyo, a nonprofit organization dedicated to democratizing machine learning. The group is a team of ML engineers and researchers, and a community of more than 3,000 people.
Machine Learning Tokyo: https://mltokyo.ai/
Follow Suzana on Twitter: https://twitter.com/suzatweet
Check out our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
👩🏼🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum
🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/

Aug 25, 2020 • 51min
Jeremy Howard — The Story of fast.ai and Why Python Is Not the Future of ML
Jeremy Howard is a founding researcher at fast.ai, a research institute dedicated to making Deep Learning more accessible. Previously, he was the CEO and Founder at Enlitic, an advanced machine learning company in San Francisco, California.
Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on "Jobs For The Machines."
Howard advised Khosla Ventures as their Data Strategist, identifying the biggest opportunities for investing in data-driven startups and mentoring their portfolio companies to build data-driven businesses. Howard was the founding CEO of two successful Australian startups, FastMail and Optimal Decisions Group. Before that, he spent eight years in management consulting, at McKinsey & Company and A.T. Kearney.
TOPICS COVERED:
0:00 Introduction
0:52 Dad things
2:40 The story of fast.ai
4:57 How the courses have evolved over time
9:24 Jeremy’s top-down approach to teaching
13:02 From fast.ai the course to fast.ai the library
15:08 Designing v2 of the library from the ground up
21:44 The ingenious type dispatch system that powers fast.ai
25:52 Were you able to realize the vision behind v2 of the library?
28:05 Is it important to you that fast.ai is used by everyone in the world, beyond the context of learning?
29:37 Real world applications of fast.ai, including animal husbandry
35:08 Staying ahead of the new developments in the field
38:50 A bias towards learning by doing
40:02 What’s next for fast.ai
40:35 Python is not the future of Machine Learning
43:58 One underrated aspect of machine learning
45:25 Biggest challenge of machine learning in the real world
Follow Jeremy on Twitter:
https://twitter.com/jeremyphoward
Links:
Deep learning R&D & education: http://fast.ai
Software: http://docs.fast.ai
Book: http://up.fm/book
Course: http://course.fast.ai
Papers:
The business impact of deep learning
https://dl.acm.org/doi/10.1145/2487575.2491127
De-identification Methods for Open Health Data
https://www.jmir.org/2012/1/e33/
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
🔊 Get our podcast on YouTube, Apple, and Spotify!
YouTube: https://www.youtube.com/c/WeightsBiases
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
👩🏼🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum
🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/

Aug 12, 2020 • 45min
Anantha Kancherla — Building Level 5 Autonomous Vehicles
As VP of Engineering, Software at Level 5, Lyft’s autonomous vehicle program, Anantha Kancherla has a bird's-eye view of what it takes to make self-driving cars work in the real world. He previously worked on Windows at Microsoft, focusing on DirectX, graphics, and UI; worked on Facebook’s mobile News Feed and core mobile experiences; and led the collaboration efforts at Dropbox, launching Dropbox Paper and improving core collaboration functionality in Dropbox.
He and Lukas dive into the challenges of working on large projects and how to approach breaking down a major project into pieces, tracking progress and addressing bugs.
Check out Lyft’s Self-Driving Website:
https://self-driving.lyft.com/
And this article on building the self-driving team at Lyft:
https://medium.com/lyftlevel5/going-from-zero-to-sixty-building-lyfts-self-driving-software-team-1ac693800588
Follow Lyft Level 5 on Twitter:
https://twitter.com/LyftLevel5
Topics covered:
0:00 Sharp Knives
0:44 Introduction
1:07 Breaking down a big goal
8:15 Breaking down Metrics
10:50 Allocating Resources
12:40 Interventions
13:27 What part still has lots of room for improvement?
14:25 Various ways of deploying models
15:30 Rideshare
15:57 Infrastructure, updates
17:28 Model versioning
19:16 Model improvement goals
22:42 Unit testing
25:12 Interactions of models
26:30 Improvements in data vs models
29:50 Finding the right data
30:38 Deploying models into production
32:17 Feature drift
34:20 When to file bug tickets
37:25 Processes and growth
40:56 Underrated aspect
42:34 Biggest challenges
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF

Aug 5, 2020 • 55min
Bharath Ramsundar — Deep Learning for Molecules and Medicine Discovery
Bharath created the deepchem.io open-source project to grow the open-source community around deep learning for drug discovery, co-created the moleculenet.ai benchmark suite to facilitate development of molecular algorithms, and more. Bharath’s graduate education was supported by a Hertz Fellowship, the most selective graduate fellowship in the sciences. Bharath is the lead author of “TensorFlow for Deep Learning: From Linear Regression to Reinforcement Learning”, a developer’s introduction to modern machine learning, published with O’Reilly Media.
Today, Bharath is focused on designing the decentralized protocols that will unlock data and AI to create the next stage of the internet. He received a BA and BS from UC Berkeley in EECS and Mathematics and was valedictorian of his graduating class in mathematics. He did his PhD in computer science at Stanford University, where he studied the application of deep learning to problems in drug discovery.
Follow Bharath on Twitter and GitHub:
https://twitter.com/rbhar90
rbharath.github.io
Check out some of his projects:
https://deepchem.io/
https://moleculenet.ai/
https://scholar.google.com/citations?user=LOdVDNYAAAAJ&hl=en&oi=ao
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
👩🏼🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum
🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/

Jul 29, 2020 • 43min
Chip Huyen — ML Research and Production Pipelines
Chip Huyen is a writer and computer scientist currently working at a startup that focuses on machine learning production pipelines. Previously, she’s worked at NVIDIA, Netflix, and Primer. She helped launch Coc Coc, Vietnam’s second most popular web browser, with 20+ million monthly active users. Before all of that, she was a best-selling author and traveled the world.
Chip graduated from Stanford, where she created and taught the course on TensorFlow for Deep Learning Research.
Check out Chip's recent article on ML Tools: https://huyenchip.com/2020/06/22/mlops.html
Follow Chip on Twitter: https://twitter.com/chipro
And on her Website: https://huyenchip.com/
Visit our podcasts homepage for transcripts and more episodes!
www.wandb.com/podcast
🔊 Get our podcast on Apple and Spotify!
Apple Podcasts: https://bit.ly/2WdrUvI
Spotify: https://bit.ly/2SqtadF
👩🏼🚀Weights and Biases:
We’re always free for academics and open source projects. Email carey@wandb.com with any questions or feature suggestions.
- Blog: https://www.wandb.com/articles
- Gallery: See what you can create with W&B - https://app.wandb.ai/gallery
- Continue the conversation on our slack community - http://bit.ly/wandb-forum
🎙Host: Lukas Biewald - https://twitter.com/l2k
👩🏼💻Producer: Lavanya Shukla - https://twitter.com/lavanyaai
📹Editor: Cayla Sharp - http://caylasharp.com/


