Vanishing Gradients

Hugo Bowne-Anderson
Mar 9, 2022 • 1h 44min

Episode 4: Machine Learning at T-Mobile

Hugo speaks with Heather Nolis, Principal Machine Learning Engineer at T-Mobile, about what data science, machine learning, and AI look like at T-Mobile, along with Heather's path from a software development intern there to principal ML engineer running a team of 15.

They talk about:
* how to build a DS culture from scratch and what executive-level support looks like, as well as how to demonstrate machine learning value early on, from a shark-tank-style pitch night to the initial investment through to the POC and building out the function;
* all the great work they do with R and the Tidyverse in production;
* what it's like to be a lesbian in tech, and what it was like to discover she was autistic and how that impacted her work;
* how to measure and demonstrate success and ROI for the org;
* some massive data science fails!;
* how to deal with execs wanting you to use the latest GPT-X in a fragmented tooling landscape;
* how to use the simplest technology to deliver the most value.

Finally, the team just hired their first full-time ethicist, and they speak about how ethics can be embedded in a team and across an institution.

Links
* Put R in prod (https://putrinprod.com/): tools and guides to put R models into production
* Enterprise Web Services with Neural Networks Using R and TensorFlow (https://medium.com/tmobile-tech/enterprise-web-services-with-neural-networks-using-r-and-tensorflow-a09c1b100c11)
* Heather on Twitter (https://twitter.com/heatherklus)
* T-Mobile is hiring! (https://www.t-mobile.com/careers)
* Hugo's upcoming fireside chat and AMA with Hilary Parker about how to actually produce sustainable business value using machine learning and product management for ML! (https://www.eventbrite.com/e/select-ml-project-where-value-is-not-null-tickets-284000161127?aff=hba)
Mar 1, 2022 • 1h 33min

Episode 3: Language Tech For All

Rachael Tatman is a senior developer advocate for Rasa, where she's helping developers build and deploy ML chatbots using their open source framework. Rachael has a PhD in Linguistics from the University of Washington, where her research was on computational sociolinguistics, or how our social identity affects the way we use language in computational contexts. Previously she was a data scientist at Kaggle, and she's still a Kaggle Grandmaster.

In this conversation, Rachael and I talk about the history of NLP and conversational AI/chatbots, and we dive into the fascinating tension between rule-based techniques and ML and deep learning. We also talk about how to incorporate machine and human intelligence together by thinking through questions such as "should a response to a human ever be automated?" Spoiler alert: the answer is a resounding NO WAY! In this journey, something that becomes apparent is that many of the trends, concepts, questions, and answers, although framed for NLP and chatbots, are applicable to much of data science more generally.

We also discuss the data scientist's responsibility to end users and stakeholders using, among other things, the lens of considering those whose data you're working with to be data donors.

We then consider what globalized language technology looks like and can look like, and what we can learn from the history of science here, particularly given that so much training data and so many models are in English when English accounts for so little of the language spoken globally.

Links
* Rachael's website (https://www.rctatman.com/)
* Rasa (https://rasa.com/)
* Speech and Language Processing (https://web.stanford.edu/~jurafsky/slp3/) by Dan Jurafsky and James H. Martin
* Masakhane (https://twitter.com/MasakhaneNLP), putting African languages on the #NLP map since 2019
* The Distributed AI Research Institute (https://www.dair-institute.org/), a space for independent, community-rooted AI research, free from Big Tech's pervasive influence
* The Algorithmic Justice League (https://www.ajl.org/), unmasking AI harms and biases
* Black in AI (https://blackinai.github.io/#/), increasing the presence and inclusion of Black people in the field of AI by creating space for sharing ideas, fostering collaborations, mentorship, and advocacy
* Hugo's blog post on his new job and why it's exciting for him to double down on helping scientists do better science (https://outerbounds.com/blog/hba-excited-to-join-metaflow-and-outerbounds/)
Feb 20, 2022 • 1h 46min

Episode 2: Making Data Science Uncool Again

Jeremy Howard is a data scientist, researcher, developer, educator, and entrepreneur. Jeremy is a founding researcher at fast.ai, a research institute dedicated to making deep learning more accessible. He is also a Distinguished Research Scientist at the University of San Francisco, the chair of WAMRI, and Chief Scientist at platform.ai.

In this conversation, we'll be talking about the history of data science, machine learning, and AI, where we've come from and where we're going, and how new techniques can be applied to real-world problems, whether that's applying deep learning to medicine or porting techniques from computer vision to NLP. We'll also talk about what's present and what's missing in the ML skills revolution, what software engineering skills data scientists need to learn, how to cope in a space of such fragmented tooling, and paths for emerging out of the shadow of FAANG. If that's not enough, we'll jump into how spreading DS skills around the globe involves serious investments in education, building software, communities, and research, along with diving into the social challenges that the information age and the AI revolution (so to speak) bring with them.

But to get to all of this, you'll need to listen to a few minutes of us chatting about chocolate biscuits in Australia!

Links
* fast.ai · making neural nets uncool again
* nbdev: create delightful python projects using Jupyter Notebooks (https://github.com/fastai/nbdev)
* The fastai book, published as Jupyter Notebooks (https://github.com/fastai/fastbook)
* Deep Learning for Coders with fastai and PyTorch (https://www.oreilly.com/library/view/deep-learning-for/9781492045519/)
* The wonderful and terrifying implications of computers that can learn (https://www.youtube.com/watch?v=t4kyRyKyOpo), Jeremy's awesome TED talk
* Manna (https://marshallbrain.com/manna) by Marshall Brain
* Ghost Work (https://ghostwork.info/) by Mary L. Gray and Siddharth Suri
* Uberland (https://www.ucpress.edu/book/9780520324800/uberland) by Alex Rosenblat
Feb 16, 2022 • 6min

Episode 1: Introducing Vanishing Gradients

In this brief introduction, Hugo introduces the rationale behind launching a new data science podcast and gets excited about his upcoming guests: Jeremy Howard, Rachael Tatman, and Heather Nolis!

Original music, bleeps, and blops by local Sydney legend PlaneFace (https://planeface.bandcamp.com/album/fishing-from-an-asteroid)!
