The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Sam Charrington
Jan 17, 2018 • 39min

Accelerating Deep Learning with Mixed Precision Arithmetic with Greg Diamos - TWiML Talk #97

In this show I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on “The Next Generation of AI Chips.” Greg’s talk focused on some work his team was involved in that accelerates deep learning training by using mixed 16-bit and 32-bit floating point arithmetic. We cover a ton of interesting ground in this conversation, and if you’re interested in systems level thinking around scaling and accelerating deep learning, you’re really going to like this one. And of course, if you like this one, you’re also going to like TWiML Talk #14 with Greg’s former colleague, Shubho Sengupta, which covers a bunch of related topics. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration.
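The mixed 16-bit/32-bit idea Greg discusses can be illustrated with a toy sketch (my own illustration, not Baidu's implementation): do the forward/backward math in fp16, keep a fp32 "master" copy of the weights for the update, and scale the loss so small gradients don't underflow in fp16.

```python
import numpy as np

# Toy mixed-precision SGD step on a 1-parameter least-squares problem.
# fp16 for compute, fp32 master weights, loss scaling against underflow.
def mixed_precision_step(master_w, x, y, lr=np.float32(0.1), loss_scale=1024.0):
    w16 = master_w.astype(np.float16)                  # fp16 copy for compute
    x16, y16 = x.astype(np.float16), y.astype(np.float16)
    err = x16 * w16 - y16                              # forward pass in fp16
    grad16 = (2 * err * x16) * np.float16(loss_scale)  # scaled backward pass
    grad32 = grad16.astype(np.float32) / np.float32(loss_scale)  # unscale in fp32
    return master_w - lr * grad32                      # update master weights in fp32

w = np.float32(0.0)
for _ in range(50):
    w = mixed_precision_step(w, np.float32(1.0), np.float32(2.0))
# w converges toward the target value 2.0 despite fp16 compute
```

The key design point is that the low-precision gradients are accumulated into the higher-precision master weights, so small updates aren't lost to fp16 rounding.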
Jan 15, 2018 • 35min

Composing Graphical Models With Neural Networks with David Duvenaud - TWiML Talk #96

In this episode, we hear from David Duvenaud, assistant professor in the Computer Science and Statistics departments at the University of Toronto. David joined me after his talk at the Deep Learning Summit on “Composing Graphical Models With Neural Networks for Structured Representations and Fast Inference.” In our conversation, we discuss the generalized modeling and inference framework that David and his team have created, which combines the strengths of both probabilistic graphical models and deep learning methods. He gives us a walkthrough of his use case, which is automatically segmenting and categorizing mouse behavior from raw video, and we discuss how the framework is applied there and in other use cases. We also discuss some of the differences between the frequentist and Bayesian statistical approaches. The notes for this show can be found at twimlai.com/talk/96.
Jan 12, 2018 • 34min

Embedded Deep Learning at Deep Vision with Siddha Ganju - TWiML Talk #95

In this episode we hear from Siddha Ganju, data scientist at computer vision startup Deep Vision. Siddha joined me at the AI Conference a while back to chat about the challenges of developing deep learning applications “at the edge,” i.e. those targeting compute- and power-constrained environments. In our conversation, Siddha provides an overview of Deep Vision’s embedded processor, which is optimized for ultra-low power requirements, and we dig into the data processing pipeline and network architecture process she uses to support sophisticated models on embedded devices. We explore the specific hardware and software capabilities and restrictions typical of edge devices, how she uses techniques like model pruning and compression to create embedded models that deliver the needed performance in resource-constrained environments, and use cases such as facial recognition, scene description and activity recognition. Siddha's research interests also include natural language processing and visual question answering, and we spend some time discussing the latter as well.
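One of the compression techniques mentioned, magnitude-based weight pruning, can be sketched in a few lines (an assumption-level illustration, not Deep Vision's actual pipeline): zero out the smallest weights and keep only the top fraction by absolute value. Real embedded deployments typically pair this with fine-tuning and a sparse storage format.

```python
import numpy as np

# Keep only the top `keep_fraction` of weights by magnitude; zero the rest.
def prune_by_magnitude(weights, keep_fraction=0.25):
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))
    threshold = np.sort(flat)[-k]          # k-th largest magnitude
    mask = np.abs(weights) >= threshold    # True where the weight survives
    return weights * mask, mask

w = np.array([[0.9, -0.05, 0.02, -1.2],
              [0.01, 0.6, -0.03, 0.08]])
pruned, mask = prune_by_magnitude(w, keep_fraction=0.25)
# only the two largest-magnitude weights (0.9 and -1.2) remain nonzero
```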
Jan 11, 2018 • 46min

Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley - TWiML Talk #94

Kenneth Stanley, Professor in the Department of Computer Science at the University of Central Florida and senior research scientist at Uber AI Labs, discusses neuroevolution, genetic algorithms, and novel neural network architectures in this podcast. They explore concepts like NEAT, HyperNEAT, and novelty search. They also discuss the intertwining of biology and computation, the challenges in objective functions, and the synergy between neuro-evolution and deep learning.
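The novelty search idea discussed here can be sketched in miniature (my own toy illustration, not Stanley's NEAT code): candidates are selected not by how well they score on an objective, but by how *novel* their behavior is, measured as the mean distance to the k nearest behaviors seen so far.

```python
import numpy as np

rng = np.random.default_rng(0)

# Novelty of a behavior = mean distance to its k nearest neighbors
# in the archive of previously selected behaviors.
def novelty(behavior, archive, k=3):
    if not archive:
        return float("inf")                 # anything is novel at the start
    dists = np.sort([np.linalg.norm(behavior - b) for b in archive])
    return float(np.mean(dists[:k]))

archive = []
for generation in range(20):
    candidates = [rng.uniform(-1, 1, size=2) for _ in range(10)]
    best = max(candidates, key=lambda b: novelty(b, archive))
    archive.append(best)                    # most novel behavior is kept
```

Because selection rewards distance from what came before, the archive spreads out across the behavior space instead of converging on a single optimum.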
Jan 8, 2018 • 34min

A Quantum Computing Primer and Implications for AI with Davide Venturelli - TWiML Talk #93

Today, I'm joined by Davide Venturelli, science operations manager and quantum computing team lead for the Universities Space Research Association’s Institute for Advanced Computer Science at NASA Ames. Davide joined me backstage at the NYU Future Labs AI Summit a while back to give me some insight into a topic that I’ve been curious about for some time now, quantum computing. We kick off our discussion with the core ideas behind quantum computing, including what it is, how it’s applied and the ways it relates to computing as we know it today. We discuss the practical state of quantum computers, their current capabilities, and the kinds of things you can do with them. And of course, we explore the intersection between AI and quantum computing, how quantum computing may one day accelerate machine learning, and how interested listeners can get started down the quantum rabbit hole. The notes for this show can be found at twimlai.com/talk/93.
Dec 22, 2017 • 47min

Learning State Representations with Yael Niv - TWiML Talk #92

This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I speak with Yael Niv, professor of neuroscience and psychology at Princeton University. Yael joined me after her invited talk on “Learning State Representations.” In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page at twimlai.com/talk/92.
Dec 21, 2017 • 30min

Philosophy of Intelligence with Matthew Crosby - TWiML Talk #91

This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Matthew Crosby, a researcher at Imperial College London working on the Kinds of Intelligence Project. Matthew joined me after the NIPS symposium of the same name, an event that brought researchers from a variety of disciplines together toward three aims: a broader perspective on the possible types of intelligence beyond human intelligence, better measurements of intelligence, and a more purposeful analysis of where progress should be made in AI to best benefit society. Matthew’s research explores intelligence from a philosophical perspective, including ideas like predictive processing and controlled hallucination, and how these theories of intelligence impact the way we approach creating artificial intelligence. This was a very interesting conversation; I'm sure you’ll enjoy it.
Dec 20, 2017 • 40min

Geometric Deep Learning with Joan Bruna & Michael Bronstein - TWiML Talk #90

This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Joan Bruna, Assistant Professor at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU, and Michael Bronstein, associate professor at Università della Svizzera italiana (Switzerland) and Tel Aviv University. Joan and Michael join me after their tutorial on Geometric Deep Learning on Graphs and Manifolds. In our conversation we dig pretty deeply into the ideas behind geometric deep learning and how we can use it in applications like 3D vision, sensor networks, drug design, biomedicine, and recommendation systems. This is definitely a Nerd Alert show, and one that will get your multi-dimensional neurons firing. Enjoy!
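A tiny sketch of one building block from geometric deep learning on graphs (an assumption-level illustration, not code from the tutorial): a single graph convolution layer that averages each node's features with its neighbors' and then applies a learned linear map with a nonlinearity.

```python
import numpy as np

# One graph convolution layer: add self-loops, row-normalize the
# adjacency matrix, propagate features over neighborhoods, apply ReLU.
def graph_conv(adjacency, features, weight):
    a_hat = adjacency + np.eye(len(adjacency))     # self-loops so each node keeps its own signal
    deg = a_hat.sum(axis=1, keepdims=True)
    propagated = (a_hat / deg) @ features          # neighborhood averaging
    return np.maximum(propagated @ weight, 0.0)    # ReLU nonlinearity

# 3-node path graph: 0 -- 1 -- 2, with a feature only on node 0
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.0], [0.0], [0.0]])
W = np.array([[1.0]])
H = graph_conv(A, X, W)   # node 1 now "sees" node 0's feature; node 2 does not yet
```

Stacking such layers lets information flow further across the graph, which is how these models handle irregular domains that grid-based convolutions can't.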
Dec 19, 2017 • 37min

AI at the NASA Frontier Development Lab with Sara Jennings, Timothy Seabrook and Andres Rodriguez

This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I'm joined by Sara Jennings, Timothy Seabrook and Andres Rodriguez to discuss NASA’s Frontier Development Lab, or FDL. The FDL is an intense 8-week applied AI research accelerator focused on tackling knowledge gaps useful to the space program. In our discussion, Sara, producer at the FDL, provides some insight into its goals and structure. Timothy, a researcher at FDL, describes his involvement with the program, including some of the projects he worked on while on-site. He also provides a look into some of this year’s FDL projects, including Planetary Defense, Solar Storm Prediction, and Lunar Water Location. Last but not least, Andres, Sr. Principal Engineer at Intel's AIPG, joins us to detail Intel’s support of the FDL, and how the various elements of the Intel AI stack supported the FDL research. This is a jam-packed conversation, so be sure to check the show notes page at twimlai.com/talk/89 for all the links and tidbits from this episode.
Dec 19, 2017 • 32min

Using Deep Learning and Google Street View to Estimate Demographics with Timnit Gebru

This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I sit down with Timnit Gebru, postdoctoral researcher at Microsoft Research in the Fairness, Accountability, Transparency and Ethics in AI, or FATE, group. Timnit is also one of the organizers behind the Black in AI group, which held a very interesting symposium and poster session at NIPS. I’ll link to the group’s page in the show notes. I’ve been following Timnit’s work for a while now and was really excited to get a chance to sit down with her and pick her brain. We packed a ton into this conversation, especially keying in on her recently released paper “Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US”. Timnit describes the pipeline she developed for this research, and some of the challenges she faced building an end-to-end model based on Google Street View images, census data and commercial car vendor data. We also discuss the role of social awareness in her work, including an explanation of how domain adaptation and fairness are related, and her view of the major research directions in the domain of fairness. The notes for this show can be found at twimlai.com/talk/88. For series information, visit twimlai.com/nips2017.
