Data Science at Home

Francesco Gadaleta
Nov 11, 2017 • 14min

Episode 29: Fail your AI company in 9 steps

In order to succeed with artificial intelligence, it is better to know how to fail first. It is easier than you think. Here are 9 easy steps to fail your AI startup.
Nov 4, 2017 • 21min

Episode 28: Towards Artificial General Intelligence: preliminary talk

The enthusiasm for artificial intelligence is raising some concerns, especially with respect to some hasty conclusions about what AI can really do and what its direct descendant, artificial general intelligence, would be capable of doing in the immediate future. From stealing jobs to exterminating the entire human race, the creativity (of some) seems to have no limits. In this episode I make sure that everyone comes back to reality - which might sound less exciting than Hollywood, but definitely more... real.
Oct 30, 2017 • 18min

Episode 27: Techstars accelerator and the culture of fireflies

In the aftermath of my experience at the Barclays Accelerator, powered by Techstars - one of the most innovative and influential startup accelerators in the world - I'd like to give back to the community the lessons I learned, including the need for confidence, soft skills, and efficiency, as they apply to startups dealing with artificial intelligence and data science. In this episode I also share some thoughts about the culture of fireflies in modern, dynamic organisations.
Oct 23, 2017 • 54min

Episode 26: Deep Learning and Alzheimer

In this episode I speak about Deep Learning technology applied to Alzheimer's disease prediction. I had a great chat with Saman Sarraf, machine learning engineer at Konica Minolta, former lab manager at the Rotman Research Institute at Baycrest, University of Toronto, and author of DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI. I hope you enjoy the show.
Oct 16, 2017 • 16min

Episode 25: How to become a data scientist [RB]

In this episode I speak about the requirements and the skills needed to become a data scientist and join an amazing community that is changing the world with data analytics.
Oct 9, 2017 • 21min

Episode 24: How to handle imbalanced datasets

In machine learning, and data science in general, it is very common to deal at some point with imbalanced datasets and class distributions: the typical case in which the number of observations belonging to one class is significantly lower than the number belonging to the other classes. This happens all the time, in several domains, from finance to healthcare to social media, just to name a few I have personally worked with. Think about a bank detecting fraudulent transactions among millions or billions of daily operations, or, equivalently in healthcare, the identification of rare disorders. In genetics, and with clinical lab tests too, this is the normal scenario: fortunately very few patients are affected by a disorder, and therefore there are very few positive cases with respect to the large pool of healthy (or unaffected) patients. No algorithm takes the class distribution, or the number of observations in each class, into account unless it is explicitly designed to handle such situations. In this episode I speak about some effective techniques for handling imbalanced datasets and explain how to match the most appropriate method to the dataset or problem at hand.
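As a minimal illustration of one common remedy (not necessarily the technique discussed in the episode), the sketch below builds a synthetic dataset with roughly 1% positive cases and compares a plain logistic regression against one that reweights classes inversely to their frequency. The dataset, class proportions, and choice of scikit-learn estimator are assumptions made for the example.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical toy dataset: roughly 1% positives, mimicking fraud detection
X, y = make_classification(n_samples=10_000, n_classes=2,
                           weights=[0.99, 0.01], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# Baseline: trained as if both classes were equally frequent
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# One common remedy: weight each class inversely to its frequency
weighted = LogisticRegression(max_iter=1000,
                              class_weight="balanced").fit(X_train, y_train)

for name, model in [("baseline", baseline), ("class-weighted", weighted)]:
    print(name)
    print(classification_report(y_test, model.predict(X_test), digits=3))
```

Typically the baseline largely ignores the minority class, while the class-weighted variant trades some overall accuracy for better recall on the rare class.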
Oct 3, 2017 • 19min

Episode 23: Why do ensemble methods work?

Ensemble methods have been designed to improve the performance of a single model when that model is not very accurate. In its most general definition, ensembling consists of building a number of individual classifiers and then combining or aggregating their predictions into one classifier that is usually stronger than any single one. The key idea is that some models do well at modelling certain aspects of the data while others do well at modelling other aspects. In this episode I show with a numeric example why and when ensemble methods work.
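The numeric argument can be sketched as follows (a simplified illustration, not necessarily the exact example from the episode): if three classifiers are each correct 70% of the time and their errors are independent, a majority vote is correct with probability 3 × 0.7² × 0.3 + 0.7³ ≈ 0.78, better than any single model. The short simulation below checks that figure; all numbers are assumptions chosen for the example.

```python
import numpy as np

# Three hypothetical classifiers, each right 70% of the time,
# making independent errors on the same stream of examples.
rng = np.random.default_rng(0)
n_examples, n_models, p_correct = 100_000, 3, 0.7

# correct[i, j] is True when model j predicts example i correctly
correct = rng.random((n_examples, n_models)) < p_correct

# The majority vote is right whenever at least 2 of the 3 models are right
majority_correct = correct.sum(axis=1) >= 2

print("single model accuracy :", correct[:, 0].mean())
print("majority vote accuracy:", majority_correct.mean())  # ~0.784
```

When the errors are strongly correlated - the models fail on the same examples - the gain shrinks, which is exactly the "when" part of the question.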
Sep 25, 2017 • 20min

Episode 22: Parallelising and distributing Deep Learning

Continuing the discussion of the last two episodes, there is one more aspect of deep learning that I would love to cover, and that deserves a full episode of its own: parallelising and distributing deep learning on relatively large clusters. As a matter of fact, computing architectures are changing in a way that encourages parallelism more than ever before. Deep learning is no exception, and despite the great improvements brought by commodity GPUs (graphical processing units), when it comes to speed there is still room for improvement. Together with the last two episodes, this one completes the picture of deep learning at scale. Indeed, as I mentioned in the previous episode, How to master optimisation in deep learning, the function optimiser is the horsepower of deep learning and of neural networks in general: a slow and inaccurate optimisation method leads to networks that converge slowly to unreliable results. In the episode titled "Additional optimisation strategies for deep learning" I explained some ways to improve function minimisation and model tuning in order to get better parameters in less time. So feel free to listen to those episodes again, share them with your friends, re-broadcast them, or download them for your commute. While the methods I have explained so far are a good starting point for prototyping a network, when you need to switch to production environments or take advantage of the most recent and advanced hardware capabilities of your GPU, you will want to do something more.
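As a rough, single-process sketch of the core idea (not any of the production setups touched on in the episode), synchronous data parallelism splits each batch across workers, lets every worker compute gradients on its own shard, averages the gradients, and applies one shared update. The linear model, worker count, and learning rate below are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(5)                                   # shared model parameters
X = rng.normal(size=(512, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=512)

def shard_gradient(w, X_shard, y_shard):
    """Least-squares gradient computed locally on one worker's data shard."""
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual / len(y_shard)

n_workers, lr = 4, 0.1
for step in range(200):
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [shard_gradient(w, Xs, ys) for Xs, ys in shards]
    w -= lr * np.mean(grads, axis=0)              # "all-reduce": average, then update

print("learned parameters:", np.round(w, 2))
```

In a real cluster the averaging step is the expensive part (an all-reduce across machines or GPUs), which is why communication cost, rather than raw arithmetic, often limits how well training scales.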
Sep 18, 2017 • 15min

Episode 21: Additional optimisation strategies for deep learning

In the last episode, How to master optimisation in deep learning, I explained some of the most challenging tasks of deep learning and some methodologies and algorithms to improve the speed of convergence of a minimisation method. I explored the family of gradient descent methods - though not exhaustively - giving a list of approaches that deep learning researchers are considering for different scenarios. Every method has its own benefits and drawbacks, depending largely on the type of data and its sparsity. But there is one method that seems to be, at least empirically, the best approach so far. Feel free to listen to the previous episode, share it, re-broadcast it, or just download it for your commute. In this episode I would like to continue that conversation with some additional strategies for optimising gradient descent in deep learning and introduce you to some tricks that might come in handy when your neural network stops learning from data, or when the learning process becomes so slow that it really seems to have reached a plateau even when feeding in fresh data.
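One of the simplest tricks in this family, sketched below on a toy least-squares problem (an illustration only, not the episode's exact recipe), is to shrink the learning rate whenever the loss stops improving for a fixed number of steps. The patience, decay factor, and toy model are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5])                # noiseless toy regression

w, lr = np.zeros(3), 1.0
best_loss, patience, stall = np.inf, 10, 0

for step in range(500):
    grad = X.T @ (X @ w - y) / len(y)             # gradient of the squared error
    w -= lr * grad
    loss = np.mean((X @ w - y) ** 2)
    if loss < best_loss - 1e-9:
        best_loss, stall = loss, 0                # still improving
    else:
        stall += 1
        if stall >= patience:                     # plateau detected
            lr *= 0.5                             # decay the step size
            stall = 0

print(f"final loss {loss:.2e} with learning rate {lr:.3f}")
```

The same pattern appears in common deep learning frameworks as a "reduce on plateau" learning-rate schedule.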
Aug 28, 2017 • 19min

Episode 20: How to master optimisation in deep learning

The secret behind deep learning is not really a secret. It is function optimisation. What a neural network essentially does is optimise a function. In this episode I illustrate a number of optimisation methods and explain which one is the best and why.
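To make the point concrete, here is a minimal sketch (purely illustrative, not the episode's comparison) of plain gradient descent versus gradient descent with momentum on the same ill-conditioned quadratic; the curvatures, learning rate, and momentum coefficient are assumptions chosen for the example.

```python
import numpy as np

# Elongated quadratic bowl f(x) = 0.5 * x^T A x, with curvatures differing 50x
A = np.diag([1.0, 50.0])

def grad(x):
    return A @ x

def run(momentum, lr=0.018, steps=200):
    x, v = np.array([10.0, 1.0]), np.zeros(2)
    for _ in range(steps):
        v = momentum * v - lr * grad(x)           # velocity accumulates past gradients
        x = x + v
    return 0.5 * x @ A @ x                        # final value of the objective

print("plain gradient descent          :", run(momentum=0.0))
print("gradient descent + 0.9 momentum :", run(momentum=0.9))
```

With the same learning rate and the same number of steps, the momentum variant ends up orders of magnitude closer to the minimum - the kind of comparison the episode makes across a broader set of optimisers.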
