
Machine Learning Archives - Software Engineering Daily

Latest episodes

Jun 5, 2017 • 48min

Video Object Segmentation with the DAVIS Challenge Team

Video object segmentation allows computer vision to identify objects as they move through space in a video. The DAVIS challenge is a contest among machine learning researchers working off of a shared dataset of annotated videos. The organizers of the DAVIS challenge join the show today to explain how video object segmentation models are trained and how different competitors take part in the DAVIS challenge. A good companion to this episode is our discussion of Convolutional Neural Networks with Matt Zeiler. Software Engineering Daily is looking for sponsors for Q3. If your company has a product or service, or if you are hiring, Software Engineering Daily reaches 23,000 developers listening daily. Send me an email: jeff@softwareengineeringdaily.com The post Video Object Segmentation with the DAVIS Challenge Team appeared first on Software Engineering Daily.
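The DAVIS benchmark scores segmentation quality largely on region similarity, i.e. intersection-over-union between a predicted mask and the ground-truth annotation. A minimal sketch of that metric (function and variable names are my own, not from the DAVIS codebase):

```python
import numpy as np

def iou(pred, gt):
    """Jaccard index (intersection-over-union) between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# A 2x2 predicted region inside a 3x3 ground-truth region
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:4, 1:4] = True
score = iou(pred, gt)  # 4 shared pixels / 9 in the union
```

Averaging this score over every annotated frame of a video gives the kind of leaderboard number competitors optimize for.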
May 12, 2017 • 45min

Poker Artificial Intelligence with Noam Brown

Humans have now been defeated by computers at heads-up no-limit hold’em poker. Some people thought this wouldn’t be possible. Sure, we can teach a computer to beat a human at Go or chess. Those games have a smaller decision space. There is no hidden information. There is no bluffing. Poker must be different! It is too human to be automated. The game space of poker is different from that of Go. It has 10^160 different situations–more than the number of atoms in the universe. And the game space keeps getting bigger as the stack sizes of the two competitors get bigger. But it is still possible for a computer to beat a human at calculating game theory optimal decisions–if you approach the problem correctly. Libratus was developed by CMU professor Tuomas Sandholm, along with my guest today Noam Brown. The Libratus team taught their AI the rules of poker, gave it a reward function (to win as much money as possible), and told it to optimize that reward function. Then they had Libratus train itself with simulations. After enough training, Libratus was ready to crush human competitors, which it did in hilarious, entertaining fashion. There is a video from Engadget on YouTube about the AI competing against professional humans. In this episode, Noam Brown explains how they built Libratus, what it means for poker players, and what the implications are for humanity–if we can automate poker, what can’t we automate? Stay tuned at the end of this episode for the Indeed Prime tip on hiring developers.
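Libratus used far more sophisticated counterfactual regret minimization and abstraction than fits in a sketch, but the self-play loop it builds on, regret matching, can be shown on rock-paper-scissors: accumulate regret for the actions you didn’t take, then play in proportion to positive regret. The average strategy converges toward the game-theory-optimal mix (for this game, uniform). All names below are hypothetical:

```python
import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][o]: payoff to the player choosing action a when the opponent plays o
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]

def strategy(regrets):
    """Play in proportion to positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

def train(iters, seed=1):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iters):
        strats = [strategy(regrets[p]) for p in (0, 1)]
        moves = [rng.choices(range(ACTIONS), weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            me, opp = moves[p], moves[1 - p]
            for a in range(ACTIONS):
                # regret: how much better action a would have done than the move played
                regrets[p][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
                strat_sum[p][a] += strats[p][a]
    # average strategy over all iterations is what converges to equilibrium
    return [[s / iters for s in strat_sum[p]] for p in (0, 1)]

avg_strategies = train(50000)  # both players approach (1/3, 1/3, 1/3)
```

Scaling this basic loop to poker, with hidden cards and enormous betting trees, is where the real research in Libratus lives.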
May 10, 2017 • 50min

Convolutional Neural Networks with Matt Zeiler

Convolutional neural networks are a machine learning tool that uses layers of convolution and pooling to process and classify inputs. CNNs are useful for identifying objects in images and video. In this episode, we focus on the application of convolutional neural networks to image and video recognition and classification. Matt Zeiler is the CEO of Clarifai, an API for image and video recognition. Matt takes us through the basics of a convolutional neural network–you don’t need any background in machine learning to understand the content of the episode. He also discusses the subjective aspects of image and video recognition, and some of the tactics Clarifai has explored. This is far from a solved problem. Matt also discusses the infrastructure of Clarifai–how they use Kubernetes, how models are deployed, and how models are updated.
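The two operations named here, convolution and pooling, are simple to state in code. A minimal NumPy sketch (my own illustration, not Clarifai’s implementation): a kernel slides over the image producing a feature map, and pooling downsamples that map by keeping local maxima.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the largest value in each window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])      # tiny horizontal edge-detecting kernel
feat = conv2d(image, edge)          # feature map, shape (4, 3)
pooled = max_pool(feat)             # downsampled map, shape (2, 1)
```

Real CNNs stack many such layers with learned kernels and nonlinearities between them; this only shows the mechanics of a single pass.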
May 1, 2017 • 42min

Google Brain Music Generation with Doug Eck

Most popular music today uses a computer as the central instrument. A single musician is often selecting the instruments, programming the drum loops, composing the melodies, and mixing the track to get the right overall atmosphere. With so much work to do on each song, popular musicians need to simplify–the result is that pop music today consists of simple melodies without much chord progression. Magenta is a project out of Google Brain to design algorithms that learn how to generate art and music. One goal of Magenta is to advance the state of the art in machine intelligence for music and art generation. Another goal is to build a community of artists, coders, and machine learning researchers who can collaborate. Engineers today are happy to outsource server management to a cloud service provider. Similarly, a musician can use Magenta for the creation of a melody, so she can focus on other aspects of a song, such as instrumentation. Doug Eck is a research scientist at Google. In today’s episode, we explore the Magenta project and the future of music. Software Engineering Daily is having our third Meetup, Wednesday May 3rd at Galvanize in San Francisco. The theme of this Meetup is Fraud and Risk in Software. We will have great food, engaging speakers, and a friendly, intellectual atmosphere. To find out more, go to softwareengineeringdaily.com/meetup. We would love to get your feedback on Software Engineering Daily. Please fill out the listener survey, available on softwareengineeringdaily.com/survey.
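Magenta’s generators are neural networks, but the idea of an algorithm learning to continue a melody can be illustrated with a much simpler baseline: a first-order Markov chain over pitches. This is my own illustration, not Magenta’s approach, and the training melody is made up:

```python
import random

# Hypothetical training melody as MIDI pitch numbers (a C-major phrase)
melody = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]

# Learn a first-order transition table: pitch -> pitches that followed it
transitions = {}
for a, b in zip(melody, melody[1:]):
    transitions.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Walk the transition table to produce a new phrase."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(transitions.get(out[-1], melody)))
    return out

new_phrase = generate(60, 8)
```

A model like this only captures local note-to-note statistics; Magenta’s research is about capturing longer-range musical structure.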
Apr 3, 2017 • 53min

Hedge Fund Artificial Intelligence with Xander Dunn

A hedge fund is a collection of investors that make bets on the future. The “hedge” refers to the fact that the investors often try to diversify their strategies so that the directions of their bets are less correlated, and they can be successful in a variety of future scenarios. Engineering-focused hedge funds have used what might be called “machine learning” for a long time to predict what will happen in the future. Numerai is a hedge fund that crowdsources its investment strategies by allowing anyone to train models against Numerai’s data. A model that succeeds in a simulated environment will be adopted by Numerai and used within its real-money portfolio. The engineers who create the models are rewarded in proportion to how well the models perform. Xander Dunn is a software engineer at Numerai, and in this episode he explains what a hedge fund is, why the traditional strategies are not optimal, and how Numerai creates the right incentive structure to crowdsource market intelligence. This interview was fun and thought-provoking–Numerai is one of those companies that makes me very excited about the future.
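Numerai’s actual scoring and payout mechanics are more involved, but the incentive idea, weighting each contributed model by how well it performed in simulation, can be sketched as follows (all data and names hypothetical):

```python
def simulated_accuracy(preds, actual):
    """Fraction of correct up/down calls on a held-out simulation period."""
    return sum(p == a for p, a in zip(preds, actual)) / len(actual)

def ensemble(model_preds, weights):
    """Weighted vote over binary up/down predictions from many models."""
    combined = []
    for i in range(len(model_preds[0])):
        score = sum(w * (1 if preds[i] else -1)
                    for preds, w in zip(model_preds, weights))
        combined.append(score > 0)
    return combined

history = [True, False, True, True, False]  # actual market moves
models = [
    [True, False, True, True, True],        # 4/5 correct in simulation
    [False, True, False, False, True],      # 0/5 correct in simulation
]
# Each model's vote (and, by analogy, its reward) scales with performance
weights = [simulated_accuracy(m, history) for m in models]
combined = ensemble(models, weights)
```

The bad model ends up with zero weight, so it neither influences the portfolio nor earns anything, which is the incentive structure in miniature.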
Mar 21, 2017 • 41min

Multiagent Systems with Peter Stone

Multiagent systems involve the interaction of autonomous agents that may be acting independently or in collaboration with each other. Examples of these systems include financial markets, robot soccer matches, and automated warehouses. Today’s guest Peter Stone is a professor of computer science who specializes in multiagent systems and robotics. In this episode, we discuss some of the canonical problems of multiagent systems, which have some overlap with the canonical problems of distributed systems–for example, the problem of coordinating among agents with varying levels of trust resembles the problem of establishing consistency across servers in a database cluster. Peter recently contributed to the 100-year study of artificial intelligence, so we also had a chance to discuss the opportunities and roadblocks for AI in the near future. And since Peter teaches computer science at my alma mater, UT Austin, I had to ask him a few questions about the curriculum.
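The analogy to distributed systems can be made concrete with a toy averaging-consensus protocol: each agent repeatedly nudges its local value toward the average of its neighbors’ values, much as replicas converge on a shared state. This sketch is my own illustration, not from the episode:

```python
def consensus_round(values, neighbors, alpha=0.5):
    """One synchronous round: each agent moves toward its neighbors' average."""
    new = []
    for i, v in enumerate(values):
        avg_nbr = sum(values[j] for j in neighbors[i]) / len(neighbors[i])
        new.append(v + alpha * (avg_nbr - v))
    return new

values = [0.0, 4.0, 8.0]                 # each agent's local estimate
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # agents connected in a line
for _ in range(50):
    values = consensus_round(values, neighbors)
# all three agents converge to an agreed value (here, 4.0)
```

Real multiagent settings add the hard parts, asynchrony, failures, and agents that may not be trustworthy, which is exactly where the overlap with distributed-systems research lives.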
Mar 20, 2017 • 60min

Biological Machine Learning with Jason Knight

Biology research is complex. The sample size of a biological data set is often too small to make confident judgments about the biological system being studied. During Jason Knight’s PhD research, the RNA sequence data that he was studying was not significant enough to support strong conclusions about the gene regulatory networks he was trying to understand. After working in academia, and then at Human Longevity, Inc., Jason came to the conclusion that the best way to work towards biology breakthroughs was to work on the computer systems that enable those breakthroughs. He went to work at Nervana Systems on hardware and software for deep learning. Nervana was subsequently acquired by Intel. In this episode, we discuss how machine learning can be applied to biology today, and how industrial research and development is key to enabling more breakthroughs in the future. The main lesson I took away from this show is that while we have seen phenomenal breakthroughs in certain areas of health–like image recognition applied to diabetic retinopathy or skin cancer–the challenges of reverse engineering our genome to understand how nucleic acids fit together into humans are still out of reach, and improving the hardware used for deep learning will be necessary to tackle these kinds of informational challenges.
Mar 17, 2017 • 52min

Stripe Machine Learning with Michael Manapat

Every company that deals with payments deals with fraud. The question is not whether fraud will occur on your system, but rather how much of it you can detect and prevent. If a payments company flags too many transactions as fraudulent, then real transactions might accidentally get flagged as well. But if you don’t reject enough of the fraudulent transactions, you might not be able to make any money at all. Because fraud detection is such a difficult optimization problem, it is a good fit for machine learning. Today’s guest Michael Manapat works on machine learning fraud detection at Stripe. This conversation explores aspects of both data science and data engineering. Michael seems to benefit from having a depth of knowledge in both aspects of the data pipeline, which made me question whether data science and data engineering are roles that an engineering organization wants to separate. This is the third in a series of episodes about Stripe engineering. Throughout these episodes, we’ve tried to give a picture for how Stripe’s engineering culture works. We hope to do more experimental series like this in the future. Please give us feedback for what you think of the format by sending us email, joining the Slack group, or filling out our listener survey. All of these things are available on softwareengineeringdaily.com.
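The tradeoff described here is the classic precision/recall tension: raise the score threshold and you rarely flag legitimate charges but miss more fraud; lower it and the reverse. A toy sketch with hypothetical fraud scores (not Stripe’s model):

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall when flagging every score at or above threshold."""
    flagged = [s >= threshold for s in scores]
    tp = sum(f and l for f, l in zip(flagged, labels))        # fraud caught
    fp = sum(f and not l for f, l in zip(flagged, labels))    # good charges blocked
    fn = sum(not f and l for f, l in zip(flagged, labels))    # fraud missed
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

scores = [0.95, 0.80, 0.60, 0.40, 0.10]   # model's fraud probability per charge
labels = [True, True, False, True, False]  # True = actually fraudulent

strict = precision_recall(scores, labels, 0.9)   # few flags: precise, misses fraud
lenient = precision_recall(scores, labels, 0.3)  # many flags: catches all, blocks a good charge
```

Choosing where to sit on this curve is a business decision as much as a modeling one, which is part of why fraud detection is such an interesting optimization problem.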
Feb 16, 2017 • 50min

Machine Learning is Hard with Zayd Enam

Machine learning frameworks like Torch and TensorFlow have made the job of a machine learning engineer much easier. But machine learning is still hard. Debugging a machine learning model is a slow, messy process. A bug in a machine learning model does not always mean a complete failure. Your model could continue to deliver usable results even in the presence of a mistaken implementation. Perhaps you made a mistake when cleaning your data, leading to an incorrectly trained model. It is a general rule in computer science that partial failures are harder to fix than complete failures. In this episode, Zayd Enam describes the different dimensions on which a machine learning model can develop an error. Zayd is a machine learning researcher at the Stanford AI Lab, so I also asked him about AI risk, job displacement, and academia versus industry. Show Notes: Why ML is Hard
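A concrete instance of the silent partial failure described here is a subtly wrong gradient: training still runs, results just quietly degrade. One standard sanity check (my own illustration, not from the episode) is to compare an analytic gradient against a finite-difference estimate:

```python
def loss(w, x, y):
    """Squared error of a one-parameter linear model."""
    return (w * x - y) ** 2

def analytic_grad(w, x, y):
    """Hand-derived gradient of the loss with respect to w."""
    return 2 * (w * x - y) * x

def numeric_grad(f, w, eps=1e-6):
    """Central finite-difference estimate of df/dw."""
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w, x, y = 3.0, 2.0, 1.0
g_analytic = analytic_grad(w, x, y)
g_numeric = numeric_grad(lambda wv: loss(wv, x, y), w)
# if these disagree, the backward pass has a bug even though training "works"
```

A mismatch here fails loudly and immediately, converting a partial failure back into the kind of complete failure that is far easier to fix.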
Feb 10, 2017 • 46min

Deep Learning with Adam Gibson

Deep learning uses neural networks to identify patterns. Neural networks let us stack “layers” of computation, and they can be trained with unsupervised learning, supervised learning, or reinforcement learning. Deep learning has taken off in the last few years, but it has been around for much longer. Adam Gibson founded Skymind, the company behind Deeplearning4j. Deeplearning4j is a distributed deep learning library for Scala and Java. It integrates with Hadoop and Spark, and is specifically designed to run in business environments on distributed GPUs and CPUs. Adam joins the show today to discuss the history and future of deep learning.
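Deeplearning4j itself is a JVM library, but the notion of stacked layers is language-agnostic. A minimal forward pass in NumPy (my own sketch, with untrained random weights): each layer is an affine transform followed by a nonlinearity, and stacking them gives the network its depth.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Standard rectified-linear nonlinearity between layers."""
    return np.maximum(x, 0.0)

def layer(in_dim, out_dim):
    """Build one dense layer: affine transform plus ReLU."""
    w = rng.normal(scale=0.1, size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return lambda x: relu(x @ w + b)

# a three-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
network = [layer(4, 8), layer(8, 8), layer(8, 2)]

x = rng.normal(size=(1, 4))
for layer_fn in network:
    x = layer_fn(x)
```

What a framework like Deeplearning4j adds on top of this skeleton is training (backpropagation), GPU execution, and distribution across a cluster.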
