Practical AI

Practical AI LLC
Apr 23, 2019 • 46min

Deep Reinforcement Learning

Adam Stooke, a PhD student at UC Berkeley, dives into the fascinating world of deep reinforcement learning and robotics. He shares insights from his transition from physics to AI, emphasizing the trial-and-error nature of reinforcement learning. Adam discusses the impact of GPU computing, particularly the NVIDIA DGX-1, on accelerating training for complex tasks like game playing. He highlights key advancements in the field from organizations like DeepMind and OpenAI, and offers advice for newcomers on the importance of hands-on experience.
Apr 15, 2019 • 52min

Making the world a better place at the AI for Good Foundation

In this engaging discussion, James Hodson, Director of the AI for Good Foundation, shares his mission to harness AI for global good. He highlights initiatives aimed at enhancing food production and combating climate change, revealing innovative applications that bridge technology and humanitarian efforts. James emphasizes the importance of collaboration, community involvement, and mentorship in effective aid delivery. Listeners are inspired to leverage their skills and join the movement to create positive societal changes through AI.
Apr 8, 2019 • 49min

GIPHY's celebrity detector

Nick Hasty, the Head of R&D at GIPHY, shares insights on their innovative celebrity detector project, a groundbreaking advancement in AI. He discusses GIPHY's journey from a GIF search engine to incorporating cutting-edge technology for celebrity detection. The conversation highlights the challenges of building a diverse and accurate dataset, the importance of community in the development process, and ongoing bias testing. Hasty also touches on the exciting potential for creativity and interactivity that this technology brings to the world of digital communication.
Apr 2, 2019 • 52min

The landscape of AI infrastructure

The hosts dive into AI infrastructure, discussing personal setups and cloud solutions. They highlight essential tools like Docker, Jupyter, and various data science platforms. The conversation shifts to challenges in data management, emphasizing the importance of compliance and the choice between cloud and on-prem systems. Insights on optimizing workflows and hardware are shared, along with the impact of infrastructure on project scalability. They also tease an engaging future topic on brain science, bridging neuroscience with AI concepts.
Mar 25, 2019 • 1h 6min

Growing up to become a world-class AI expert

Anima Anandkumar, a pioneer in deep learning and AI, shares her inspiring journey from her childhood in India to becoming a leader at NVIDIA and Caltech. She discusses her early passion for mathematics and the influence of her family. Anandkumar reflects on her academic path from IIT to Cornell and her experiences during her PhD. She emphasizes the integration of physics with AI to improve drone technology and advocates for inclusivity in the AI community, urging a collaborative approach that addresses ethical concerns.
Mar 18, 2019 • 39min

Social AI with Hugging Face

Clément Delangue, Co-founder and CEO of Hugging Face, dives into the world of social AI. He highlights how it fosters emotional connections and enhances user interaction, unlike traditional AI. Clem discusses innovative products like sassy chatbots and selfie-trading AIs. He also shares insights on the evolution of natural language understanding, the importance of large datasets, and the advantages of working in smaller AI organizations, emphasizing flexibility and innovation.
Mar 11, 2019 • 41min

The White House Executive Order on AI

The hosts explore the recent White House executive order on artificial intelligence and its role in maintaining U.S. leadership in the field. They discuss the ethical concerns surrounding AI regulation and the impact on civil liberties, and compare the U.S. strategy to China's aggressive investment in AI. They debate the effectiveness of the order amid criticism of its vagueness and the need for federal funding and workforce training, and advocate for improved public education on AI to foster a knowledgeable community.
Mar 4, 2019 • 51min

Staving off disaster through AI safety research

While covering Applied Machine Learning Days in Switzerland, Chris met El Mahdi El Mhamdi by chance and was fascinated with his work doing AI safety research at EPFL. El Mahdi agreed to come on the show to share his research into the vulnerabilities in machine learning that bad actors can take advantage of. We cover everything from poisoned datasets and hacked machines to AI-generated propaganda and fake news, so grab your James Bond 007 kit from Q Branch and join us for this important conversation on the dark side of artificial intelligence.

Featuring: El Mahdi El Mhamdi and Chris Benson.

Show Notes: El Mahdi El Mhamdi on LinkedIn, Google Scholar, and his personal blog; World Health Organization, "Ten threats to global health in 2019"; AggregaThor; "The Hidden Vulnerability of Distributed Learning in Byzantium"; "Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent"; "Asynchronous Byzantine Machine Learning (the case of SGD)".
Feb 25, 2019 • 41min

OpenAI's new "dangerous" GPT-2 language model

This week we discuss GPT-2, a new transformer-based language model from OpenAI that has everyone talking. It's capable of generating incredibly realistic text, and the AI community has lots of concerns about potential malicious applications. We help you understand GPT-2, and we discuss ethical concerns, responsible release of AI research, and resources we have found useful in learning about language models.

Featuring: Chris Benson and Daniel Whitenack.

Show Notes: Relevant learning resources include Jay Alammar's "Illustrated" blog articles (The Illustrated Transformer; The Illustrated BERT, ELMo, and co.) and the Machine Learning Explained blog (An In-Depth Tutorial to AllenNLP, From Basics to ELMo and BERT; Paper Dissected: "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" Explained). References: the GPT-2 blog post from OpenAI, the GPT-2 paper, the GPT-2 GitHub repo, a GPT-2 PyTorch implementation, Episode 22 of Practical AI about BERT, "OpenAI's GPT-2: the model, the hype, and the controversy" (Towards Data Science), "The AI Text Generator That's Too Dangerous to Make Public" (Wired), the Transformer paper, and "Preparing for malicious uses of AI" (OpenAI blog).
Feb 20, 2019 • 38min

AI for social good at Intel

While at Applied Machine Learning Days in Lausanne, Switzerland, Chris had an inspiring conversation with Anna Bethke, Head of AI for Social Good at Intel. Anna reveals how she started the AI for Social Good program at Intel, and goes on to share the positive impact the program has had, from stopping animal poachers to helping the National Center for Missing & Exploited Children. Through this AI for Social Good program, Intel clearly demonstrates how a for-profit business can effectively use AI to make the world a better place for us all.

Featuring: Anna Bethke and Chris Benson.

Show Notes: AI for Social Good at Intel; National Center for Missing & Exploited Children; Data for Democracy; DataKind; Delta Analytics; DrivenData; Partnership on AI; Tech Jobs for Good; Applied Machine Learning Days.
