Last Week in AI

Skynet Today
Jan 7, 2021 • 23min

NY's moratorium on facial recognition, deepfakes in 2020, and more!

This week dives into crucial AI news, including Google’s push for a 'positive tone' in research amid ethical concerns. A wrongful arrest linked to facial recognition ignites debate on accountability and ethics. The discussion shifts to the mainstream rise of deepfakes and their surprisingly limited misuse during the 2020 elections, showcasing potential for growth in entertainment. Lastly, the booming investment in AI startups during the pandemic highlights a significant trend, with exciting acquisitions like Amazon's buyout of Zoox.
Dec 12, 2020 • 59min

A narrowing of AI research? with Juan Mateos-Garcia

Juan Mateos-Garcia, Director of Data Analytics at Nesta, dives into his recent paper on AI research trends. He discusses the limitations of traditional metrics in evaluating AI innovations and raises concerns over environmental impacts. The importance of diversity in research is emphasized, particularly in the face of deep learning's dominance. Mateos-Garcia warns against the stagnation of ideas despite rising publications, advocating for strategic funding to support varied, innovative approaches in AI.
Dec 10, 2020 • 23min

The firing of Dr. Timnit Gebru, AlphaFold, and Unions Against AI

The podcast dives into the dismissal of a prominent AI ethics researcher at Google, raising questions about corporate decision-making and transparency. It also celebrates DeepMind's AlphaFold, a groundbreaking advancement in protein folding that promises to revolutionize drug discovery. Additionally, labor unions are highlighted for their efforts to advocate for workers' rights in an AI-dominated landscape, emphasizing the need for ethical deployment and addressing potential inequalities.
Dec 5, 2020 • 25min

The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research with Nur Ahmed

Nur Ahmed, a Strategy PhD candidate at Ivey Business School and Research Fellow at ScotiaBank Digital Banking Lab, dives into the de-democratization of AI. He discusses the alarming concentration of power in AI research and the impact of corporate interests on innovation. Highlighting the widening compute divide, Nur calls attention to the diminishing contributions from non-elite institutions and advocates for more equitable access to AI technology. He emphasizes the need for policy interventions to ensure diverse voices in the AI landscape.
Dec 3, 2020 • 43min

To PhD or not to PhD, AI Bias, Facial Recognition Ethics, GPT-3

The discussion opens with a hot debate on the pros and cons of pursuing a PhD in machine learning, featuring contrasting opinions. They dive into the pressing issue of bias in AI, stressing the need to ensure algorithms reflect society fairly. Ethical concerns surrounding facial recognition technology are highlighted, advocating for accountability in its use. Lastly, GPT-3 takes center stage with humorous yet unsatisfactory responses to profound questions, and its growing role in creative fields is examined, showcasing both its potential and challenges.
Nov 28, 2020 • 30min

Machine Learning for Art with Google's Emil Wallner

In this conversation with Emil Wallner, a machine learning researcher at Google Arts & Culture Lab and creator of mlart.co, listeners explore the fascinating link between AI and artistic expression. Emil discusses creative projects like GAN-generated animations and a robot that crafts poetic texts from images. He highlights the growing accessibility of machine learning tools for artists, making technology inclusive for all. Their chat also touches on historical innovations and collaborative efforts that enrich the art landscape, encouraging creative exploration with AI.
Nov 26, 2020 • 31min

The way we train AI is fundamentally flawed, bias, the compute divide

This discussion dives into critical flaws in AI training methodologies, emphasizing the concept of 'underspecification.' Facebook's struggles with moderating harmful content are highlighted, showcasing the challenges of AI supervision. The exploration of AI bias reveals stark inequalities, as the 'compute divide' widens disparities in research output. The need for greater diversity in AI research is stressed, along with initiatives aimed at creating a more equitable landscape for future innovation. It's a thought-provoking look at the current state of AI.
Nov 19, 2020 • 10min

AI's replication crisis, reddit discussions, government-sponsored medical AI

The podcast dives into the intriguing topic of AI's replication crisis, highlighting transparency issues in research. Discussions on Reddit reveal challenges in machine learning accessibility and motivation. The US government’s new initiative to pay doctors for using AI algorithms in healthcare raises questions about integration and efficacy. Additionally, the episode critiques the negative societal impacts of recommendation algorithms from major tech companies, prompting listeners to consider their influence on everyday life.
Nov 13, 2020 • 29min

Geoff Hinton's Hot Take, Robots in Walmart and Art, Confidence in AI for Healthcare

Geoff Hinton believes deep learning could achieve full artificial intelligence, sparking fascinating debates in the community. Walmart has scrapped its plans for robot shelf scanning, raising questions about automation in retail. Confidence in AI within healthcare is on the rise, with leaders expecting quicker returns on investment. Artist Sougwen Chung is innovating by designing AI robots to collaborate in her artistic endeavors, blending creativity with technology in exciting new ways.
Nov 8, 2020 • 33min

OpenAI's "Scaling Laws for Autoregressive Generative Modeling"

Tom Henighan, a member of OpenAI's safety team and co-author of a groundbreaking paper on scaling laws in generative modeling, shares his insights on model performance. He discusses how scaling influences test loss in autoregressive models, revealing a power law behavior. The importance of balancing model size with computational capacity is emphasized, advocating for an optimal 'Goldilocks' range. Tom also highlights the impact of transformer architectures and model pruning on generative capabilities, sparking excitement for future AI advancements.
