
Towards Data Science

Latest episodes

Jun 30, 2021 • 49min

90. Jeffrey Ding - China’s AI ambitions and why they matter

There are a lot of reasons to pay attention to China’s AI initiatives. Some are purely technological: Chinese companies are producing increasingly high-quality AI research, and they’re poised to become even more important players in AI over the next few years. For example, Huawei recently put together their own version of OpenAI’s massive GPT-3 language model — a feat that required massive-scale compute, pushed the limits of current systems, and called for deep engineering and technical know-how. But China’s AI ambitions are also important geopolitically. In order to build powerful AI systems, you need a lot of compute power. And in order to get that, you need a lot of computer chips, which are notoriously hard to manufacture. But most of the world’s computer chips are currently made in democratic Taiwan, which China claims as its own territory. You can see how quickly this kind of thing can lead to international tension. Still, the story of US-China AI isn’t just one of competition and decoupling, but also of cooperation — or at least, that’s the case made by my guest today, China AI expert and Stanford researcher Jeffrey Ding. In addition to studying the Chinese AI ecosystem as part of his day job, Jeff publishes the very popular ChinAI newsletter, which offers translations and analyses of Chinese-language articles about AI. Jeff acknowledges the competitive dynamics of AI research, but argues that focusing only on controversial applications of AI — like facial recognition and military applications — causes us to ignore or downplay areas where real collaboration can happen, like language translation, for example.
Jun 23, 2021 • 37min

89. Pointing AI in the right direction - A cross-over episode with the Banana Data podcast!

This special episode of the Towards Data Science podcast is a cross-over with our friends over at the Banana Data podcast. We’ll be zooming out and talking about some of the most important current challenges AI creates for humanity, and some of the likely future directions the technology might take.
Jun 16, 2021 • 54min

88. Oren Etzioni - The case against (worrying about) existential risk from AI

Few would disagree that AI is set to become one of the most important economic and social forces in human history. But along with its transformative potential has come concern about a strange new risk that AI might pose to human beings. As AI systems become exponentially more capable of achieving their goals, some worry that even a slight misalignment between those goals and our own could be disastrous. These concerns are shared by many of the most knowledgeable and experienced AI specialists at leading labs and institutes, including OpenAI, DeepMind, CHAI at Berkeley, and Oxford. But they’re not universal: I recently had Melanie Mitchell — computer science professor and author who famously debated Stuart Russell on the topic of AI risk — on the podcast to discuss her objections to the AI catastrophe argument. And on this episode, we’ll continue our exploration of the case for skepticism about catastrophic risk from AI with an interview with Oren Etzioni, CEO of the Allen Institute for AI, a world-leading AI research lab that’s developed many well-known projects, including the popular AllenNLP library and Semantic Scholar. Oren has a unique perspective on AI risk, and the conversation was lots of fun!
Jun 9, 2021 • 1h 10min

87. Evan Hubinger - The Inner Alignment Problem

How can you know that a super-intelligent AI is trying to do what you asked it to do? The answer, it turns out, is: not easily. And unfortunately, an increasing number of AI safety researchers are warning that this is a problem we’re going to have to solve sooner rather than later, if we want to avoid bad outcomes — which may include a species-level catastrophe. The type of failure mode whereby AIs optimize for things other than those we ask them to is known as an inner alignment failure in the context of AI safety. It’s distinct from outer alignment failure, which is what happens when you ask your AI to do something that turns out to be dangerous, and it was only recognized by AI safety researchers as its own category of risk in 2019. And the researcher who led that effort is my guest for this episode of the podcast, Evan Hubinger. Evan is an AI safety veteran who’s done research at leading AI labs like OpenAI, and whose experience also includes stints at Google, Ripple and Yelp. He currently works at the Machine Intelligence Research Institute (MIRI) as a Research Fellow, and joined me to talk about his views on AI safety, the alignment problem, and whether humanity is likely to survive the advent of superintelligent AI.
Jun 2, 2021 • 1h 26min

86. Andy Jones - AI Safety and the Scaling Hypothesis

When OpenAI announced the release of their GPT-3 API last year, the tech world was shocked. Here was a language model, trained only to perform a simple autocomplete task, which turned out to be capable of language translation, coding, essay writing, question answering and many other tasks that previously would each have required purpose-built systems. What accounted for GPT-3’s ability to solve these problems? How did it beat state-of-the-art AIs that were purpose-built to solve tasks it was never explicitly trained for? Was it a brilliant new algorithm? Something deeper than deep learning? Well… no. As algorithms go, GPT-3 was relatively simple, and was built using a by-then fairly standard transformer architecture. Instead of a fancy algorithm, the real difference between GPT-3 and everything that came before was size: GPT-3 is a simple-but-massive, 175B-parameter model, about 10X bigger than the next largest AI system. GPT-3 is only the latest in a long line of results showing that scaling up simple AI techniques can give rise to new behavior and far greater capabilities. Together, these results have motivated a push toward AI scaling: the pursuit of ever larger AIs, trained with more compute on bigger datasets. But scaling is expensive: by some estimates, GPT-3 cost as much as $5M to train. As a result, only well-resourced companies like Google, OpenAI and Microsoft have been able to experiment with scaled models. That’s a problem for independent AI safety researchers, who want to better understand how advanced AI systems work, and what their most dangerous behaviors might be, but who can’t afford a $5M compute budget. That’s why a recent paper by Andy Jones, an independent researcher specializing in AI scaling, is so promising: Andy’s paper shows that, at least in some contexts, the capabilities of large AI systems can be predicted from those of smaller ones. If the result generalizes, it could give independent researchers the ability to run cheap experiments on small systems whose results nonetheless carry over to expensive, scaled-up AIs like GPT-3. Andy was kind enough to join me for this episode of the podcast.
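To make the idea of predicting a large model's capabilities from smaller ones a bit more concrete, here is a minimal sketch (not taken from Andy's paper) of the general scaling-law recipe: fit a simple power law to results from small, affordable models, then extrapolate to a much larger one. The parameter counts, loss values, functional form and target size below are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical results from small, cheap-to-train models:
# (parameter count, validation loss). All numbers are made up for illustration.
params = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
loss = np.array([4.20, 3.90, 3.60, 3.35, 3.10])

def power_law(n, a, b, c):
    # Common scaling-law ansatz: loss falls off as a power of model size,
    # approaching an irreducible floor c.
    return a * n ** (-b) + c

popt, _ = curve_fit(power_law, params, loss, p0=[10.0, 0.1, 1.0], maxfev=10_000)

# Extrapolate to a far larger (and far more expensive) model.
big_n = 175e9  # GPT-3 scale, used purely as a reference point
print(f"Predicted loss at {big_n:.2e} parameters: {power_law(big_n, *popt):.2f}")
```

The measured quantity and the exact functional form differ by setting, but the underlying logic is the same: measure a trend on systems you can afford, and use it to reason about the ones you can't.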
May 26, 2021 • 1h 6min

85. Brian Christian - The Alignment Problem

In 2016, OpenAI published a blog post describing the results of one of their AI safety experiments. In it, they describe how an AI that was trained to maximize its score in a boat-racing game ended up discovering a strange hack: rather than completing the race circuit as fast as it could, the AI learned that it could rack up an essentially unlimited number of bonus points by looping around a series of targets, in a process that required it to ram into obstacles, and even travel in the wrong direction through parts of the circuit. This is a great example of the alignment problem: if we’re not extremely careful, we risk training AIs that find dangerously creative ways to optimize whatever we tell them to optimize for. So building safe AIs — AIs that are aligned with our values — involves finding ways to very clearly and correctly quantify what we want our AIs to do. That may sound like a simple task, but it isn’t: humans have struggled for centuries to define “good” metrics for things like economic health or human flourishing, with very little success. Today’s episode of the podcast features Brian Christian — the bestselling author of several books on the connection between humanity, computer science, and AI. His most recent book, The Alignment Problem, explores the history of alignment research, and the technical and philosophical questions that we’ll have to answer if we’re ever going to safely outsource our reasoning to machines. Brian’s perspective on the alignment problem links together many of the themes we’ve explored on the podcast so far, from AI bias and ethics to existential risk from AI.
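As a toy illustration of that failure mode (this is not OpenAI's boat-racing environment, just a hypothetical sketch), consider an agent that greedily maximizes a proxy reward in which looping on a respawning bonus target pays more than advancing toward the finish line. Maximizing the proxy perfectly never accomplishes the thing we actually wanted.

```python
# Hypothetical proxy reward: hitting a respawning bonus target pays more than
# making real progress toward the finish line. All values are made up.
PROXY_REWARD = {"advance": 1, "loop_on_target": 3}
COURSE_LENGTH = 5  # steps of real progress needed to actually finish the race

def greedy_policy(horizon=10):
    """At every step, pick whichever action has the highest proxy reward."""
    return [max(PROXY_REWARD, key=PROXY_REWARD.get) for _ in range(horizon)]

plan = greedy_policy()
proxy_score = sum(PROXY_REWARD[a] for a in plan)
progress = plan.count("advance")

print("plan:", plan)                                # only 'loop_on_target'
print("proxy reward collected:", proxy_score)       # a high score...
print("race finished:", progress >= COURSE_LENGTH)  # ...but False
```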
May 19, 2021 • 54min

84. Eliano Marques - The (evolving) world of AI privacy and data security

We all value privacy, but most of us would struggle to define it. And there’s a good reason for that: the way we think about privacy is shaped by the technology we use. As new technologies emerge, which allow us to trade data for services, or pay for privacy in different forms, our expectations shift and privacy standards evolve. That shifting landscape makes privacy a moving target. The challenge of understanding and enforcing privacy standards isn’t novel, but it’s taken on a new importance given the rapid progress of AI in recent years. For example, data that would have been useless just a decade ago — unstructured text and many types of images come to mind — are now a treasure trove of value. Should companies have the right to keep using data they collected back when its value was limited, now that it’s worth far more? Do companies have an obligation to provide maximum privacy without charging their customers directly for it? Privacy in AI is as much a philosophical question as a technical one, and to discuss it, I was joined by Eliano Marques, Executive VP of Data and AI at Protegrity, a company that specializes in privacy and data protection for large companies. Eliano has worked in data privacy for the last decade.
May 12, 2021 • 53min

83. Rosie Campbell - Should all AI research be published?

When OpenAI developed its GPT-2 language model in early 2019, the lab initially chose not to release the full model, owing to concerns over its potential for malicious use, as well as the need for the AI industry to experiment with new, more responsible publication practices that reflect the increasing power of modern AI systems. This decision was controversial, and remains so to some extent even today: AI researchers have historically enjoyed a culture of open publication and have defaulted to sharing their results and algorithms. But whatever your position may be on models like GPT-2, it’s clear that at some point, if AI becomes arbitrarily flexible and powerful, there will be contexts in which limits on publication will be important for public safety. The issue of publication norms in AI is complex, which is why it’s a topic worth exploring with people who have experience both as researchers and as policy specialists — people like today’s Towards Data Science podcast guest, Rosie Campbell. Rosie is the Head of Safety Critical AI at Partnership on AI (PAI), a nonprofit that brings together startups, governments, and big tech companies like Google, Facebook, Microsoft and Amazon to shape best practices, research, and public dialogue about AI’s benefits for people and society. Along with colleagues at PAI, Rosie recently finished putting together a white paper exploring the current hot debate over publication norms in AI research, and making recommendations for researchers, journals and institutions involved in AI research.
May 5, 2021 • 54min

82. Jakob Foerster - The high cost of automated weapons

Automated weapons promise fewer casualties, faster reaction times, and more precise strikes, which makes them look like a clear win for any country that deploys them. You can see the appeal. But they also set up a classic prisoner’s dilemma: once many nations have deployed them, humans no longer have to be persuaded to march into combat, and the barrier to starting a conflict drops significantly. The real risks that come from automated weapons systems like drones aren’t always the obvious ones. Many of them take the form of second-order effects — the knock-on consequences of setting up a world where multiple countries have large automated forces. But what can we do about them? That’s the question we’ll be taking on during this episode of the podcast with Jakob Foerster, an early pioneer in multi-agent reinforcement learning and incoming faculty member at the University of Toronto. Jakob has been involved in the debate over weaponized drone automation for some time, and recently wrote an open letter to German politicians urging them to consider the risks associated with the deployment of this technology.
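To spell out the prisoner's-dilemma structure Jakob is pointing at, here is a minimal sketch with made-up payoffs: deploying automated weapons is each side's best response no matter what the other side does, yet mutual deployment leaves both worse off than mutual restraint.

```python
# Illustrative payoff matrix for two countries deciding whether to deploy
# automated weapons. Payoffs are (row player, column player); higher is better.
# The numbers are assumptions chosen only to exhibit the dilemma.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),
    ("restrain", "deploy"):   (0, 4),
    ("deploy",   "restrain"): (4, 0),
    ("deploy",   "deploy"):   (1, 1),
}
ACTIONS = ["restrain", "deploy"]

def best_response(opponent_action):
    # The row player's payoff-maximizing choice given the opponent's action.
    return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])

for other in ACTIONS:
    print(f"if the other side plays {other!r}, best response is {best_response(other)!r}")

# Both sides reasoning this way end up at ('deploy', 'deploy') with payoffs (1, 1),
# even though ('restrain', 'restrain') would give both sides 3.
```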
Apr 28, 2021 • 56min

81. Nicolas Miailhe - AI risk is a global problem

In December 1938, a frustrated nuclear physicist named Leo Szilard wrote a letter to the British Admiralty telling them that he had given up on his greatest invention — the nuclear chain reaction. "The idea of a nuclear chain reaction won’t work. There’s no need to keep this patent secret, and indeed there’s no need to keep this patent too. It won’t work." — Leo Szilard What Szilard didn’t know when he licked the envelope was that, on that very same day, a research team in Berlin had just split the uranium atom for the very first time. Within a year, the work that would become the Manhattan Project had been set in motion, and by 1945, the first atomic bomb was dropped on the Japanese city of Hiroshima. It was only four years later — just over a decade after Szilard had written off the idea as impossible — that the Soviet Union successfully tested its first atomic weapon, kicking off a global nuclear arms race that continues in various forms to this day. It’s a surprisingly short jump from cutting-edge technology to global-scale risk. But although the nuclear story is a high-profile example of this kind of leap, it’s far from the only one. Today, many see artificial intelligence as a class of technology whose development will lead to global risks — and as a result, as a technology that needs to be managed globally. In much the same way that international treaties have allowed us to reduce the risk of nuclear war, we may need global coordination around AI to mitigate its potential negative impacts. One of the world’s leading experts on AI’s global coordination problem is Nicolas Miailhe. Nicolas is the co-founder of The Future Society, a global nonprofit whose primary focus is encouraging responsible adoption of AI, and ensuring that countries around the world come to a common understanding of the risks associated with it. Nicolas is a veteran of the prestigious Harvard Kennedy School of Government, an appointed expert to the Global Partnership on AI, and an advisor to cities, governments, and international organizations on AI policy.
