
Towards Data Science
Note: The TDS podcast's current run has ended.
Researchers and business leaders at the forefront of the field unpack the most pressing questions around data science and AI.
Latest episodes

Apr 21, 2021 • 45min
80. Yan Li - The Surprising Challenges of Global AI Philanthropy
We’ve recorded quite a few podcasts recently about the problems AI creates today and may create in the future. We’ve talked about AI safety, alignment, bias and fairness.
These are important topics, and we’ll continue to discuss them, but I also think it’s important not to lose sight of the value that AI and tools like it bring to the world in the here and now. So for this episode of the podcast, I spoke with Dr Yan Li, a professor who studies data management and analytics, and the co-founder of Techies Without Borders, a nonprofit dedicated to using tech for humanitarian good. Yan has firsthand experience developing and deploying technical solutions for use in poor countries around the world, from Tibet to Haiti.

Apr 14, 2021 • 57min
79. Ryan Carey - What does your AI want?
AI safety researchers are increasingly focused on understanding what AI systems want. That may sound like an odd thing to care about: after all, aren’t we just programming AIs to want certain things by providing them with a loss function, or a number to optimize?
Well, not necessarily. It turns out that AI systems can have incentives that aren’t necessarily obvious based on their initial programming. Twitter, for example, runs a recommender system whose job is nominally to figure out what tweets you’re most likely to engage with. And while that might make you think that it should be optimizing for matching tweets to people, another way Twitter can achieve its goal is by matching people to tweets — that is, making people easier to predict, by nudging them towards simplistic and partisan views of the world. Some have argued that’s a key reason that social media has had such a divisive impact on online political discourse.
So the incentives of many current AIs already deviate from those of their programmers in important and significant ways — ways that are literally shaping society. But there’s a bigger reason they matter: as AI systems continue to develop more capabilities, inconsistencies between their incentives and our own will become more and more important. That’s why my guest for this episode, Ryan Carey, has focused much of his research on identifying and controlling the incentives of AIs. Ryan is a former medical doctor, now pursuing a PhD in machine learning and doing research on AI safety at Oxford University’s Future of Humanity Institute.

Apr 7, 2021 • 45min
78. Melanie Mitchell - Existential risk from AI: A skeptical perspective
As AI systems have become more powerful, an increasing number of people have been raising the alarm about their potential long-term risks. As we’ve covered on the podcast before, many now argue that those risks could even extend to the annihilation of our species by superhuman AI systems that are slightly misaligned with human values.
There’s no shortage of authors, researchers and technologists who take this risk seriously — and they include prominent figures like Eliezer Yudkowsky, Elon Musk, Bill Gates, Stuart Russell and Nick Bostrom. And while I think the arguments for existential risk from AI are sound, and aren’t widely enough understood, I also think that it’s important to explore more skeptical perspectives.
Melanie Mitchell is a prominent and important voice on the skeptical side of this argument, and she was kind enough to join me for this episode of the podcast. Melanie is the Davis Professor of Complexity at the Santa Fe Institute, a Professor of computer science at Portland State University, and the author of Artificial Intelligence: A Guide for Thinking Humans — a book in which she explores arguments for AI existential risk through a critical lens. She’s an active player in the existential risk conversation, and recently participated in a high-profile debate with Stuart Russell, arguing against his AI risk position.

Mar 31, 2021 • 57min
77. Josh Fairfield - AI advances, but can the law keep up?
Powered by Moore’s law, and a cluster of related trends, technology has been improving at an exponential pace across many sectors. AI capabilities in particular have been growing at a dizzying pace, and it seems like every year brings us new breakthroughs that would have been unimaginable just a decade ago. GPT-3, AlphaFold and DALL-E were developed in the last 12 months — and all of this in a context where the size of the largest machine learning models has been increasing tenfold every year for the last decade.
To many, there’s a sharp contrast between the breakneck pace of these advances and the rate at which the laws that govern technologies like AI evolve. Our legal systems are chock full of outdated laws, and politicians and regulators often seem almost comically behind the technological curve. But while there’s no question that regulators face an uphill battle in trying to keep up with a rapidly changing tech landscape, my guest today thinks they have a good shot at doing so — as long as they start to think about the law a bit differently.
His name is Josh Fairfield, and he’s a law and technology scholar and former director of R&D at pioneering edtech company Rosetta Stone. Josh has consulted with U.S. government agencies, including the White House Office of Technology and the Homeland Security Privacy Office, and literally wrote a book about the strategies policymakers can use to keep up with tech like AI.

Mar 24, 2021 • 1h 11min
76. Stuart Armstrong - AI: Humanity's Endgame?
Paradoxically, it may be easier to predict the far future of humanity than to predict our near future.
The next fad, the next Netflix special, the next President — all are nearly impossible to anticipate. That’s because they depend on so many trivial factors: the next fad could be triggered by a viral video someone filmed on a whim, and well, the same could be true of the next Netflix special or President for that matter.
But when it comes to predicting the far future of humanity, we might oddly be on more solid ground. That’s not to say predictions can be made with confidence, but at least they can be made based on economic analysis and first principles reasoning. And most of that analysis and reasoning points to one of two scenarios: we either attain heights we’ve never imagined as a species, or everything we care about gets wiped out in a cosmic scale catastrophe.
Few people have spent more time thinking about the possible endgame of human civilization than my guest for this episode of the podcast, Stuart Armstrong. Stuart is a Research Fellow at Oxford University’s Future of Humanity Institute, where he studies the various existential risks that face our species, focusing most of his work specifically on risks from AI. Stuart is a fascinating and well-rounded thinker with a fresh perspective to share on just about everything you could imagine, and I highly recommend giving the episode a listen.

Mar 17, 2021 • 59min
75. Georg Northoff - Consciousness and AI
For the past decade, progress in AI has mostly been driven by deep learning — a field of research that draws inspiration directly from the structure and function of the human brain. By drawing an analogy between brains and computers, we’ve been able to build computer vision, natural language and other predictive systems that would have been inconceivable just ten years ago.
But analogies work two ways. Now that we have self-driving cars and AI systems that regularly outperform humans at increasingly complex tasks, some are wondering whether reversing the usual approach — and drawing inspiration from AI to inform our approach to neuroscience — might be a promising strategy. This more mathematical approach to neuroscience is exactly what today’s guest, Georg Northoff, is working on. Georg is a professor of neuroscience, psychiatry, and philosophy at the University of Ottawa, and as part of his work developing a more mathematical foundation for neuroscience, he’s explored a unique and intriguing theory of consciousness that he thinks might serve as a useful framework for developing more advanced AI systems that will benefit human beings.

Mar 10, 2021 • 52min
74. Ethan Perez - Making AI safe through debate
Most AI researchers are confident that we will one day create superintelligent systems — machines that can significantly outperform humans across a wide variety of tasks.
If this ends up happening, it will pose some potentially serious problems. Specifically: if a system is superintelligent, how can we maintain control over it? That’s the core of the AI alignment problem — the problem of aligning advanced AI systems with human values.
A full solution to the alignment problem will have to involve at least two things. First, we’ll have to know exactly what we want superintelligent systems to do, and make sure they don’t misinterpret us when we ask them to do it (the “outer alignment” problem). But second, we’ll have to make sure that those systems are genuinely trying to optimize for what we’ve asked them to do, and that they aren’t trying to deceive us (the “inner alignment” problem).
Creating systems that are inner-aligned and superintelligent might seem like different problems — and many think that they are. But in the last few years, AI researchers have been exploring a new family of strategies that some hope will allow us to achieve both superintelligence and inner alignment at the same time. Today’s guest, Ethan Perez, is using these approaches to build language models that he hopes will form an important part of the superintelligent systems of the future. Ethan has done frontier research at Google, Facebook, and MILA, and is now working full-time on developing learning systems with generalization abilities that could one day exceed those of human beings.

Mar 3, 2021 • 1h 8min
73. David Roodman - Economic history and the road to the singularity
There’s a minor mystery in economics that may suggest that things are about to get really, really weird for humanity.
And that mystery is this: many economic models predict that, at some point, human economic output will become infinite.
Now, infinities really don’t tend to happen in the real world. But when they’re predicted by otherwise sound theories, they tend to indicate a point at which the assumptions of those theories break down in some fundamental way. Often, that’s because of things like phase transitions: when gases condense or liquids evaporate, some of their thermodynamic parameters go to infinity — not because anything “infinite” is really happening, but because the equations that describe a gas cease to apply when that gas becomes a liquid, and vice versa.
So how should we think of economic models that tell us that human economic output will one day reach infinity? Is it reasonable to interpret them as predicting a phase transition in the human economy — and if so, what might that transition look like? These are hard questions to answer, but they’re questions that my guest David Roodman, a Senior Advisor at Open Philanthropy, has thought about a lot.
David has centered his investigations on what he considers to be a plausible culprit for a potential economic phase transition: the rise of transformative AI technology. His work explores a powerful way to think about how, and even when, transformative AI may change how the economy works in a fundamental way.

Feb 24, 2021 • 1h 22min
72. Margot Gerritsen - Does AI have to be understandable to be ethical?
As AI systems have become more ubiquitous, people have begun to pay more attention to their ethical implications. Those implications are potentially enormous: Google’s search algorithm and Twitter’s recommendation system each have the ability to meaningfully sway public opinion on just about any issue. As a result, Google and Twitter’s choices have an outsized impact — not only on their immediate user base, but on society in general.
That kind of power comes with risk of intentional misuse (for example, Twitter might choose to boost tweets that express views aligned with their preferred policies). But while intentional misuse is an important issue, equally challenging is the problem of avoiding unintentionally bad outputs from AI systems.
Unintentionally bad AIs can lead to various biases that make algorithms perform better for some people than for others, or more generally to systems that are optimizing for things we actually don’t want in the long run. For example, platforms like Twitter and YouTube have played an important role in the increasing polarization of their US (and worldwide) user bases. They never intended to do this, of course, but their effect on social cohesion is arguably the result of internal cultures based on narrow metric optimization: when you optimize for short-term engagement, you often sacrifice long-term user well-being.
The unintended consequences of AI systems are hard to predict, almost by definition. But their potential impact makes them very much worth thinking and talking about — which is why I sat down with Stanford professor, co-director of the Women in Data Science (WiDS) initiative, and host of the WiDS podcast Margot Gerritsen for this episode of the podcast.

Feb 17, 2021 • 1h 9min
71. Ben Garfinkel - Superhuman AI and the future of democracy and government
As we continue to develop more and more sophisticated AI systems, an increasing number of economists, technologists and futurists have been trying to predict what the likely end point of all this progress might be. Will human beings be irrelevant? Will we offload all of our decisions — from what we want to do with our spare time, to how we govern societies — to machines? And what does the emergence of highly capable and highly general AI systems mean for the future of democracy and governance?
These questions are impossible to answer completely and directly, but it may be possible to get some hints by taking a long-term view of the history of human technological development. That’s a strategy that my guest, Ben Garfinkel, is applying in his research on the future of AI. Ben is a physicist and mathematician who now does research on forecasting risks from emerging technologies at Oxford’s Future of Humanity Institute.
Apart from his research on forecasting the future impact of technologies like AI, Ben has also spent time exploring some classic arguments for AI risk, many of which he disagrees with. Since we’ve had a number of guests on the podcast who do take these risks seriously, I thought it would be worth speaking to Ben about his views as well, and I’m very glad I did.