
Towards Data Science

Latest episodes

Feb 10, 2021 • 45min

70. Sarah Williams - What does ethical AI even mean?

There’s no question that AI ethics has received a lot of well-deserved attention lately. But ask the average person what ethical AI means, and you’re as likely as not to get a blank stare. I think that’s largely because every data science or machine learning problem comes with a unique ethical context, so it can be hard to pin down ethics principles that generalize to a wide class of AI problems. Fortunately, there are researchers who focus on just this issue — and my guest today, Sarah Williams, is one of them. Sarah is an associate professor of urban planning and the director of the Civic Data Design Lab at MIT’s School of Architecture and Planning. Her job is to study applications of data science to urban planning, and to work with policymakers on applying AI in an ethical way. Through that process, she’s distilled several generalizable AI ethics principles that have practical and actionable implications. This episode was a wide-ranging discussion about everything from the way our ideologies can colour our data analysis to the challenges governments face when trying to regulate AI.

Feb 3, 2021 • 1h 6min

69. Anders Sandberg - Answering the Fermi Question: Is AI our Great Filter?

The apparent absence of alien life in our universe has been a source of speculation and controversy in scientific circles for decades. If we assume that there’s even a tiny chance that intelligent life might evolve on a given planet, it seems almost impossible to imagine that the cosmos isn’t brimming with alien civilizations. So where are they? That’s what Anders Sandberg calls the “Fermi Question”: given the unfathomable size of the universe, how come we have seen no signs of alien life? Anders is a researcher at the University of Oxford’s Future of Humanity Institute, where he tries to anticipate the ethical, philosophical and practical questions that human beings are going to have to face as we approach what could be a technologically unbounded future. That work focuses to a great extent on superintelligent AI and the existential risks it might create. As part of that work, he’s studied the Fermi Question in great detail, and what it implies for the scarcity of life and the value of the human species.

Jan 27, 2021 • 1h 1min

68. Silvia Milano - Ethical problems with recommender systems

One of the consequences of living in a world where we have every kind of data we could possibly want at our fingertips is that we have far more data available to us than we could possibly review. Wondering which university program you should enter? You could visit any one of a hundred thousand websites that each offer helpful insights, or take a look at ten thousand different program options on hundreds of different universities’ websites. The only snag is that, by the time you finish that review, you probably could have graduated. Recommender systems allow us to take controlled sips from the information fire hose that’s pointed our way every day of the week, by highlighting a small number of particularly relevant or valuable items from a vast catalog. And while they’re incredibly valuable pieces of technology, they also have some serious ethical failure modes — many of which arise because companies tend to build recommenders to reflect user feedback, without thinking of the broader implications these systems have for society and human civilization. Those implications are significant, and growing fast. Recommender algorithms deployed by Twitter and Google regularly shape public opinion on the key moral issues of our time — sometimes intentionally, and sometimes even by accident. So rather than allowing society to be reshaped in the image of these powerful algorithms, perhaps it’s time we asked some big questions about the kind of world we want to live in, and worked backward to figure out what our answers would imply for the way we evaluate recommendation engines. That’s exactly why I wanted to speak with Silvia Milano, my guest for this episode of the podcast. Silvia is an expert on the ethics of recommender systems, and a researcher at Oxford’s Future of Humanity Institute and at the Oxford Internet Institute, where she’s been involved in work aimed at better understanding the hidden impact of recommendation algorithms, and what can be done to mitigate their more negative effects. Our conversation led us to consider complex questions, including the definition of identity, the human right to self-determination, and the interaction of governments with technology companies.

Jan 20, 2021 • 50min

67. Joaquin Quiñonero-Candela - Responsible AI at Facebook

Facebook routinely deploys recommendation systems and predictive models that affect the lives of billions of people every day. That kind of reach comes with great responsibility — among other things, the responsibility to develop AI tools that are ethical, fair and well characterized. This isn’t an easy task. Human beings have spent thousands of years arguing about what “fairness” and “ethics” mean, and haven’t come close to a consensus. Which is precisely why the responsible AI community has to involve as many disparate perspectives as possible in determining what policies to explore and recommend — a practice that Facebook’s Responsible AI team has itself adopted. For this episode of the podcast, I’m joined by Joaquin Quiñonero-Candela, the Distinguished Tech Lead for Responsible AI at Facebook. Joaquin has been at the forefront of the AI ethics and fairness movements for years, and has overseen the formation of Facebook’s responsible AI team. As a result, he’s one of relatively few people with hands-on experience making critical AI ethics decisions at scale, and seeing their effects. Our conversation covered a lot of ground, from philosophical questions about the definition of fairness, to practical challenges that arise when implementing certain ethical AI frameworks.

Jan 13, 2021 • 48min

66. Owain Evans - Predicting the future of AI

Most researchers agree we’ll eventually reach a point where our AI systems begin to exceed human performance at virtually every economically valuable task, including the ability to generalize from what they’ve learned to take on new tasks that they haven’t seen before. These artificial general intelligences (AGIs) would in all likelihood have transformative effects on our economies, our societies and even our species. No one knows what these effects will be, or when AGI systems will be developed that can bring them about. But that doesn’t mean these things aren’t worth predicting or estimating. The more we know about the amount of time we have to develop robust solutions to important AI ethics, safety and policy problems, the more clearly we can think about what problems should be receiving our time and attention today. That’s the thesis that motivates a lot of work on AI forecasting: the attempt to predict key milestones in AI development, on the path to AGI and super-human artificial intelligence. It’s still early days for this space, but it’s received attention from an increasing number of AI safety and AI capabilities researchers. One of those researchers is Owain Evans, whose work at Oxford University’s Future of Humanity Institute is focused on techniques for learning about human beliefs, preferences and values from observing human behavior or interacting with humans. Owain joined me for this episode of the podcast to talk about AI forecasting, the problem of inferring human values, and the ecosystem of research organizations that support this type of research.

Jan 6, 2021 • 43min

65. Helen Toner - The strategic and security implications of AI

With every new technology comes the potential for abuse. And while AI is clearly starting to deliver an awful lot of value, it’s also creating new systemic vulnerabilities that governments now have to worry about and address. Self-driving cars can be hacked. Speech synthesis can make traditional ways of verifying someone’s identity less reliable. AI can be used to build weapons systems that are less predictable. As AI technology continues to develop and become more powerful, we’ll have to worry more about safety and security. But competitive pressures risk encouraging companies and countries to focus on capabilities research rather than responsible AI development. Solving this problem will be a big challenge, and it’s probably going to require new national AI policies, and international norms and standards that don’t currently exist. Helen Toner is Director of Strategy at the Center for Security and Emerging Technology (CSET), a US policy think tank that connects policymakers to experts on the security implications of new technologies like AI. Her work spans national security, technology policy and international AI competition, and she’s become an expert on AI in China in particular. Helen joined me for a special AI policy-themed episode of the podcast.

Dec 30, 2020 • 51min

64. David Krueger - Managing the incentives of AI

What does a neural network system want to do? That might seem like a straightforward question. You might imagine that the answer is “whatever the loss function says it should do.” But when you dig into it, you quickly find that the answer is much more complicated than that might imply. In order to accomplish their primary goal of optimizing a loss function, algorithms often develop secondary objectives (known as instrumental goals) that are tactically useful for that main goal. For example, a computer vision algorithm designed to tell faces apart might find it beneficial to develop the ability to detect noses with high fidelity. Or in a more extreme case, a very advanced AI might find it useful to monopolize the Earth’s resources in order to accomplish its primary goal — and it’s been suggested that this might actually be the default behavior of powerful AI systems in the future. So, what does an AI want to do? Optimize its loss function — perhaps. But a sufficiently complex system is likely to also manifest instrumental goals. And if we don’t develop a deep understanding of AI incentives, and reliable strategies to manage those incentives, we may be in for an unpleasant surprise when unexpected and highly strategic behavior emerges from systems with simple and desirable primary goals. Which is why it’s a good thing that my guest today, David Krueger, has been working on exactly that problem. David studies deep learning and AI alignment at MILA, and joined me to discuss his thoughts on AI safety, and his work on managing the incentives of AI systems.

Dec 23, 2020 • 1h 12min

63. Geordie Rose - Will AGI need to be embodied?

The leap from today’s narrow AI to a more general kind of intelligence seems likely to happen at some point in the next century. But no one knows exactly how: at the moment, AGI remains a significant technical and theoretical challenge, and expert opinion about what it will take to achieve it varies widely. Some think that scaling up existing paradigms — like deep learning and reinforcement learning — will be enough, but others think these approaches are going to fall short. Geordie Rose is one of them, and his voice is one that’s worth listening to: he has deep experience with hard tech, from founding D-Wave (the world’s first quantum computing company), to building Kindred Systems, a company pioneering applications of reinforcement learning in industry that was recently acquired for $350 million. Geordie is now focused entirely on AGI. Through his current company, Sanctuary AI, he’s working on an exciting and unusual thesis. At the core of this thesis is the idea that one of the easiest paths to AGI will be to build embodied systems: AIs with physical structures that can move around in the real world and interact directly with objects. Geordie joined me for this episode of the podcast to discuss his AGI thesis, as well as broader questions about AI safety and AI alignment.

Dec 16, 2020 • 50min

62. Nicolai Baldin - AI meets the law: Bias, fairness, privacy and regulation

The fields of AI bias and AI fairness are still very young. And just like most young technical fields, they’re dominated by theoretical discussions: researchers argue over what words like “privacy” and “fairness” mean, but don’t do much in the way of applying these definitions to real-world problems. Slowly but surely, this is all changing though, and government oversight has had a big role to play in that process. Laws like GDPR — passed by the European Union in 2016 — are starting to impose concrete requirements on companies that want to use consumer data, or build AI systems with it. There are pros and cons to legislating machine learning, but one thing’s for sure: there’s no looking back. At this point, it’s clear that government-endorsed definitions of “bias” and “fairness” in AI systems are going to be applied to companies (and therefore to consumers), whether they’re well-developed and thoughtful or not. Keeping up with the philosophy of AI is a full-time job for most, but actually applying that philosophy to real-world corporate data is its own additional challenge. My guest for this episode of the podcast is doing just that: Nicolai Baldin is a former Cambridge machine learning researcher, and now the founder and CEO of Synthesized, a startup that specializes in helping companies apply privacy, AI fairness and bias best practices to their data. Nicolai is one of relatively few people working on concrete problems in these areas, and has a unique perspective on the space as a result.

Dec 9, 2020 • 1h 22min

61. Ben Goertzel - The unorthodox path to AGI

No one knows for sure what it’s going to take to make artificial general intelligence work. But that doesn’t mean that there aren’t prominent research teams placing big bets on different theories: DeepMind seems to be hoping that a brain emulation strategy will pay off, whereas OpenAI is focused on achieving AGI by scaling up existing deep learning and reinforcement learning systems with more data and more compute. Ben Goertzel — a pioneering AGI researcher, and the guy who literally coined the term “AGI” — doesn’t think either of these approaches is quite right. His alternative approach is the strategy currently being used by OpenCog, an open-source AGI project he first released in 2008. Ben is also a proponent of decentralized AI development, due to his concerns about centralization of power through AI as the technology improves. For that reason, he’s currently working on building a decentralized network of AIs through SingularityNET, a blockchain-powered AI marketplace that he founded in 2017. Ben has some interesting and contrarian views on AGI, AI safety, and consciousness, and he was kind enough to explore them with me on this episode of the podcast.
