
80,000 Hours Podcast

Latest episodes

Aug 5, 2019 • 2h 12min

#62 - Paul Christiano on messaging the future, increasing compute, & how CO2 impacts your brain

Imagine that – one day – humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out?

In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably are.

• Links to learn more, summary, and full transcript.
• Paul's first appearance on the show in episode 44.
• An out-take on decision theory.

We could tell them hard-won lessons from history; mention some research questions we wish we'd started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons.

But, as Christiano points out, even if we could satisfactorily figure out what we'd like to be able to tell our ancestors, that's just the first challenge. We'd need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth's surface quickly gets buried far underground.

But even if we figure out a satisfactory message, and a way to ensure it's found, a civilization this far in the future won't speak any language like our own. And being another species, they presumably won't share as many fundamental concepts with us as humans from 1700 would. If we knew a way to leave them thousands of books and pictures in a material that wouldn't break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery?

That's just one of many playful questions discussed in today's episode with Christiano — a frequent writer who's willing to brave questions that others find too strange or hard to grapple with.

We also talk about why divesting a little bit from harmful companies might be more useful than I'd been thinking, whether creatine might make us a bit smarter, and whether carbon dioxide-filled conference rooms make us a lot stupider.

Finally, we get a big update on progress in machine learning and efforts to make sure it's reliably aligned with our goals, which is Paul's main research project. He responds to the views that DeepMind's Pushmeet Kohli espoused in a previous episode, and we discuss whether we'd be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors.

Some other issues that come up along the way include:

• Are there any supplements people can take that make them think better?
• What implications do our views on meta-ethics have for aligning AI with our goals?
• Is there much of a risk that the future will contain anything optimised for causing harm?
• An out-take about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
Jul 17, 2019 • 1h 55min

#61 - Helen Toner on emerging technology, national security, and China

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did. Some think this is the best historical analogy we have for how machine learning could alter life in the 21st century.

In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to communicate quickly with units in the field over great distances. How might international security be altered if the impact of machine learning reaches a similar scope to that of electricity?

Today's guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for disruptive technical changes that might threaten international peace.

• Links to learn more, summary and full transcript
• Philosophy is one of the hardest grad programs. Is it worth it, if you want to use ideas to change the world? by Arden Koehler and Will MacAskill
• The case for building expertise to work on US AI policy, and how to do it by Niel Bowerman
• AI strategy and governance roles on the job board

Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop 'intuitions' that inform their judgement about future cases. This is something humans do constantly, whether we're playing tennis, reading someone's face, diagnosing a patient, or figuring out which business ideas are likely to succeed. Sometimes these ML algorithms can seem uncannily insightful, and they're only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth -- all in the first five minutes of our day.

Rapid advances in ML, and the many prospective military applications, have people worrying about an 'AI arms race' between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could "destabilize everything from nuclear détente to human friendships." Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.

But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?

In today's episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen's experience living and studying in China. We cover:

• Why immigration is the main policy area that should be affected by AI advances today.
• Why talking about an 'arms race' in AI is premature.
• How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
• Whether it's possible to become a China expert and still get a security clearance.
• Can access to ML algorithms be restricted, or is that just not practical?
• Whether AI could help stabilise authoritarian regimes.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
Jun 28, 2019 • 2h 12min

#60 - Phil Tetlock on why accurate forecasting matters for everything, and how you can do it better

Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case?

Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race.

Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day. He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better.

Along with other psychologists, he identified that many ordinary people are attracted to a 'folk probability' that draws just three distinctions — 'impossible', 'possible' and 'certain' — and which leads to major systematic mistakes. But with the right mindset and training we can become capable of accurately discriminating between outcomes as close as 56% and 57% likely.

• Links to learn more, summary and full transcript
• The calibration training app
• Sign up for the Civ-5 counterfactual forecasting tournament
• A review of the evidence on good forecasting practices
• Learn more about Effective Altruism Global

In the aftermath of Iraq and WMDs the US intelligence community hired him to prevent the same ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2014. That was five years ago.

In today's interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement.

We discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to distinguish your '70 percents' from your '80 percents'. A short sketch of how that kind of calibration can be scored follows this episode summary.)

We also bring up some tough methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make improving the reasonableness of decision-making in major institutions that shape the world their profession, as it has been for Tetlock over many decades.

We view Tetlock's work as so core to living well that we've brought him back for a second and longer appearance on the show — his first was back in episode 15.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
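The calibration idea mentioned above can be made concrete with a small worked example. The sketch below scores a set of probabilistic forecasts with the Brier score and prints a simple calibration table. It is purely illustrative and assumes nothing about how the Open Philanthropy Project / Clearer Thinking app actually works; the function names and toy data are made up.

```python
# Illustrative only: a standard way to score probabilistic forecasts (the Brier
# score) plus a crude calibration summary. This is NOT how the training app
# discussed in the episode is implemented; the function names are hypothetical.

from collections import defaultdict


def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; always saying 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)


def calibration_table(forecasts, outcomes, bin_width=0.1):
    """Bucket forecasts by probability and compare each bucket's average stated
    probability with how often the event actually happened."""
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[round(p / bin_width) * bin_width].append((p, o))
    table = {}
    for centre, pairs in sorted(buckets.items()):
        avg_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        table[centre] = (avg_p, freq, len(pairs))
    return table


if __name__ == "__main__":
    # Ten toy forecasts (probability an event happens) and what actually happened.
    forecasts = [0.9, 0.8, 0.7, 0.7, 0.6, 0.4, 0.3, 0.2, 0.2, 0.1]
    outcomes = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0]
    print("Brier score:", round(brier_score(forecasts, outcomes), 3))
    for centre, (avg_p, freq, n) in calibration_table(forecasts, outcomes).items():
        print(f"~{centre:.1f}: said {avg_p:.2f}, happened {freq:.2f} (n={n})")
```

A well-calibrated forecaster's "said" and "happened" columns track each other closely; telling your 70 percents from your 80 percents is exactly what makes those columns line up.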
Jun 17, 2019 • 1h 43min

#59 – Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — coauthor of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens. He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable. (A toy simulation of the 'variable thresholds' mechanism follows this episode summary.)

• Links to learn more, summary and full transcript.
• 80,000 Hours Annual Review 2018.
• How to donate to 80,000 Hours.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions.

According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case.

In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss:

• How much people misrepresent their views in democratic countries.
• Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis.
• When is it justified to encourage your own group to polarise?
• Sunstein's difficult experiences as a pioneer of animal rights law.
• Whether activists can do better by spending half their resources on public opinion surveys.
• Should people be more or less outspoken about their true views?
• What might be the next social revolution to take off?
• How can we learn about social movements that failed and disappeared?
• How to find out what people really think.

Chapters:
• Rob's intro (00:00:00)
• Cass's Harvard lecture on How Change Happens (00:02:59)
• Rob & Cass's conversation about the book (00:41:43)

The 80,000 Hours Podcast is produced by Keiran Harris.
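The 'variable thresholds for action' idea Sunstein draws on is easiest to see in a toy simulation, in the spirit of Granovetter's classic threshold model. The sketch below is a minimal illustration under made-up parameters, not a model taken from How Change Happens: each person joins a movement only once enough others already have, and a tiny change to one person's threshold can flip the outcome from a full cascade to nothing.

```python
# Toy illustration of variable action thresholds: each person joins only once
# the share of others already participating exceeds their personal threshold.
# The thresholds below are invented for illustration; this is not a model or
# dataset from Sunstein's book.

def cascade(thresholds):
    """Return the fraction of people participating once the process settles."""
    n = len(thresholds)
    participating = 0
    while True:
        share = participating / n
        new_total = sum(1 for t in thresholds if t <= share)
        if new_total == participating:
            return share
        participating = new_total


if __name__ == "__main__":
    # Classic example: thresholds of 0%, 1%, 2%, ..., 99% produce a full
    # cascade, but nudging one person's threshold from 1% to 2% stops it cold.
    full = [i / 100 for i in range(100)]
    broken = list(full)
    broken[1] = 0.02              # now nobody has a 1% threshold
    print(cascade(full))          # -> 1.0 (everyone eventually joins)
    print(cascade(broken))        # -> 0.01 (only the instigator acts)
```

The point of the toy model is exactly the one in the episode: outcomes can hinge on details of the threshold distribution that nobody can observe, which is why large shifts look unpredictable even in hindsight.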
Jun 3, 2019 • 1h 30min

#58 – Pushmeet Kohli of DeepMind on designing robust & reliable AI systems and how to succeed in AI

When you're building a bridge, responsibility for making sure it won't fall over isn't handed over to a few 'bridge not falling down engineers'. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.

When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.

Far from being an overhead on the 'real' work, it's an essential part of making AI systems work at all. We don't want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.

Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term 'AI safety research' altogether.

• Want to be notified about high-impact opportunities to help ensure AI remains safe and beneficial? Tell us a bit about yourself and we'll get in touch if an opportunity matches your background and interests.
• Links to learn more, summary and full transcript.
• And a few added thoughts on non-research roles.

With the goal of designing systems that are reliably consistent with desired specifications, DeepMind have recently published work on important technical challenges for the machine learning community.

For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an 'adversary' that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable. (A rough sketch of this kind of worst-case input search follows this episode summary.)

He's also looking into 'training specification-consistent models' and 'formal verification', while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards.

In today's interview, we focus on the convergence between broader AI research and robustness, as well as:

• DeepMind's work on the protein folding problem
• Parallels between ML problems and past challenges in software development and computer security
• How can you analyse the thinking of a neural network?
• Unique challenges faced by DeepMind's technical AGI safety team
• How do you communicate with a non-human intelligence?
• What are the biggest misunderstandings about AI safety and reliability?
• Are there actually a lot of disagreements within the field?
• The difficulty of forecasting AI development

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
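The 'adversary that seeks out the worst failures' idea can be sketched in a few lines. The code below runs a projected-gradient-style search for the worst-case input to a tiny hand-rolled linear classifier. It is a rough illustration under simplified assumptions, not DeepMind's method or code, and every name in it is made up for the example.

```python
# Rough, self-contained sketch of adversarial worst-case input search against a
# tiny hand-rolled linear classifier. DeepMind's work targets real neural
# networks with far more sophisticated attacks; everything here is illustrative.

import numpy as np


def predict_proba(w, b, x):
    """Probability the classifier assigns to class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))


def worst_case_input(w, b, x, true_label, epsilon=0.5, steps=20):
    """Search an L-infinity ball of radius epsilon around x for the perturbation
    that most reduces the probability of the true label. For this linear model
    the gradient is exact, so this is a simple projected-gradient attack."""
    x_adv = x.copy()
    step = epsilon / steps
    for _ in range(steps):
        p = predict_proba(w, b, x_adv)
        # Gradient of the true label's log-likelihood with respect to the input.
        grad = (true_label - p) * w
        # Step against the gradient to make the model as wrong as possible,
        # then project back into the allowed perturbation ball around x.
        x_adv = x_adv - step * np.sign(grad)
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=5), 0.0
    x = np.sign(w)                # an input the model is confident about
    true_label = 1
    x_adv = worst_case_input(w, b, x, true_label, epsilon=1.0)
    print("confidence on clean input:     ", round(predict_proba(w, b, x), 3))
    print("confidence on worst-case input:", round(predict_proba(w, b, x_adv), 3))
```

If the confidence on the worst-case input is still acceptable, the model passes this (very weak) stress test; if it collapses, the adversary has surfaced a rare failure before deployment, which is the workflow the episode describes.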
May 13, 2019 • 2h 18min

Rob Wiblin on human nature, new technology, and living a happy, healthy & ethical life

This is a cross-post of some interviews Rob did recently on two other podcasts — Mission Daily (from 2m) and The Good Life (from 1h13m).

Some of the content will be familiar to regular listeners — but if you're at all interested in Rob's personal thoughts, there should be quite a lot of new material to make listening worthwhile.

The first interview is with Chad Grills. They focused largely on new technologies and existential risks, but also discussed topics like:

• Why Rob is wary of fiction
• Egalitarianism in the evolution of hunter-gatherers
• How to stop social media screwing up politics
• Careers in government versus business

The second interview is with Prof Andrew Leigh, the Shadow Assistant Treasurer in Australia. This one gets into more personal topics than we usually cover on the show, like:

• What advice would Rob give to his teenage self?
• Which person has most shaped Rob's view of living an ethical life?
• Rob's approach to giving to the homeless
• What does Rob do to maximise his own happiness?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
Apr 23, 2019 • 2h 50min

#57 – Tom Kalil on how to do the most good in government

You're 29 years old, and you've just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as possible - before you quit or get kicked out?

That was the challenge put in front of Tom Kalil in 1993. He had enough success to last a full 16 years inside the Clinton and Obama administrations, working to foster the development of the internet, then nanotechnology, and then cutting-edge brain modelling, among other things.

But not everyone figures out how to move the needle. In today's interview, Tom shares his experience with how to increase your chances of getting an influential role in government, and how to make the most of the opportunity if you get in.

• Links to learn more, summary and full transcript.
• Interested in US AI policy careers? Apply for one-on-one career advice here.
• Vacancies at the Center for Security and Emerging Technology.
• Our high-impact job board, which features other related opportunities.

He believes that Congressional gridlock leads people to greatly underestimate how much the Executive Branch can and does do on its own every day. Decisions by individuals change how billions of dollars are spent; regulations are enforced, and then suddenly they aren't; and a single sentence in the State of the Union can get civil servants to pay attention to a topic that would otherwise go ignored.

Over years at the White House Office of Science and Technology Policy, 'Team Kalil' built up a white board of principles. For example, 'the schedule is your friend': setting a meeting date with the President can force people to finish something, where they otherwise might procrastinate.

Or 'talk to who owns the paper'. People would wonder how Tom could get so many lines into the President's speeches. The answer was "figure out who's writing the speech, find them with the document, and tell them to add the line." Obvious, but not something most were doing.

Not everything is a precise operation though. Tom also tells us the story of NetDay, a project that was put together at the last minute because the President incorrectly believed it was already organised – and decided he was going to announce it in person.

In today's episode we get down to nuts & bolts, and discuss:

• How did Tom spin work on a primary campaign into a job in the next White House?
• Why does Tom think hiring is the most important work he did, and how did he decide who to bring onto the team?
• How do you get people to do things when you don't have formal power over them?
• What roles in the US government are most likely to help with the long-term future, or reducing existential risks?
• Is it possible, or even desirable, to get the general public interested in abstract, long-term policy ideas?
• What are 'policy entrepreneurs' and why do they matter?
• What is the role for prizes in promoting science and technology? What are other promising policy ideas?
• Why you can get more done by not taking credit.
• What can the White House do if an agency isn't doing what it wants?
• How can the effective altruism community improve the maturity of our policy recommendations?
• How much can talented individuals accomplish during a short-term stay in government?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
Apr 15, 2019 • 2h 58min

#56 - Persis Eskander on wild animal welfare and what, if anything, to do about it

Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right?

Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences. Many animals are hunted by predators, and constantly have to remain vigilant about the risk of being killed, and perhaps experiencing the horror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter animals freeze to death; in droughts they die of heat or thirst.

There are fewer than 20 people in the world dedicating their lives to researching these problems. But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number could make the scale of the problem larger than most other near-term concerns.

Links to learn more, summary and full transcript.

Persis urges us to recognise that nature isn't inherently good or bad, but rather the result of an amoral evolutionary process. For those that can't survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death.

But should we actually intervene? How do we know what animals are sentient? How often do animals feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could eventually allow us to massively improve wild animal welfare?

For most of these big questions, the answer is: we don't know. And Persis thinks we're far away from knowing enough to start interfering with ecosystems. But that's all the more reason to start looking at these questions.

There are some concrete steps we could take today, like improving the way wild-caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours.

In today's interview we explore wild animal welfare as a new field of research, and discuss:

• Do we have a moral duty towards wild animals or not?
• How should we measure the number of wild animals?
• What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate?
• Is there a danger in imagining how we as humans would feel if we were put into their situation?
• Should we eliminate parasites and predators?
• How important are insects?
• How strongly should we focus on just avoiding humans going in and making things worse?
• How does this compare to work on farmed animal suffering?
• The most compelling arguments for humanity not dedicating resources to wild animal welfare
• Is there much of a case for the idea that this work could improve the very long-term future of humanity?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:

• The importance of figuring out your values
• Chemistry, psychology, and other different paths towards working on wild animal welfare
• How to break into new fields

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.
Mar 31, 2019 • 2h 31min

#55 – Lutter & Winter on founding charter cities with outstanding governance to end poverty

Mark Lutter and Tamara Winter discuss founding charter cities to end poverty through innovative governance. They explore the potential impact on global poverty, the challenges involved, and comparisons to special economic zones. The discussion also covers historical attempts, governance models, Honduran legislation, and the role charter cities could play in achieving outstanding governance.
Mar 19, 2019 • 2h 54min

#54 – OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms

OpenAI's Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm. How is this possible and what does it show?

In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems. A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different people, moving them around a map to attack an enemy.

Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map. When you're rotating an object in your hand, you sense its friction, but you don't directly perceive the entire shape of the object. In Dota 2, you're unable to see the entire map and perceive what's there by moving around – metaphorically 'touching' the space.

• Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it
• Links to learn more, summary and full transcript

This is true of many apparently distinct problems in life. Compressing different sensory inputs down to a fundamental computational problem which we know how to solve only requires the right general-purpose software. The creation of such increasingly 'broad-spectrum' learning algorithms has been a key story of the last few years, and this development is likely to have unpredictable consequences, heightening the huge challenges that already exist in AI policy. (A minimal sketch of what a shared agent/environment interface can look like follows this episode summary.)

Today's interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2.

We discuss:

• What are the most significant changes in the AI policy world over the last year or two?
• What capabilities are likely to develop over the next five, 10, 15, 20 years?
• How much should we focus on the next couple of years, versus the next couple of decades?
• How should we approach possible malicious uses of AI?
• What are some of the potential ways OpenAI could make things worse, and how can they be avoided?
• Publication norms for AI research
• Where do we stand in terms of arms races between countries or different AI labs?
• The case for creating newsletters
• Should the AI community have a closer relationship to the military?
• Working at OpenAI vs. working in the US government
• How valuable is Twitter in the AI policy world?

Rob is then joined by two of his colleagues – Niel Bowerman & Michelle Hutchinson – to quickly discuss:

• The reaction to OpenAI's release of GPT-2
• Jack's critique of our US AI policy article
• How valuable are roles in government?
• Where do you start if you want to write content for a specific audience?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
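Jack's point about a robot hand and Dota 2 being 'the same problem' comes down to expressing both tasks through a common observation/action interface, so that a single learning algorithm can be reused. The sketch below shows that shape with two made-up toy environments and a naive random-search learner; it is not OpenAI's Dactyl or OpenAI Five setup, and not the reinforcement learning algorithm they actually used.

```python
# Toy sketch of 'one general-purpose learner, many tasks': two invented
# environments share a reset/step interface, so the same (deliberately naive)
# random-search learner trains on both. Not OpenAI's code or algorithm.

import random


class TargetReach:
    """Toy stand-in for a manipulation task: nudge a 1-D 'joint' to a target."""
    def reset(self):
        self.pos, self.target, self.t = 0.0, 3.0, 0
        return [self.pos, self.target]

    def step(self, action):              # action in [-1, 1]
        self.pos += action
        self.t += 1
        reward = -abs(self.target - self.pos)
        return [self.pos, self.target], reward, self.t >= 10


class ChaseEnemy:
    """Toy stand-in for a map game: move toward an enemy that drifts away."""
    def reset(self):
        self.me, self.enemy, self.t = 0.0, 5.0, 0
        return [self.me, self.enemy]

    def step(self, action):
        self.me += action
        self.enemy += 0.2
        self.t += 1
        reward = -abs(self.enemy - self.me)
        return [self.me, self.enemy], reward, self.t >= 10


def episode_return(env, policy):
    """Run one episode and sum the rewards."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


def random_search(env, iterations=200):
    """One general-purpose learner: sample linear policies, keep the best."""
    best_w, best_score = None, float("-inf")
    for _ in range(iterations):
        w = [random.uniform(-1, 1), random.uniform(-1, 1)]
        policy = lambda obs, w=w: max(-1.0, min(1.0, w[0] * obs[0] + w[1] * obs[1]))
        score = episode_return(env, policy)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score


if __name__ == "__main__":
    random.seed(0)
    for env in (TargetReach(), ChaseEnemy()):    # same learner, both tasks
        w, score = random_search(env)
        print(type(env).__name__, "best return:", round(score, 2))
```

The design point is only that `random_search` never needs to know which task it is solving; swapping in a far stronger learner would not change the interface, which is what makes such algorithms 'broad-spectrum'.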
