The 80,000 Hours Podcast on Artificial Intelligence

Latest episodes

Sep 1, 2023 • 2h 42min

Ten: Nova DasSarma on why information security may be critical to the safe development of AI systems

Originally released in June 2022.

If a business has spent $100 million developing a product, it's a fair bet that they don't want it stolen in two seconds and uploaded to the web where anyone can use it for free.

This problem exists in extreme form for AI companies. These days, the electricity and equipment required to train cutting-edge machine learning models that generate uncanny human text and images can cost tens or hundreds of millions of dollars. But once trained, such models may be only a few gigabytes in size and run just fine on ordinary laptops.

Today's guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic. One of her jobs is to stop hackers exfiltrating Anthropic's incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models' small size, the need to store such models on internet-connected servers, and the poor state of computer security in general, this is a serious challenge.

Links to learn more, summary and full transcript.

The worries aren't purely commercial though. This problem looms especially large for the growing number of people who expect that in coming decades we'll develop so-called artificial 'general' intelligence systems that can learn and apply a wide range of skills all at once, and thereby have a transformative effect on society.

If aligned with the goals of their owners, such general AI models could operate like a team of super-skilled assistants, going out and doing whatever wonderful (or malicious) things are asked of them. This might represent a huge leap forward for humanity, though the transition to a very different new economy and power structure would have to be handled delicately.

If unaligned with the goals of their owners or humanity as a whole, such broadly capable models would naturally 'go rogue,' breaking their way into additional computer systems to grab more computing power — all the better to pursue their goals and make sure they can't be shut off.

As Nova explains, in either case, we don't want such models disseminated all over the world before we've confirmed they are deeply safe and law-abiding, and have figured out how to integrate them peacefully into society. In the first scenario, premature mass deployment would be risky and destabilising. In the second scenario, it could be catastrophic — perhaps even leading to human extinction if such general AI systems turn out to be able to self-improve rapidly rather than slowly.

If highly capable general AI systems are coming in the next 10 or 20 years, Nova may be flying below the radar with one of the most important jobs in the world.

We'll soon need the ability to 'sandbox' (i.e. contain) models with a wide range of superhuman capabilities, including the ability to learn new skills, for a period of careful testing and limited deployment — preventing the model from breaking out, and criminals from breaking in. Nova and her colleagues are trying to figure out how to do this, but as this episode reveals, even the state of the art is nowhere near good enough.

In today's conversation, Rob and Nova cover:

• How good or bad information security is today
• The most secure computer systems that exist
• How to design an AI training compute centre for maximum efficiency
• Whether 'formal verification' can help us design trustworthy systems
• How wide the gap is between AI capabilities and AI safety
• How to disincentivise hackers
• What listeners should do to strengthen their own security practices
• And much more.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell and Beppe Rådvik
Transcriptions: Katy Moore
Sep 1, 2023 • 2h 5min

Eleven: Catherine Olsson & Daniel Ziegler on the fast path into high-impact ML engineering roles

Guests Catherine Olsson and Daniel Ziegler discuss their highly accelerated career paths into ML engineering, sharing tips and advice for others interested in the field. They talk about their experiences at OpenAI and Google Brain, including projects like Universe and Dota 2, and emphasize the importance of diving in and learning on the job. The conversation also covers the skills required for high-impact ML engineering roles, how to transition into ML engineering, what to consider when choosing an organisation, and the value of implementing research papers.
Sep 1, 2023 • 2h 24min

Bonus: Preventing an AI-related catastrophe (Article)

Originally released in August 2022.

Today's release is a professional reading of our new problem profile on preventing an AI-related catastrophe, written by Benjamin Hilton.

We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don't find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks.

Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this. As a result, the possibility of AI-related catastrophe may be the world's most pressing problem — and the best thing to work on for those who are well-placed to contribute.

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we'll need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more.

If you want to check out the links, footnotes and figures in today's article, you can find those here.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

Producer: Keiran Harris
Editing and narration: Perrin Walker and Shaun Acker
Audio proofing: Katy Moore
Sep 1, 2023 • 48min

Bonus: China-related AI safety and governance paths (Article)

Article originally published February 2022.

In this episode of 80k After Hours, Perrin Walker reads our career review of China-related AI safety and governance paths.

Here's the original piece if you'd like to learn more. You might also want to check out Benjamin Todd and Brian Tse's article on Improving China-Western coordination on global catastrophic risks.

Get this episode by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type '80k After Hours' into your podcasting app.

Editing and narration: Perrin Walker
Audio proofing: Katy Moore
