Dr Jan Leike, a Research Scientist at DeepMind, shares insights on how to join the world's leading AI team. He discusses the importance of completing a degree in computer science and mathematics, publishing papers, finding a supportive supervisor, and attending top conferences. Jan also talks about the qualities that make someone a good fit for research, and highlights the pressing issue of AGI safety. He and the host also touch on misconceptions about AI, DeepMind's research focus, and the ways current AI systems fail.
Podcast summary created with Snipd AI
Quick takeaways
Working on AI safety is crucial to ensure the safe and beneficial use of artificial intelligence.
Effective objective functions and robustness of machine learning algorithms are important areas of research for AI safety.
Deep dives
The Importance of AI Safety
Working on AI safety is one of the most pressing problems that humanity faces due to the potential risks associated with advances in artificial intelligence. It is crucial to understand the potential dangers of AI systems and design strategies to ensure their safe and beneficial use.
Technical Challenges in AI Safety
Machine learning researchers focus on technical questions related to making AI systems safe. They explore issues such as designing effective objective functions and improving the robustness of machine learning algorithms. Deep reinforcement learning is a key area of research, aiming to train AI agents to behave optimally while considering human preferences and avoiding unintended consequences.
Misconceptions about AI Safety
Public debates on AI safety are often unhelpfully polarized: some participants focus solely on immediate concerns like self-driving cars, while others become alarmist about far-future risks. In reality, informed and balanced discussion is needed to navigate the complex decisions and implications of AI development, build shared understanding, and ensure appropriate safety measures are in place.
Career Paths in AI Safety Research
Pursuing a career in AI safety research provides immense opportunities to make a significant impact. While a PhD in machine learning is highly beneficial, there are alternative paths and intermediate steps to consider, such as internships, research residencies, and collaborations with established researchers. Technical research skills, critical thinking, and a research mindset are key for contributing effectively in this rapidly evolving field.
Want to help steer the 21st century’s most transformative technology? First complete an undergrad degree in computer science and mathematics. Prioritize harder courses over easier ones. Publish at least one paper before you apply for a PhD. Find a supervisor who’ll have a lot of time for you. Go to the top conferences and meet your future colleagues. And finally, get yourself hired.
That’s Dr Jan Leike’s advice on how to join him as a Research Scientist at DeepMind, the world’s leading AI team.
Jan is also a Research Associate at the Future of Humanity Institute at the University of Oxford, and his research aims to make machine learning robustly beneficial. His current focus is getting AI systems to learn good ‘objective functions’ in cases where we can’t easily specify the outcome we actually want.
How might you know you’re a good fit for research?
Jan says to check whether you get obsessed with puzzles and problems, and find yourself mulling over questions that nobody knows the answer to. To do research in a team you also have to be good at clearly and concisely explaining your new ideas to other people.
We also discuss:
* Where Jan's views differ from those expressed by Dario Amodei in episode 3
* Why AGI safety is one of the world's most pressing problems
* Common misconceptions about AI
* Some of the specific things DeepMind is researching
* The ways in which today's AI systems can fail
* The best techniques available today for teaching an AI the right objective function
* What it's like to have some of the world's greatest minds as coworkers
* Who should do empirical research and who should do theoretical research
* What the DeepMind application process is like
* The importance of researchers being comfortable with the unknown
*The 80,000 Hours Podcast is produced by Keiran Harris.*