The Nonlinear Library

The Nonlinear Fund
May 17, 2024 • 5min

AF - Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems by Joar Skalse

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems, published by Joar Skalse on May 17, 2024 on The AI Alignment Forum.

I want to draw attention to a new paper, written by myself, David "davidad" Dalrymple, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, and Joshua Tenenbaum. In this paper we introduce the concept of "guaranteed safe (GS) AI", which is a broad research strategy for obtaining safe AI systems with provable quantitative safety guarantees. Moreover, with a sufficient push, this strategy could plausibly be implemented on a moderately short time scale.

The key components of GS AI are:
1. A formal safety specification that mathematically describes what effects or behaviors are considered safe or acceptable.
2. A world model that provides a mathematical description of the environment of the AI system.
3. A verifier that provides a formal proof (or some other comparable auditable assurance) that the AI system satisfies the safety specification with respect to the world model.

The first thing to note is that a safety specification in general is not the same thing as a reward function, utility function, or loss function (though safety specifications include these objects as special cases). For example, it may specify that the AI system should not communicate outside of certain channels, copy itself to external computers, modify its own source code, or obtain information about certain classes of things in the external world. The safety specifications may be specified manually, generated by a learning algorithm, written by an AI system, or obtained through other means. Further detail is provided in the main paper.

The next thing to note is that most useful safety specifications must be given relative to a world model. Without a world model, we can only use specifications defined directly over input-output relations. However, we want to define specifications over input-outcome relations instead. This is why a world model is a core component of GS AI. Also note that:
1. The world model need not be a "complete" model of the world. Rather, the required amount of detail and the appropriate level of abstraction depend on both the safety specification(s) and the AI system's context of use.
2. The world model should of course account for uncertainty, which may include both stochasticity and nondeterminism.
3. The AI system whose safety is being verified may or may not use a world model, and if it does, we may or may not be able to extract it. However, the world model that is used for the verification of the safety properties need not be the same as the world model of the AI system whose safety is being verified (if it has one).

The world model would likely have to be AI-generated, and should ideally be interpretable. In the main paper, we outline a few potential strategies for producing such a world model.

Finally, the verifier produces a quantitative assurance that the base-level AI controller satisfies the safety specification(s) relative to the world model(s). In the most straightforward form, this could simply take the shape of a formal proof.
However, if a direct formal proof cannot be obtained, then there are weaker alternatives that would still produce a quantitative guarantee. For example, the assurance may take the form of a proof that bounds the probability of failing to satisfy the safety specification, or a proof that the AI system will converge towards satisfying the safety specification (with increasing amounts of data or computational resources, for example). Such proofs are of course often very hard to obtain. However, further progress in automated theorem proving (and relat...
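To make the three components concrete, here is a minimal illustrative sketch (not from the paper): the world model is a toy stochastic simulator, the safety specification is a predicate over trajectories, and the "verifier" is a Monte Carlo estimate of the violation probability rather than a formal proof. All function names, the dynamics, the threshold, and the sample count are assumptions made for illustration only.

```python
# Illustrative sketch only: toy versions of the three GS AI components.
import random

def safety_spec(trajectory):
    """Safety specification: no state in the trajectory may exceed a threshold (assumed)."""
    return all(state <= 100.0 for state in trajectory)

def world_model(policy, horizon=50):
    """Stochastic world model: simulates one trajectory under the given policy."""
    state = 0.0
    trajectory = [state]
    for _ in range(horizon):
        state = state + policy(state) + random.gauss(0.0, 1.0)  # uncertain dynamics
        trajectory.append(state)
    return trajectory

def verifier(policy, n_samples=10_000):
    """Estimates the probability of violating the spec under the world model.
    A real GS AI verifier would aim for a formal proof or a sound bound,
    not a sampled estimate; this only makes the interfaces concrete."""
    violations = sum(not safety_spec(world_model(policy)) for _ in range(n_samples))
    return violations / n_samples

def cautious_policy(state):
    # A simple controller that pushes the state back toward zero.
    return -0.5 * state

if __name__ == "__main__":
    print(f"Estimated violation probability: {verifier(cautious_policy):.4f}")
```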
May 17, 2024 • 2min

EA - Articles about recent OpenAI departures by bruce

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Articles about recent OpenAI departures, published by bruce on May 17, 2024 on The Effective Altruism Forum.

A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them.

Some quotes perhaps worth highlighting:

Even when the team was functioning at full capacity, that "dedicated investment" was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power - perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there'll be much focus on avoiding catastrophic risk from future AI models.

Jan suggesting that compute for safety may have been deprioritised, even despite the 20% commitment. (Wired claims that OpenAI confirms that their "superalignment team is no more".)

"I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen," Kokotajlo told me. "I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit." (Additional kudos to Daniel Kokotajlo for not signing additional confidentiality obligations on departure, which is plausibly relevant for Jan too given his recent thread.)

Edit: Shakeel's article on the same topic. Kelsey's article about the nondisclosure/nondisparagement provisions that OpenAI employees have been offered. Wired claims that OpenAI confirms that their "superalignment team is no more".

1. ^ Covered by Shakeel/Wired, but thought it'd be clearer to list all names together.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
May 17, 2024 • 3min

LW - AISafety.com - Resources for AI Safety by Søren Elverlin

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AISafety.com - Resources for AI Safety, published by Søren Elverlin on May 17, 2024 on LessWrong.

There are many resources for those who wish to contribute to AI Safety, such as courses, communities, projects, jobs, events and training programs, funders and organizations. However, we often hear from people that they have trouble finding the right resources. To address this, we've built AISafety.com as a central hub - a list-of-lists - where community members maintain and curate these resources to increase their visibility and accessibility. In addition to presenting resources, the website is optimized to be an entry point for newcomers to AI Safety, capable of funnelling people towards understanding and contributing.

The website was developed on a shoestring budget, relying extensively on volunteers and Søren paying out of pocket. We do not accept donations, but if you think this is valuable, you're welcome to help out by reporting issues or making suggestions in our tracker, commenting here, or volunteering your time to improve the site.

Feedback
If you're up for giving us some quick feedback, we'd be keen to hear your responses to these questions in a comment:
1. What's the % likelihood that you will use AISafety.com within the next 1 year? (Please be brutally honest)
   1. What list of resources will you use?
   2. What could be changed (features, content, design, whatever) to increase that chance?
2. What's the % likelihood that you will send AISafety.com to someone within the next 1 year?
   1. What could be changed (features, content, design, whatever) to increase that chance?
3. Any other general feedback you'd like to share

Credits
Project owner and funder - Søren Elverlin
Designer and frontend dev - Melissa Samworth
QA and resources - Bryce Robertson
Backend dev lead - nemo
Volunteers - plex, Siao Si Looi, Mathilde da Rui, Coby Joseph, Bart Jaworski, Rika Warton, Juliette Culver, Jakub Bares, Jordan Pieters, Chris Cooper, Sophia Moss, Haiku, agucova, Joe/Genarment, Kim Holder (Moonwards), de_g0od, entity, Eschaton
Reading guide embedded from AISafety.info by Aprillion (Peter Hozák)
Jobs pulled from 80,000 Hours Jobs Board and intro video adapted from 80,000 Hours' intro with permission
Communities list, The Map of Existential Safety, AI Ecosystem Projects, Events & Training programs adapted from their respective Alignment Ecosystem Development projects (join the Discord for discussion and other projects!). Funding list adapted from Future Funding List, maintained by AED.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
May 17, 2024 • 1min

EA - Announcing La bisagra de la historia, a Spanish-speaking podcast by Pablo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing La bisagra de la historia, a Spanish-speaking podcast, published by Pablo on May 17, 2024 on The Effective Altruism Forum.

We are pleased to announce the publication of La bisagra de la historia (in English, The Hinge of History), a podcast where we interview Spanish-speaking experts working on some of the most important problems of the century. The podcast is hosted by Laura González Salmerón and Pablo Stafforini, with occasional contributions by Pablo Melchor.

Our first two episodes are now out:
Adrià Garriga-Alonso on the existential risk associated with large language models.
José Jaime Villalobos on mitigating existential risks and protecting future generations through law.

We will release a new episode at the beginning of every month. If you have any feedback about the episodes, or suggestions for future guests or topics, we'd love to hear from you.

Acknowledgments: We thank Amplify for their financial support.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
May 17, 2024 • 9min

AF - Identifying Functionally Important Features with End-to-End Sparse Dictionary Learning by Dan Braun

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Identifying Functionally Important Features with End-to-End Sparse Dictionary Learning, published by Dan Braun on May 17, 2024 on The AI Alignment Forum.

A short summary of the paper is presented below. This work was produced by Apollo Research in collaboration with Jordan Taylor (MATS + University of Queensland).

TL;DR: We propose end-to-end (e2e) sparse dictionary learning, a method for training SAEs that ensures the features learned are functionally important by minimizing the KL divergence between the output distributions of the original model and the model with SAE activations inserted. Compared to standard SAEs, e2e SAEs offer a Pareto improvement: They explain more network performance, require fewer total features, and require fewer simultaneously active features per datapoint, all with no cost to interpretability. We explore geometric and qualitative differences between e2e SAE features and standard SAE features.

Introduction
Current SAEs focus on the wrong goal: They are trained to minimize mean squared reconstruction error (MSE) of activations (in addition to minimizing their sparsity penalty). The issue is that the importance of a feature as measured by its effect on MSE may not strongly correlate with how important the feature is for explaining the network's performance. This would not be a problem if the network's activations used a small, finite set of ground truth features -- the SAE would simply identify those features, and thus optimizing MSE would lead the SAE to learn the functionally important features. In practice, however, Bricken et al. observed the phenomenon of feature splitting, where increasing dictionary size while increasing sparsity allows SAEs to split a feature into multiple, more specific features, representing smaller and smaller portions of the dataset. In the limit of large dictionary size, it would be possible to represent each individual datapoint as its own dictionary element.

Since minimizing MSE does not explicitly prioritize learning features based on how important they are for explaining the network's performance, an SAE may waste much of its fixed capacity on learning less important features. This is perhaps responsible for the observation that, when measuring the causal effects of some features on network performance, a significant amount is mediated by the reconstruction residual errors (i.e. everything not explained by the SAE) and not mediated by SAE features (Marks et al.).

Given these issues, it is therefore natural to ask how we can identify the functionally important features used by the network. We say a feature is functionally important if it is important for explaining the network's behavior on the training distribution. If we prioritize learning functionally important features, we should be able to maintain strong performance with fewer features used by the SAE per datapoint as well as fewer overall features. To optimize SAEs for these properties, we introduce a new training method. We still train SAEs using a sparsity penalty on the feature activations (to reduce the number of features used on each datapoint), but we no longer optimize activation reconstruction.
Instead, we replace the original activations with the SAE output and optimize the KL divergence between the original output logits and the output logits when passing the SAE output through the rest of the network, thus training the SAE end-to-end (e2e). One risk with this method is that it may be possible for the outputs of SAE_e2e to take a different computational pathway through subsequent layers of the network (compared with the original activations) while nevertheless producing a similar output distribution. For example, it might learn a new feature that exploits a particular transformation in a downstream layer that is unused by the regular netw...
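As a rough sketch of the training objective described above (not the authors' code; the `run_to_layer` and `run_from_layer` helpers are hypothetical names for "run the network up to the SAE'd layer" and "run the rest of the network"), the e2e loss swaps activation-reconstruction error for a KL term on the output logits plus a sparsity penalty on the feature activations:

```python
# Illustrative sketch of an end-to-end SAE loss (assumptions noted above).
import torch
import torch.nn.functional as F

def e2e_sae_loss(model, sae, tokens, sparsity_coeff=1e-3):
    with torch.no_grad():
        acts = model.run_to_layer(tokens)          # activations at the chosen layer (hypothetical helper)
        orig_logits = model.run_from_layer(acts)   # original output logits (hypothetical helper)
    feats = sae.encode(acts)                       # sparse feature activations
    sae_out = sae.decode(feats)                    # replacement activations
    new_logits = model.run_from_layer(sae_out)     # logits with the SAE output inserted
    # KL(original distribution || SAE-substituted distribution) over output logits ...
    kl = F.kl_div(F.log_softmax(new_logits, dim=-1),
                  F.log_softmax(orig_logits, dim=-1),
                  log_target=True, reduction="batchmean")
    # ... plus an L1 sparsity penalty on the feature activations.
    sparsity = feats.abs().sum(dim=-1).mean()
    return kl + sparsity_coeff * sparsity
```

Only the SAE's parameters would be optimized against this loss; the base model's weights stay frozen.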
May 17, 2024 • 9min

EA - Beneficentric Virtue Ethics by Richard Y Chappell

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beneficentric Virtue Ethics, published by Richard Y Chappell on May 17, 2024 on The Effective Altruism Forum.

I've previously suggested a constraint on warranted hostility: the target must be ill-willed and/or unreasonable. Common hostility towards either utilitarianism or effective altruism seems to violate this constraint. I could see someone reasonably disagreeing with the former view, and at least abstaining from the latter project, but I don't think either could reasonably be regarded as inherently ill-willed or unreasonable. Perhaps the easiest way to see this is to just imagine a beneficentric virtue ethicist who takes scope-sensitive impartial benevolence to be the central (or even only) virtue. Their imagined virtuous agent seems neither ill-willed nor unreasonable. But the agent thus imagined would presumably be committed to the principles of effective altruism. On the stronger version, where benevolence is the sole virtue, the view described is just utilitarianism by another name.[1]

The Good-Willed Utilitarian
A lot of my research is essentially about why an ideally virtuous person would be a utilitarian or something close to it. (Equivalently: why benevolence plausibly trumps other virtues in importance.) Many philosophers make false assumptions about utilitarianism that unfairly malign the view and its proponents. For a series of important correctives, see, e.g., Bleeding-Heart Consequentialism, Level-up Impartiality, Theses on Mattering, How Intention Matters, and Naïve Instrumentalism vs Principled Proceduralism. (These posts should be required reading for anyone who wants to criticize utilitarianism.) Conversely, one of my central objections to non-consequentialist views is precisely that they seem to entail severe disrespect or inadequate concern for agents arbitrarily disadvantaged under the status quo. My new paradox of deontology and pre-commitment arguments both offer different ways of developing this underlying worry. As a result, I actually find it quite mysterious that more virtue ethicists aren't utilitarians. (Note that the demandingness objection to utilitarianism is effectively pleading to let us be less than ideally virtuous.)

At its heart, I see utilitarianism as the combination of (exclusively) beneficentric moral goals + instrumental rationality. Beneficentric goals are clearly good, and plausibly warrant higher priority than any competing goals. ("Do you really think that X is more important than saving and improving lives?" seems like a pretty compelling objection for any non-utilitarian value X.) And instrumental rationality, like "competence", is an executive virtue: good to have in good people, bad to have in bad people. It doesn't turn good into bad. So it's very puzzling that so many seem to find utilitarianism "deeply appalling". To vindicate such a claim, you really need to trace the objectionability back to one of the two core components of the view: exclusively beneficentric goals, or instrumental rationality. Neither seems particularly "appalling".[2]

Effective Altruism and Good Will
Utilitarianism remains controversial. I get that. What's even more baffling is that hostility extends to effective altruism: the most transparently well-motivated moral view one could possibly imagine.
If anyone really thinks that the ideally virtuous agent would be opposed to either altruism or effectiveness, I'd love to hear their reasoning! (I think this is probably the most clear-cut no-brainer in all of philosophy.) A year ago, philosopher Mary Townsend took a stab, writing that: any morality that prioritizes the distant, whether the distant poor or the distant future, is a theoretical-fanaticism, one that cares more about the coherence of its own ultimate intellectual triumph - and not getting its hands dirty - than about the fate of huma...
May 17, 2024 • 5min

EA - EA Netherlands' Theory of Change by James Herbert

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Netherlands' Theory of Change, published by James Herbert on May 17, 2024 on The Effective Altruism Forum.

Below is a summary of our theory of change. We've been working with this theory for well over a year now, but it hasn't been widely shared. To develop it, we followed Charity Entrepreneurship's advice and used these three sources as a guide. However, we mostly used this, which is an updated version of one of the aforementioned sources. It isn't complete. We have a 35-page internal doc with the situation analysis, detailed descriptions of the target groups, etc. However, a few parts are out of date so we'd rather not share it widely. There are also several best practices we still need to incorporate, as suggested in resources such as this.

Summary of our theory of change
In diagram form
In written form

Ultimate Aim
Help those in the Netherlands excel in contributing to the good, tentatively understanding 'the good' in impartial welfarist terms.[1]

Main Challenge Addressed
Too few people in the Netherlands are making the most of their opportunity to do good. In other words, there is an insufficient number of highly-engaged effective altruists (HEAs) in the country.[2]

Target Groups
1. Proto-EAs
2. Organisers
3. Researchers and practitioners relevant to EA

Proto-EAs
Activities: Introductory courses, events, media appearances, online presence, and a network of groups.
Short-term Outcomes: Increased involvement in the EA community.
Long-term Impact: Enhanced knowledge, skills, attitudes, and behaviours related to altruism.

Organisers
Activities: Regular 1-1 meetings, knowledge-sharing calls/events, annual retreats, and a national Slack workspace.
Short-term Outcomes: Enhanced knowledge, skills, attitudes, and behaviours relating to EA community building.
Long-term Impact: Groups that are better able to enhance the altruistic knowledge, skills, attitudes, and behaviours of their members.

Researchers and practitioners relevant to EA
Activities: Co-working space, retreats (e.g., AI safety), and fellowships (in progress).
Short-term Outcomes: Enhanced knowledge, skills, networks, attitudes, and behaviours related to EA-relevant research and practice.
Long-term Impact: Increased ability to contribute to EA as a research and a practical field, as well as related fields such as AI safety.

FAQ
Q. What about existing members of the community?
A. Our strategy is to have a thriving network of organisers who will serve the needs of this target group. We supplement this with very light-touch things, e.g., a national WhatsApp community, small events at our office, etc. So far, this has been working OK, and has freed us to work on things that otherwise wouldn't get done.

Q. What about career switchers?
A. We experimented with this target group but found we weren't particularly successful. Around the same time, the School for Moral Ambition began to get off the ground, so this area felt far less neglected. Therefore, we decided to put our resources into field building (that's the 'researchers and practitioners relevant to EA' target group).

Q. How are you doing monitoring and evaluation?
A. Not well enough, but we're getting better and have plans[3] and a budget to invest in this further in the second half of 2024. Some programmes are fairly easy to monitor.
For example, last year we had about 60 national intro programme completions,[4] the average LTR was 8.8/10, the average score on the exit quiz was 83%, and 44% reported feeling 'a good deal more' or 'substantially more' inclined to choose a high(er)-impact career path. In addition, CEA asks us to produce an annual report where we detail our progress and provide a set of 12 case studies (examples of individuals who have interacted with us and have then gone on to do cool things). Eventually, we want to have robust monitoring s...
May 17, 2024 • 1min

EA - Netherlands plans 2.4 billion euro aid spending cut by freedomandutility

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Netherlands plans 2.4 billion euro aid spending cut, published by freedomandutility on May 17, 2024 on The Effective Altruism Forum.

https://www.reuters.com/world/europe/what-new-right-wing-dutch-government-plans-do-2024-05-16/

I sometimes feel that international development advocates are too focused on academia, philanthropies and multilateral organisations. I think advocates should be more aware of how vulnerable aid budgets are to cuts. For governments looking to reduce budgets, cuts to foreign aid are often the politically easiest way to achieve this. This vulnerability means that engaging with politics to build cross-party coalitions in favour of development has a high expected value. Even if advocates fail to increase aid budgets, they could be successful in preventing cuts. I think there is a need for more individuals, organisations and philanthropic funders to work in politics and build cross-party coalitions in favour of development.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
May 17, 2024 • 6min

EA - Announcing EA Brazil Summit by Leo Arruda

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing EA Brazil Summit, published by Leo Arruda on May 17, 2024 on The Effective Altruism Forum.

The AEBr Summit, the first major event of Effective Altruism Brazil, will take place on June 29, 2024, in the city of São Paulo, at the Innovation Center of the University of São Paulo (INOVA USP). Our main objective is to expand and strengthen the Brazilian EA community by inviting both newcomers and experienced members to join us for a day of inspiring talks, workshops, and networking! Registrations are open until June 15!

Why an AEBr Summit?
1. The EA movement in Brazil has grown in recent years, with new local and university groups in São Paulo and Rio de Janeiro. This AEBr Summit will be an occasion to promote a sense of community among all members in Brazil, connect people and organizations, and publicize and strengthen the movement.
2. Many people from Brazil face barriers such as visa issues and costs to attend international EAG(x) conferences in the USA, Mexico, and Europe. This hinders Brazilian talents from better integrating into the international community, wasting opportunities for the impact of one of the world's largest nations.

Vision for AEBr Summit
Our main objective is to foster meaningful connections between EAs and others focused on maximizing their impact. The event is also aimed at newcomers to EA, helping them discover the next steps in their EA journey and join a supportive community dedicated to doing good. With this in mind, the AEBr Summit aims to:
Elevate the profile of EA in Brazil.
Create and strengthen connections between Brazilian EAs.
Grow the EA community in Brazil by welcoming new people interested in effective altruism.

Who is the AEBr Summit for?
This conference is for you if:
You live in Brazil, are new to EA, and are eager to learn more and connect with like-minded people. Most of our talks will be introductory and cover a wide range of topics.
You are an experienced member in Brazil eager to engage with the community.
You are an experienced international community member seeking to connect with Brazilian EAs who you might not have encountered before. Please note that we have a limited budget and won't be able to support international travel at this time. Feel free to contact us if we can provide other forms of support for those traveling to Brazil.

What to expect
The event will feature:
Talks and workshops on pressing problems that the EA community is currently trying to solve.
The opportunity to meet and share advice with other EAs, and to engage with social events around the Summit.

If you're unsure whether to sign up, we encourage you to opt in! We want to create an event that is representative of the community as a whole, so we're also actively seeking content suggestions, including speakers and activities. You can even suggest yourself!

See you in São Paulo!
The AEBr Summit Organizing Team 2024

Portuguese version
Announcing the AE Brasil Summit
The AEBr Summit, the first major Effective Altruism event in Brazil, will take place on June 29, 2024, in the city of São Paulo, at the Innovation Center of the University of São Paulo (INOVA USP). Our main objective is to expand and strengthen the Brazilian EA community, inviting both newcomers and experienced members to join us for a day of inspiring talks, workshops, and networking! Registrations are open until June 15!

Why an AEBr Summit?
The EA movement in Brazil has grown in recent years, with new local and university groups in São Paulo and Rio de Janeiro. This AE Brasil gathering will be an occasion to promote a sense of community among all members in Brazil, connect people and organizations, and publicize and strengthen the movement. Many people in Brazil face barriers such as visa issues and costs to attend confer...
May 16, 2024 • 28min

EA - New career review: Nuclear weapons safety and security by Benjamin Hilton

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New career review: Nuclear weapons safety and security, published by Benjamin Hilton on May 16, 2024 on The Effective Altruism Forum.

Note: this post is a (minorly) edited version of a new 80,000 Hours career review.

In 1995, Jayantha Dhanapala chaired a pivotal conference that led to the indefinite extension of the Nuclear Nonproliferation Treaty. This meant committing 185 nations to never possessing nuclear weapons.[1] Dhanapala's path started at age 17, when - after winning a competition with an essay about his hopes for a more peaceful world - he was flown from Sri Lanka to the US to meet Senator John F. Kennedy. That meeting led to a career in diplomacy (he had previously wanted to be a journalist), during which he focused on keeping the world safe from nuclear threats. His story shows that with dedication, persistence, and a little luck, it's possible to contribute to reducing the dangers of nuclear weapons and making the world a safer place.

Summary
Nuclear weapons continue to pose an existential threat to humanity. Reducing the risk means getting nuclear countries to improve their actions and preventing proliferation to non-nuclear countries. We'd guess that the highest impact approaches here involve working in government (especially the US government), researching key questions, or working in communications to advocate for changes. Recommended: If you are well suited to this career, it may be the best way for you to have a social impact.

Thanks to Carl Robichaud and Matthew Gentzel for reviewing this article.

Why working to prevent nuclear conflict is high-impact
The risk of a nuclear conflict continues to haunt the world. We think that the chance of nuclear war per year is around 0.01-2% - large enough to be a substantial global concern. If a nuclear conflict were to break out, the total consequences are hard to predict, but at the very least, tens of millions of people would be killed. It's possible that a nuclear exchange could cause a nuclear winter, triggering crop failures and widespread food shortages that could potentially kill billions. Whether a nuclear war could become an existential catastrophe is highly uncertain - but it remains a possibility. What's more, we think it's unclear whether the world after a nuclear conflict would retain what resilience we currently have to other existential risks, such as potentially catastrophic pandemics or risks from currently unknown future technology. If we're hit with a pandemic in the middle of a nuclear winter, it might be the complete end of the human story. As a result, we think that the risk of nuclear war is one of the world's biggest problems. (Read more in our problem profile on nuclear war.)

Despite this, many of the people with influence in this area, including politicians and national leaders, aren't currently paying much attention to the risks posed by nuclear weapons. So if you can become one of the several hundred people who actually contribute to decisions that affect the risk of nuclear war, you could have an enormous positive impact with your career.

What goals should we be aiming towards?
Ultimately, decisions around the deployment and use of nuclear weapons are in the hands of the nuclear-armed states: the US, the UK, France, Russia, China, India, Pakistan, Israel, and North Korea.
Most plausible paths to reducing nuclear risk involve changing the actions of these countries and their allies. So, which actions would be most beneficial to pursue? Note, we're focusing on the US (and NATO countries) here, because those are the countries most of our readers are well-placed to work in, but we'd expect many of these policies to be useful across the world. (We've written elsewhere about working on policy in an emerging power.) Overall, after talking to experts in the area, we think there's substantia...
