In established academic fields there is a clear sense of what constitutes a publishable unit of work, but in newer areas of research, such as the work done at Open Philanthropy, what counts as a publishable unit is much less clear.
Worldview diversification raises the difficult question of how to allocate resources among different worldviews, such as longtermism, near-termism, and animal-inclusive perspectives.
Principles like 'fairness agreements' and 'outlier opportunities' provide frameworks for this allocation, based respectively on what division the worldviews would perceive as fair and on the relative success and impact of each worldview.
Based on the research and analysis conducted, the median estimate for transformative AI falls roughly between 2050 and 2060, i.e. within the next 30 to 40 years. On that timeline, AI would have a transformative impact on many industries and on society as a whole.
The research on AI timelines underlines the importance of understanding and addressing AI risks sooner rather than later. With transformative AI potentially only a few decades away, it becomes crucial to strategize and allocate resources now to mitigate the risks of advanced AI systems, including work on AI alignment, responsible AI policy, and the robustness and safety of AI technologies.
These estimates carry substantial uncertainty: they rest on many factors and assumptions, and the question remains an active area of study. Further research is needed to refine the predictions and to inform resource allocation, policymaking, and strategic planning in AI.
The podcast discusses the process and challenges of forecasting transformative AI. The speaker explains that they conducted an in-depth analysis of various factors, including computation requirements, algorithmic progress, hardware trends, and economic investment. They mention the importance of estimating the amount of computational power needed to train a transformative model, the rate at which algorithms are improving in efficiency, the cost reduction of hardware over time, and the financial investment in AI research and development. By considering these factors, they propose a range of time estimates for when transformative AI could be achieved.
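To make the structure of that forecast concrete, here is a minimal sketch (not taken from the episode or the report) of how those four inputs can be combined into a date estimate. Every number below is an illustrative placeholder assumption.

```python
# Illustrative sketch of the forecasting logic described above. Every number
# below is a placeholder assumption, not a figure from Ajeya Cotra's report.

REQUIRED_FLOP = 1e34           # assumed compute needed to train a transformative model
FLOP_PER_DOLLAR_2025 = 1e17    # assumed hardware price-performance today
HARDWARE_DOUBLING_YEARS = 2.5  # assumed doubling time of FLOP per dollar
ALGO_DOUBLING_YEARS = 3.0      # assumed doubling time of algorithmic efficiency
SPEND_2025 = 1e9               # assumed largest training-run budget today, in dollars
SPEND_GROWTH = 1.2             # assumed yearly growth of willingness to spend

def effective_compute(year: int) -> float:
    """Effective training compute affordable in a given year, in today-equivalent FLOP."""
    t = year - 2025
    flop_per_dollar = FLOP_PER_DOLLAR_2025 * 2 ** (t / HARDWARE_DOUBLING_YEARS)
    algo_multiplier = 2 ** (t / ALGO_DOUBLING_YEARS)  # better algorithms stretch each FLOP further
    spend = SPEND_2025 * SPEND_GROWTH ** t
    return spend * flop_per_dollar * algo_multiplier

# First year in which affordable effective compute crosses the assumed requirement.
crossover = next(y for y in range(2025, 2200) if effective_compute(y) >= REQUIRED_FLOP)
print(crossover)
```

With these placeholder values the crossover lands in the early 2050s, broadly consistent with the median forecast mentioned above; the point of the sketch is that the answer is driven by a handful of uncertain growth rates.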
The podcast highlights the challenges in determining what constitutes a publishable unit in the field of transformative AI. The speaker explains that unlike traditional academic fields, where standards for publishable work are well-established, the criteria for transformative AI research are less clear. They discuss the difficulty in balancing rigorous justifications with the need to express honest opinions and intuitive reasoning. The speaker reflects on their own writing process and the struggle to make decisions about what to include in the report, considering the importance of clarity, defensibility, and highlighting the uncertainties of the field.
The podcast explores the concept of the 'last dollar project' and its relevance to deciding when and where to allocate resources for philanthropic efforts in the field of AI. The speaker explains that the last dollar project is about maximizing the impact of each dollar spent by identifying the most cost-effective opportunities. They discuss the challenges of determining the value and future availability of opportunities in the field, acknowledging the trade-offs between giving now versus giving later. The speaker highlights ongoing work in quantifying the allocation of resources across time and the consideration of both near-termist and long-termist perspectives.
The podcast episode discusses the importance of allocation over time for near-termist philanthropy. The near-termist side focuses on maximizing the impact of its last dollar, drawing inspiration from GiveWell's work on global health interventions. The episode explores a complex model developed by Peter to guide the near-termist side in spending down its resources, which weighs factors such as the growth of invested money in the market against the declining cost-effectiveness of opportunities over time. The model suggests giving away a constant fraction of the money each year, and depending on the parameters that fraction can imply anything from saving indefinitely to spending down as fast as possible.
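As a rough illustration of the trade-off such a model weighs (not the actual model discussed in the episode), the toy simulation below compares constant annual spend-down rates when invested money grows but the best opportunities become less cost-effective over time. The parameter values are assumptions chosen purely for illustration.

```python
# Toy version of the give-now-versus-later trade-off described above. Parameter
# values are illustrative assumptions, not those in Open Philanthropy's model.

MARKET_RETURN = 0.05   # assumed annual growth of invested funds
IMPACT_DECAY = 0.08    # assumed annual decline in cost-effectiveness of the best opportunities
HORIZON_YEARS = 100

def total_impact(spend_fraction: float) -> float:
    """Total impact from granting a constant fraction of remaining capital each year."""
    capital, impact = 1.0, 0.0
    for year in range(HORIZON_YEARS):
        grant = capital * spend_fraction
        impact += grant * (1 - IMPACT_DECAY) ** year  # later grants buy less impact per dollar
        capital = (capital - grant) * (1 + MARKET_RETURN)
    return impact

# Sweep constant spend rates from 'save almost everything' to 'spend down immediately'.
best = max((f / 100 for f in range(1, 101)), key=total_impact)
print(f"best constant annual spend fraction: {best:.2f}")
```

With the assumed decay rate above exceeding the market return, spending down quickly wins; push the decay rate below the market return and the optimum shifts toward giving away a much smaller fraction each year, which is the kind of parameter sensitivity the episode describes.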
The podcast also discusses the concept of meta research and development (R&D) for faster responses to new pathogens. By investing in meta R&D, it is possible to significantly reduce the time needed to develop and distribute vaccines or antivirals for emerging diseases. This involves funding the development and stockpiling of broad-spectrum vaccines, which can provide protection against multiple viruses. Other strategies include funding tools for rapid on-site detection and leveraging technologies like AlphaFold for protein structure mapping. The episode highlights the potential of this approach and its cost-effectiveness, estimating that investing in meta R&D for biosecurity could save trillions of dollars and potentially reduce existential risk.
Rebroadcast: this episode was originally released in January 2021.
You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?”
You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.
But then you get up, walk outside, and look at the number on your box.
‘3’. Huh. Now you don’t know what to believe.
If God made 10 billion boxes, surely it’s much more likely that you would have seen a number like 7,346,678,928?
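For readers who want the arithmetic, here is the Bayesian bookkeeping behind both intuitions in the thought experiment. The first update uses the 'more observers like me makes this world more likely' style of reasoning; other anthropic assumptions handle that step differently.

```python
from fractions import Fraction

# Bayesian bookkeeping for the boxes thought experiment. The first update weights
# each world by how many observers in it are having your experience; some
# anthropic views reject that step.

N_HEADS = 10               # boxes (and people) if the coin landed heads
N_TAILS = 10_000_000_000   # boxes (and people) if the coin landed tails

odds_heads = Fraction(1, 2)  # prior: a fair coin
odds_tails = Fraction(1, 2)

# Update 1: "I woke up in a box at all" -- weight each world by its number of observers.
odds_heads *= N_HEADS
odds_tails *= N_TAILS
print(float(odds_tails / (odds_heads + odds_tails)))  # ~0.999999999: bet on tails

# Update 2: "my box is labeled 3" -- probability of seeing that exact label in each world.
odds_heads *= Fraction(1, N_HEADS)
odds_tails *= Fraction(1, N_TAILS)
print(float(odds_tails / (odds_heads + odds_tails)))  # 0.5: the two updates cancel exactly
```

Under this way of counting, the evidence from waking up and the evidence from the low box number cancel exactly, leaving you back at 50/50; other anthropic assumptions give different answers, which is part of what makes the puzzle contentious.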
In today’s interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as ‘anthropic reasoning’ could be relevant for figuring out where we should direct our charitable giving.
Links to learn more, summary, and full transcript.
Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by ‘longtermism’ — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future.
Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that’s both very large relative to what’s possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.
But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.
If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.
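The same bookkeeping applied to the episode's numbers makes 'very low indeed' concrete, assuming (purely for illustration) equal prior odds on the two futures:

```python
from fractions import Fraction

# The doomsday version of the same calculation, using the episode's numbers and
# an assumed 50/50 prior between the two possible futures.

PEOPLE_SO_FAR = 100 * 10**9    # roughly the first 100 billion humans
SHORT_FUTURE = 100 * 10**9     # total people ever, if we wipe ourselves out soon
LONG_FUTURE = 1_000 * 10**12   # total people ever, if we settle other stars

# Chance of finding yourself among the first 100 billion, in each future.
lik_short = Fraction(PEOPLE_SO_FAR, SHORT_FUTURE)  # 1: everyone who ever lives is that early
lik_long = Fraction(PEOPLE_SO_FAR, LONG_FUTURE)    # 1/10,000

posterior_long = lik_long / (lik_short + lik_long)
print(float(posterior_long))  # ~0.0001: a ten-thousand-fold update toward the short future
```

This mirrors only the second update from the box example, the one based on your 'label'; whether some analogue of the first update should also apply is one of the points critics of the argument press on.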
If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called ‘doomsday argument’ alone.
If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we’re incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.
There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn’t work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.
In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces in striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.
They also discuss a range of other topics.
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel