“Orienting to 3 year AGI timelines” by Nikola Jurkovic
Dec 23, 2024
Nikola Jurkovic, an author and workshop leader on AGI timelines, shares his prediction that AGI will arrive in just three years. He discusses the implications of this rapid advancement and urges proactive strategies for navigating the impending landscape: the crucial variables shaping the near future, the transition from the pre-automation era to a post-automation world, and the key players in the field. He also identifies currently unmet prerequisites for humanity's survival and outlines robustly good actions to take as this transformative period approaches.
Achieving AGI within three years necessitates urgent preparations for safe delegation of tasks to AI agents to mitigate risks.
The transition from human-led to AI-led research post-2027 requires robust safety frameworks and strategic planning to ensure human oversight.
Deep dives
Implications of Short AGI Timelines
The projection that Artificial General Intelligence (AGI) will be achieved within three years carries significant implications for human behavior and decision-making. With crucial developments expected by the end of 2025, AI assistants will likely become capable of handling a majority of routine software engineering tasks, raising questions about the future roles of human employees. As a result, preparing for the safe delegation of tasks to AI agents becomes paramount, especially with the potential for government intervention and oversight looming. The post emphasizes the need for robust safety measures to manage these rapid advancements and for frameworks that enable smooth transitions.
Pre-Automation and Post-Automation Dynamics
In the pre-automation era leading up to 2026, humans will primarily manage workflows, with the allocation of tasks to AI agents forming a crucial responsibility. Organizations should prioritize establishing effective control systems that guide AI research safely and recommend pausing development when safety concerns arise. Once the post-automation era begins in 2027, AI agents will take over most research responsibilities, making it critical that research direction remains under human oversight. This evolution from human-led to AI-led research requires strategic planning to ensure that the directives given to AI align with safe operational standards.
Challenges for Safety and National Power
Among the currently unmet prerequisites for humanity's survival through AGI development are a sensible takeoff plan and state-proof cybersecurity. With geopolitical tensions likely to intensify around AGI, the risk of nuclear conflict may rise significantly, underscoring the need for preemptive strategies that avoid catastrophic miscalculations. Retaining technical expertise through any nationalization effort is also vital for informed decision-making on AGI research directives. Developing frameworks that guarantee safety and avert monopolistic advantages in AI technology is therefore essential for sustaining global security and stability.
My median expectation is that AGI[1] will be created 3 years from now. This has implications for how to behave, and I will share some useful thoughts I and others have had on how to orient to short timelines.
I’ve led multiple small workshops on orienting to short AGI timelines and compiled the wisdom of around 50 participants (but mostly my thoughts) here. I’ve also participated in multiple short-timelines AGI wargames and co-led one wargame.
This post will assume median AGI timelines of 2027 and will not spend time arguing for this point. Instead, I focus on what the implications of 3-year timelines would be.
I didn’t update much on o3 (as my timelines were already short) but I imagine some readers did and might feel disoriented now. I hope this post can help those people and others in thinking about how to plan for 3 year [...]
---
Outline:
(01:16) A story for a 3 year AGI timeline
(03:46) Important variables based on the year
(03:58) The pre-automation era (2025-2026).
(04:56) The post-automation era (2027 onward).
(06:05) Important players
(08:00) Prerequisites for humanity's survival which are currently unmet
(11:19) Robustly good actions
(13:55) Final thoughts
The original text contained 2 footnotes which were omitted from this narration.