
Nikola Jurkovic

Writer and commentator on AI and long-term futures; author of the essay "Mourning a life without AI," which is narrated in one of the episodes below. Appears here as the primary voice presenting his views on AGI risks, societal effects, and personal reflections.

Top 3 podcast episodes with Nikola Jurkovic

Ranked by the Snipd community
8 snips
Dec 23, 2024 • 15min

“Orienting to 3 year AGI timelines” by Nikola Jurkovic

Nikola Jurkovic, an author and workshop leader on AGI timelines, shares his prediction that AGI will arrive in just three years. He discusses the implications of such rapid progress and urges proactive strategies for navigating the coming transition. Jurkovic covers the crucial variables shaping the near future, the shift from the pre-automation era to a post-automation world, and the key players in the field. He also identifies prerequisites for humanity's survival that remain unmet and outlines robust actions to take as this transformative period approaches.
Nov 10, 2025 • 11min

“Mourning a life without AI” by Nikola Jurkovic

Nikola Jurkovic, a writer and commentator on AI, dives into the existential implications of artificial general intelligence. He argues that AGI may emerge within the next decade and transform society beyond recognition. Nikola discusses the risk of human extinction and how AGI could derail traditional life plans, reshaping everything from education to retirement. He explores both utopian possibilities and nostalgia for a life untouched by AI, blending hope with a tinge of mourning for what we might lose.
Nov 11, 2025 • 9min

“How likely is dangerous AI in the short term?” by Nikola Jurkovic

Nikola Jurkovic, a researcher focused on AI safety, examines the short-term risks of dangerous AI. He notes that current AI systems have a time horizon of just 2 hours, far below the roughly 2,000 hours he estimates would be needed to cause a catastrophe. Jurkovic analyzes past AI breakthroughs like Transformers and AlphaFold, explaining their incremental impacts and why immediate danger is unlikely. With time horizons doubling every six months, he predicts a gradual increase in capabilities and estimates less than a 2% chance of imminent risk; the rough arithmetic behind that gap is sketched below.
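
To make the gap concrete, here is a back-of-the-envelope calculation using only the figures quoted above (the 2-hour horizon, the 2,000-hour threshold, and the six-month doubling time); this is an illustrative sketch, not Jurkovic's own derivation:

```latex
% Doublings needed to grow the time horizon from 2 h to 2,000 h:
\text{doublings} = \log_2\!\left(\frac{2000\,\text{h}}{2\,\text{h}}\right)
                 = \log_2(1000) \approx 9.97 \approx 10

% At one doubling every 6 months (0.5 yr), the implied wait is:
\text{time to threshold} \approx 10 \times 0.5\,\text{yr} = 5\,\text{years}
```

On these assumptions, catastrophe-scale capability sits roughly ten doublings, about five years, away, which is consistent with the episode's claim that danger is unlikely in the immediate term.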
