Joining us for the seventh episode of our AGI Governance series on The Trajectory is Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice: Existential Risk and the Future of Humanity.
Toby is one of the world’s most influential thinkers on long-term risk - and one of the clearest voices on how advanced AI could shape, or shatter, the trajectory of human civilization.
In this episode, Toby unpacks the evolving technical and economic landscape of AGI - particularly the implications of model deployment, imitation learning, and the limits of current training paradigms. Drawing on his perspective as both a moral philosopher and a close observer of recent AI breakthroughs, he highlights shifts that could alter the pace and nature of AGI progress.
Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/TIz9TpVCFcQ
See the full article from this episode: https://danfaggella.com/ord1
...
About The Trajectory:
AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life?
From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.
Ask questions of our speakers in our live Philosophy Circle calls:
https://bit.ly/PhilosophyCircle
Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai