This is an interview with Roman V. Yampolskiy, a computer scientist at the University of Louisville and a leading voice in AI safety.
Everyone has heard Roman's p(doom) arguments, but that isn't the focus of our interview. Instead, we talk about Roman's "untestability" hypothesis and the possibility that there may be untold, human-incomprehensible powers already latent in current LLMs. He discusses how such powers might emerge, and when and how a "treacherous turn" might happen.
This is the third episode in our new “Early Experience of AGI” series - where we explore the early impacts of AGI on our work and personal lives.
Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/Jycmc_yIkU0
See the full article from this episode: https://danfaggella.com/yampolskiy1
...
About The Trajectory:
AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life?
From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.
Ask questions of our speakers in our live Philosophy Circle calls:
https://bit.ly/PhilosophyCircle
Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai