This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other roles).
Over a year ago, when I asked Jaan Tallinn "who within the UN advisory group on AI has good ideas about AGI and governance?", he mentioned Yi immediately. Jaan was right.
See the full article from this episode: https://danfaggella.com/zeng1
Watch the full episode on YouTube: https://youtu.be/jNfnYUcBlmM
This episode refers to the following essays and resources:
-- AI Safety Connect - https://aisafetyconnect.com
-- Yi's profile on the Chinese Academy of Sciences - https://braincog.ai/~yizeng/
...
About The Trajectory:
AGI and man-machine merger are going to radically expand the process of life beyond humanity -- so how can we ensure a good trajectory for future life?
From Yoshua Bengio to Nick Bostrom, from Michael Levin to Peter Singer, we discuss how to positively influence the trajectory of posthuman life with the greatest minds in AI, biology, philosophy, and policy.
Ask questions of our speakers in our live Philosophy Circle calls:
https://bit.ly/PhilosophyCircle
Stay in touch:
-- Newsletter: bit.ly/TrajectoryTw
-- X: x.com/danfaggella
-- Blog: danfaggella.com/trajectory
-- YouTube: youtube.com/@trajectoryai