This is an edited version of our livestream Q&A sessions with my guest David Wood of @LondonFuturists, held on July 18 and 19, 2024. Thank you, David, for joining me as special guest on this live-show!
You can also watch the edited version on my YouTube channel here: https://www.youtube.com/watch?v=yYyTIky2MLc — or the full recording here: https://www.youtube.com/watch?v=W3dRQ7QZ_wc
In this special livestream event, I outlined my argument: while IA (Intelligent Assistance) and some forms of narrow AI may well be quite beneficial to humanity, building AGIs, i.e. 'generally intelligent digital entities' (as set forth by Sam Altman / #openai and others), represents an existential risk that, imho, should not be undertaken or self-governed by private enterprises, multinational corporations, or venture-capital-funded startups.
So: IA/AI yes, but with clear rules, standards, and guardrails. AGI: no, unless we're all on the same page.
I explain why I believe we need an AGI Non-Proliferation Agreement, what the difference is between IA/AI and AGI or ASI (artificial superintelligence), why it matters, and how we could go about it.