The urgent risks of runaway AI -- and what to do about them | Gary Marcus
Sep 29, 2023
AI researcher Gary Marcus discusses the urgent risks of untrustworthy AI technology and advocates for a global nonprofit organization to regulate it. He highlights the dangers of misinformation machines, biases in AI systems, and the need for reliable and ethical AI development. The podcast also includes a Q&A with TED's head, Chris Anderson.
The urgent need to reevaluate and regulate AI development before untrustworthy misinformation machines become integrated into our lives.
The importance of reconciling symbolic AI and neural networks to build trustworthy, reliable AI systems, and the need for global governance to address AI risks.
Deep dives
The Dangers of Rushing AI without Audit
The podcast discusses the risks of rushing the development of artificial intelligence without proper evaluation and auditing. Dr. Timnit Gebru's experience at Google illustrates how little attention the potential biases and discriminatory impacts of AI receive. This urgency has prompted the formation of ethics committees at smaller companies and universities, yet a gap remains in understanding AI's impact on users.
The Need for Global AI Governance
The podcast emphasizes the need for global AI governance. The United Nations has established a global committee to address the ethical development and use of AI, but as the technology advances, new challenges arise in avoiding mishaps and dangers. The podcast raises the question of how to govern AI effectively and mitigate its risks.
Combining Symbolic AI and Neural Networks
The podcast discusses why combining symbolic AI and neural networks matters for building reliable, truthful AI systems: symbolic AI is strong at representing facts and reasoning, while neural networks excel at learning. Reconciling these two approaches is crucial to developing trustworthy AI, alongside new technical approaches and a system of global governance to tackle AI risks.
Will truth and reason survive the evolution of artificial intelligence? AI researcher Gary Marcus says no, not if untrustworthy technology continues to be integrated into our lives at such dangerously high speeds. He advocates for an urgent reevaluation of whether we're building reliable systems (or misinformation machines), explores the failures of today's AI and calls for a global, nonprofit organization to regulate the tech for the sake of democracy and our collective future. (Followed by a Q&A with head of TED Chris Anderson)