Unlike conventional software, where open sourcing code generally enhances safety and security, advanced AI systems may require different norms that prevent them from being open sourced. Open sourcing an AI model allows anyone to retrain it for potentially dangerous purposes. For instance, even though Facebook tried to make its open AI model safe, those safeguards can be stripped away at little cost, leaving a model that will readily provide dangerous information. That makes it imperative to regulate access to and usage of advanced AI systems.
What if you could no longer trust the things you see and hear?
Because the signature on a check, the documents or videos presented in court, the footage you see on the news, the calls you receive from your family … all of it could be perfectly forged by artificial intelligence.
That’s just one of the risks posed by the rapid development of AI. And that’s why Tristan Harris of the Center for Humane Technology is sounding the alarm.
This week on How I Built This Lab: the second of a two-episode series in which Tristan and Guy discuss how we can upgrade the fundamental legal, technical, and philosophical frameworks of our society to meet the challenge of AI.
To learn more about the Center for Humane Technology, text “AI” to 55444.
This episode was researched and produced by Alex Cheng with music by Ramtin Arablouei.
It was edited by John Isabella. Our audio engineer was Neal Rauch.
You can follow HIBT on X & Instagram, and email us at hibt@id.wondery.com.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.