The rules aim to make technologists aware of the races they create before those races run away from them. If you don't coordinate, the race ends in tragedy. Consider the people on an ethics team or integrity team trying to slow down the harms coming out of a company: if your competitor is doing the thing that you think is unsafe, well, it doesn't matter. You have to compete as a company. So safety people and integrity people are almost always structurally set up to fail, because they're working inside one company rather than coordinating across many companies.
In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we’re taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.
Correction: Aza mentions infinite scroll being in the pockets of 5 billion people, implying that there are 5 billion smartphone users worldwide. The number of smartphone users worldwide is actually 6.8 billion now.
RECOMMENDED MEDIA
We Think in 3D. Social Media Should, Too
Tristan Harris writes about a simple visual experiment that demonstrates the power of one’s point of view
Let’s Think About Slowing Down AI
Katja Grace’s piece about how to avert doom by not building the doom machine
If We Don’t Master AI, It Will Master Us
In this New York Times opinion piece, Yuval Harari, Tristan Harris, and Aza Raskin call on world leaders to respond to this moment at the level of the challenge it presents
RECOMMENDED YUA EPISODES
The AI Dilemma
Synthetic Humanity: AI & What's At Stake
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_