I want to play devil's advocate here, because I think we just saw the same kind of criticism with social media. No one really seemed to care when you could suddenly go and see the Twitter source code. So if the same thing happened with A.I., if we could see how these large language models operate, and it turned out that the answer is really boring, that it's just making statistical predictions, that there's no hidden hand inside the model steering it in one direction or the other, if it's just very dense and technical and not all that interesting, do you think that would still be a worthwhile exercise? We can know a lot more than we currently do. And I don't think
The New York Times Opinion columnist Ezra Klein has spent years talking to artificial intelligence researchers. Many of them feel the prospect of A.I. discovery is too sweet to ignore, regardless of the technology’s risks.
Today, Mr. Klein discusses the profound changes that an A.I.-powered world will create, how current business models are failing to meet the A.I. moment, and the steps government can take to achieve a positive A.I. future.
Also, radical acceptance of your phone addiction may just help your phone addiction.
On today’s episode:
Additional reading: