I want to play devil's advocate here, because I think we saw the same kind of criticism with social media. No one really seemed to care that you could now go and see the Twitter source code. So suppose the same thing happened with A.I.: suppose it turned out that the answer to how these large language models operate is really boring, that they are just making statistical predictions, with no hidden hand inside the model steering in one direction or the other. If it's just very dense and technical and not all that interesting, do you think that would still be a worthwhile exercise? We can know a lot more than we currently do.
The New York Times Opinion columnist Ezra Klein has spent years talking to artificial intelligence researchers. Many of them feel the prospect of A.I. discovery is too sweet to ignore, regardless of the technology’s risks.
Today, Mr. Klein discusses the profound changes that an A.I.-powered world will create, how current business models are failing to meet the A.I. moment, and the steps government can take to achieve a positive A.I. future.
Also, radical acceptance of your phone addiction may just help you manage your phone addiction.
Ezra Klein outlined the dramatic shifts that A.I. will enable.
In a 2022 survey of A.I. researchers, nearly half of the respondents said that there was a 10 percent or greater chance that the long-run effect of advanced A.I. on humanity would be “extremely bad.” This year, an A.I. researcher argued that natural selection favors A.I. over humans.
A 2017 article in The New Yorker said that, for some, the risks of artificial intelligence are outweighed by the prospect of discovery.
Meghan O’Gieblyn’s book “God, Human, Animal, Machine” explores the human experience in the age of artificial intelligence.