How to Reduce Existential Risk From AI
I think a lot of the action will happen at "crunch time" — and to be clear, I think crunchiness increases gradually, rather than "okay, now it's April 17th, it's crunch time." I think these other fields reduce the marginal value of AI safety and governance people, so my claim is that it's pretty important to be focused specifically on the existential stuff, or at least the extreme-risk stuff. There are certainly more than a hundred people working on problems around fairness, algorithmic bias, and privacy, which strike me as real issues. It's not just the fairness and bias stuff; it connects very much to some of those theories of victory or intermediate