The Impossibility of Safely Aligning AI
We don't know how to align an AGI, or even AI in general. We need a set of tools; nobody knows exactly which, but most likely you would need to explain those systems, predict their behavior, and verify the code they are writing if they are self-improving. So far we haven't found tools which are perfect, scale well, and will not create problems; each one has maybe a tiny one percent chance of messing up. I don't think it's possible to get to 100% safety, and people go, "Well, it's obvious, of course there is no software which is bug-free." So basically that's very common knowledge, but for superintelligence you need it to be 100% safe.
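The one-percent figure is the speaker's illustrative number, but it makes the compounding point concrete: if each tool (or each use of a tool) independently succeeds with probability 0.99, the chance that everything works falls off quickly with scale. A minimal sketch, assuming independent failures:

```python
# Illustrative only: per-tool success of 0.99 is the speaker's example figure,
# and independence between failures is an assumption of this sketch.
def overall_safety(per_tool_success: float, n_tools: int) -> float:
    """Probability that all n independent tools/uses succeed."""
    return per_tool_success ** n_tools

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        # With a 1% per-tool failure rate, overall reliability decays
        # exponentially in the number of tools or invocations.
        print(f"{n:5d} tools -> {overall_safety(0.99, n):.4f}")
```

Even at 100 independent components the overall success probability drops below 40%, which is the speaker's point: "tiny" per-component error rates do not add up to anything close to 100% safety at scale.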