
Sam Altman & Reid Hoffman | AI Field Notes
Greymatter
How to Align a Superintelligence With Human Interests and Human Values
I think we have made real progress relative to what people thought this was going to look like. We do not know, and probably aren't even close to knowing, how to align a superintelligence. But thinking that the alignment problem is now solved would be a very great mistake indeed. I am hopeful that we're going to make better and better tools that are going to help us come up with better and better alignment ideas.