LessWrong (Curated & Popular)

'Simulators' by Janus

Is the Work on AI Alignment Relevant to GPT?

Some predictions necessary for AGI may be too difficult to be learned by even a scaled version of the current paradigm. As Gwern has observed, this will be a very different source of AGI than previously foretold. I have not seen enough evidence for either possibility not to be concerned that we have in our hands a well-defined protocol that could end in AGI, or a foundation that could spin up an AGI without too much finagling.
