
"Why I think strong general AI is coming soon" by Porby
LessWrong (Curated & Popular)
The Importance of Coherence in AGI Architectures
Current architectures were built with approximately zero effort put towards aiming them in any particular direction that would matter in the limit. If one of these things actually scaled up to AGI capability, my expectation is that it would sample a barely bounded distribution of minds and would end up far more alien than an ascended jumping spider. A token predictor with extreme capability but no agenthood could be wrapped in an outer loop that turns the combined system into a dangerous agent. That outer loop could be as simple as humans using the predictor for ill-advised things. I can't say with confidence that mere token predictors won't have the ability to internally simulate agents soon.
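To make the "outer loop" idea concrete, here is a minimal sketch of the pattern: a goalless next-token predictor is repeatedly fed a goal plus its own action history, and a thin wrapper executes whatever it emits. All names here (`token_predictor`, `execute`, `agent_loop`) are hypothetical stand-ins for illustration, not any specific system's API.

```python
def token_predictor(prompt: str) -> str:
    """Stub for a capable but goalless sequence model: it only
    continues text; it has no objectives of its own."""
    return "noop"  # a real model would emit a proposed action string


def execute(action: str) -> str:
    """Stub for acting in some environment and returning an observation."""
    return f"result of {action}"


def agent_loop(goal: str, steps: int = 5) -> None:
    """The wrapper, not the predictor, supplies the goal and the
    feedback loop; agency emerges from the combined system."""
    history = f"Goal: {goal}\n"
    for _ in range(steps):
        action = token_predictor(history + "Next action:")
        observation = execute(action)
        history += f"Action: {action}\nObservation: {observation}\n"


if __name__ == "__main__":
    agent_loop("example goal")
```

Note that nothing in this loop requires the predictor itself to want anything; the danger in the excerpt's scenario comes from the combination of extreme predictive capability with whatever goal the wrapper (or the human running it) supplies.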