The Danger of Alignment in Recursive Learning
Decentralization means the model weights are open source and I can fine-tune them to whatever I like. Do you believe his statement that alignment is only a one-shot chance? Like, if we're misaligned with this thing when this happens, do you think that's the correct take, that we're kind of screwed? Because it's already gone and there's no way to stop that thing. Once it's out, it's like the horse has gotten past the fence and it's just running wild; there's no way to catch it. Is it likely we're even going to ever create something