How to Model Lying Correctly With Weak AGI Systems
Can't weak AGI systems help model lying? Why is it such a giant leap that's totally non-interpretable for weak systems? Can't weak systems at scale, trained on human knowledge, see whatever mechanism is required to achieve AGI? And couldn't a slightly weaker version of that system, given enough compute time and simulation, find all the ways this critical point, this critical try, can go wrong, and model that correctly? Or no? I'm probably not doing a great job of explaining.