
Ep. 129: Applying the 'security mindset' to AI and x-risk | Jeffrey Ladish
FUTURATI PODCAST
The Future of Agents
I think I lean towards thinking, yes, you might get weird, emergent agency from the incentive of predicting the next token. But that just doesn't actually have much practical implication or risk, because of this huge economic incentive to build agents. What I would then expect to actually happen is for people to just rush ahead with these experiments and try to figure out how to train models to act in more agentic ways. You sort of start with something like Toolformer, right? We're just like, okay, how do we get the language model to use tools? Tools seem like a really natural extension.