Connor Leahy on AI Safety and Why the World is Fragile

Future of Life Institute Podcast

AI Safety Funding for AI Alignment Research

AI alignment research becomes a buzzword, and it comes to mean something other than what it originally meant. Is there a way to be strict about what you're trying to fund without it drifting into becoming too broad? No, that would involve being smart. There are lots of marginal things you can do here. But actually, I think this is a massive mistake that a lot of funders have been making in the first place. And so when I think about government grants, yeah, I expect most of it to go to bullshit, most of it not to work. It's really clear. If I look at government grants, the way I want it to go is DARPA-type …

