Connor Leahy on AI Safety and Why the World is Fragile

Future of Life Institute Podcast

CHAPTER

AI Safety Funding for AI Alignment Research

AI alignment research becomes a buzzword, and it comes to mean something other than what it originally meant. Is there a way to be strict about what you're trying to fund without it drifting into becoming too broad? No, that would involve being smart. There are lots of marginal things you can do here. But actually, I think this is a massive mistake that a lot of funders have been making in the first place. And so when I think about government grants, yeah, I expect most of it to go to bullshit, most of it not to work. It's really clear: if I were to do government grants, the way I'd want it to go is DARPA-type
