The Second Option Sounds Great. Yeah, That's a Good Thing.
So I gather that you're pretty on board with the GiveWell-style cost-effectiveness estimates and with donating to places like GiveDirectly. But it's the longtermist area, particularly AI, and probably also other existential risks, that you're more skeptical of working on. And so there were some points I wanted to focus on more, because I think those ones are more core to how I think about things in general. From my perspective, the Bayesian epistemology and expected value calculus are also very much how I think about the global health and animal interventions. So if you guys change my mind about this, it wouldn't just affect how I think about the long term