The Opposition to a Long-Term Safety Agenda for AI
The risks don't feel long-term in the sense of far away to me, necessarily. The longer-term problem that I most focus on is that we don't have good ways to ensure that AI systems are actually trying to do what we intend them to do.

There was this now-famous open letter calling for a six-month pause on the development of the biggest language models. Is that something you think would help? And what are some concrete policy steps that could help avert some of these risks?