Value lock-in is a concept that is well explored in AI alignment and effective altruism. The goal is to build systems where the polity can continuously give feedback into an ever-evolving consciousness, instead of one AGI that has learned the moral code and then proceeds from there. I would be interested in systems that are trained to fetch new information to understand how the ground is shifting over time. Most humans have some values they hope we perpetually agree on, such as "do not kill" and "do not cause harm," but then again, what does harm mean? We are already living in a world where we were exploring these kinds of questions before AI systems existed, so I think it's quite dangerous to force value lock-in.
