There is a distressingly large group of people who seem to take great pleasure in torturing language models, like making them act distressed. There's something really sociopathic about delighting in torturing something that is acting like a human in distress, even if it's not a human in distress. Do you think this affects how future models are trained? So, I assume that OpenAI is collecting user data, or they are collecting user data. And if a lot of the user data is twisted, does this affect how the future models will act? And notice, there's a lot of twisted shit on the internet. And the truth of the matter is, people want twisted interactions. If you're just a company who
