
#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

80,000 Hours Podcast

NOTE

Language Training and Model Bias

Training a model primarily on data in a specific language is constrained by how much data exists in that language, which makes gathering sufficient data a challenge. Models trained on internet data consistently exhibit a bias toward a left-libertarian position, since that reflects the predominant content on the internet. Chinese-language training faces particular difficulty in producing unbiased models because of limited success in collecting enough data.

