The researchers suggest that sycophancy stems largely from instruction fine-tuning that overly trains LLMs to obey user intent. The next category of trustworthiness they profile is fairness, which covers injustice, stereotype bias, preference bias, and disparate performance. "I think the deeper we get into this now, you can see the harder it is to have a unifying theory of alignment," he says.
New research attempts to put together a complete taxonomy of trustworthiness in LLMs. Before that on the Brief: the FEC is considering new election rules around deepfakes. Also on the Brief: self-driving cars approved in San Francisco; an author finds fake books under her name on Amazon; and Anthropic releases a new model.
Today's Sponsor:
Supermanage - AI for 1-on-1's - https://supermanage.ai/breakdown
ABOUT THE AI BREAKDOWN
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/