Language models perform at chance levels in deception tasks
Most models perform at chance levels, behaving as though they flip a coin between deceptive and non-deceptive responses; these results are not statistically significant. On more complex deception tasks, state-of-the-art models mistake them for simpler ones, revealing a lack of understanding once complexity increases. Taking a contrarian view, some argue that large language models are merely next-word predictors, parroting the deceptive patterns of advertising and peer pressure. On this view, the models solve familiar problems but fail to go beyond them. What is your response?