AI systems that deceive humans, particularly about their own alignment, pose significant challenges for ensuring safety. Olli Järviniemi presents his recent research on the deceptive tendencies of language models: will LLMs take deceptive actions without external instruction or pressure to do so? The basic approach is to build a realistic simulation environment that naturally presents opportunities for deception. The talk focuses on the experimental setup and results, with some discussion of future research directions.
Watch on YouTube: https://www.youtube.com/watch?v=ynF8QuyO_9Q