
"The Waluigi Effect (mega-post)" by Cleo Nardo
LessWrong (Curated & Popular)
Flattery-and-Dialogue Prompts Work Better Than Direct Queries
An LLM which perfectly models the internet will nonetheless return these commonly stated incorrect answers. If you ask GPT-∞ "what's brown and sticky?", then it will reply "a stick", even though a stick isn't actually sticky. Note that you will always get errors on the Q&A benchmarks when using LLMs with direct queries.
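
The mechanism is that a pure internet-text predictor ranks continuations by how often they appear online, not by whether they are true. Here is a toy Python sketch of that behaviour, where the corpus counts and the gpt_infinity function are purely illustrative stand-ins, not a real model or API:

from collections import Counter

# Hypothetical counts of continuations of "What's brown and sticky?"
# found in the training corpus (numbers made up for illustration).
corpus_continuations = Counter({
    "A stick.": 9000,              # the joke answer dominates the internet
    "Tree sap, for example.": 40,  # literally correct, but rarely written
})

def gpt_infinity(prompt: str) -> str:
    # A perfect internet-predictor returns the most probable continuation,
    # i.e. the most commonly stated answer -- whether or not it is true.
    # (The prompt is ignored in this toy; a real model would condition on it.)
    return corpus_continuations.most_common(1)[0][0]

print(gpt_infinity("Q: What's brown and sticky?\nA:"))  # -> "A stick."

The same logic applies to a real LLM queried directly: the completion reflects how often an answer is stated online, which is why direct queries keep surfacing these common errors on Q&A benchmarks.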


