Prof. Murray Shanahan - Machines Don't Think Like Us

Machine Learning Street Talk (MLST)

NOTE

Why language models are unreliable simulators of human responses

Language models can act as simulators whose responses merely appear consistent. In the 20 Questions game, for example, the model never actually commits to a chosen object: each answer only needs to be consistent with the answers given so far, and resampling can change them. If the conversation is rewound and the same question is asked again, the model may answer differently, revealing that there is no fixed simulated object behind its responses.
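A minimal sketch of the "rewind and resample" point, assuming a hypothetical `llm_complete(prompt)` function that samples one answer from a language model at nonzero temperature (it is stubbed out with a random choice here so the sketch stays self-contained; it is not a real API call). Two rollouts from the identical opening prompt can yield mutually inconsistent answer sequences, because no object is ever fixed.

```python
import random

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for sampling a reply from an LLM (temperature > 0).
    # Stubbed with a coin flip so the sketch runs without any model or API.
    return random.choice(["yes", "no"])

def play_20_questions(questions: list[str], seed: int) -> list[str]:
    """Ask the same questions from the same starting prompt.

    The model is never asked to commit to an object up front; each answer
    only needs to be consistent with the transcript generated so far.
    """
    random.seed(seed)  # a different seed models resampling after a "rewind"
    transcript = "Let's play 20 Questions. Think of an object.\n"
    answers = []
    for q in questions:
        transcript += f"Q: {q}\nA: "
        answer = llm_complete(transcript)
        transcript += answer + "\n"
        answers.append(answer)
    return answers

questions = ["Is it alive?", "Is it bigger than a breadbox?", "Is it man-made?"]
print(play_20_questions(questions, seed=1))  # one rollout of answers
print(play_20_questions(questions, seed=2))  # rewind + resample: answers can differ
```

Running the two calls typically prints different answer lists, which is the observable signature of a simulator with no fixed commitment behind its answers.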
