Is GPT-3 Making the Same Mistakes That Human Beings Would Make on a Psychological Test?
Is it the case that, for example, a language model like GPT-3 will make some of the same mistakes that human beings would make on a psychological test? This is something that AI safety researchers have been thinking about: can we find tasks that AI systems perform worse on at larger scales? This seems like it could potentially be an interesting candidate to look into.