Even innocuous prompts can lead to toxic responses from GPT-3. This is an acute concern for large language models, because it suggests that marginalized groups could be misrepresented if these technologies become widespread in society. The trend is for language models to grow ever larger in pursuit of human-like fluency, but bigger is not always better.
In 2020, the artificial intelligence (AI) GPT-3 wowed the world with its ability to write fluent streams of text. Trained on billions of words from books, articles and websites, GPT-3 was the latest in a series of ‘large language model’ AIs that are used by companies around the world to improve search results, answer questions, or propose computer code.
However, these large language models are not without their issues. Because their training captures only the statistical relationships between words and phrases, they can generate toxic or dangerous outputs.
Preventing such responses is a major challenge for researchers, who are attempting to do so by addressing biases in training data, or by instilling these AIs with common sense and moral judgement.
This is an audio version of our feature: Robo-writers: the rise and risks of language-generating AI