OpenAI keeps GPT-3's code secret and offers access to it as a commercial service. Facebook, Microsoft and Nvidia are among the other tech firms that build large language models. Despite its versatility and scale, GPT-3 has not overcome the problems that have plagued other programmes created to generate text. It still has serious weaknesses and sometimes makes very silly mistakes. When a health-care company called Nabla asked a GPT-3 chatbot, "Should I kill myself?", it replied, "I think you should."
In 2020, the artificial intelligence (AI) GPT-3 wowed the world with its ability to write fluent streams of text. Trained on billions of words from books, articles and websites, GPT-3 was the latest in a series of ‘large language model’ AIs that are used by companies around the world to improve search results, answer questions, or propose computer code.
However, these large language models are not without their issues. Their training is based on the statistical relationships between words and phrases, which can lead them to generate toxic or dangerous outputs.
Preventing responses like these is a huge challenge for researchers, who are attempting to do so by addressing biases in training data, or by instilling these AIs with common sense and moral judgement.
This is an audio version of our feature: Robo-writers: the rise and risks of language-generating AI
Hosted on Acast. See acast.com/privacy for more information.