Systems like ChatGPT are trained through a kind of autocomplete: long stretches of text from the web are fed to the system, and it is asked to guess the next word. OpenAI added further training by having workers label potentially toxic material and rate full responses. This approach has allowed ChatGPT to generate complex, coherent responses without engineers explicitly programming grammar rules or training it on specific tasks. Yet those engineers can't fully explain how ChatGPT produces its responses, much as researchers can't explain individual moves made by AlphaGo. The language ChatGPT generates emerges from patterns of strong and weak connections in its neural network.
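The "guess the next word" idea can be illustrated with a toy word-counting model. This is only a sketch of the general concept; ChatGPT itself uses a large neural network, not frequency counts like the hypothetical example below:

```python
# A toy illustration (not OpenAI's method) of "autocomplete" training:
# count which word follows which in a corpus, then predict the most
# common next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Tally next-word frequencies for each word (a bigram model).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Real systems replace these simple counts with billions of learned neural-network weights, which is part of why their outputs are so hard to explain.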
AI has the potential to impact our society in dramatic ways, but researchers can’t explain precisely how it works or how it might evolve. Will they ever understand it?
This is the first episode of our new two-part series, The Black Box.
For more, go to http://vox.com/unexplainable
It’s a great place to view show transcripts and read more about the topics on our show.
Also, email us! unexplainable@vox.com
We read every email.
Support Unexplainable by making a financial contribution to Vox! bit.ly/givepodcasts
Learn more about your ad choices. Visit podcastchoices.com/adchoices