By now, you have probably heard of OpenAI’s ChatGPT, or one of its alternatives: GPT-3, GPT-4, Microsoft’s Bing Chat, Facebook’s LLaMa or even Google’s Bard. They are artificial intelligence programs that can participate in a conversation. Impressively smart, they can easily be mistaken for humans, and they are skilled at a variety of tasks, from writing a dissertation to building a website.
How can a computer hold such a conversation?
I get that this is a simplified explanation, but I want to add that this part can be misleading. The model doesn’t contain the original documents, and it doesn’t have internet access to look them up (browsing can be added as an extra feature, but even then it serves more as a source to show to humans than as something the model learns from on the fly). The actual word associations are all learned during training, and during inference the model just uses its stored weights. One implication of this is that the model doesn’t know about anything that happened after its training data was collected.
I wonder what an ELI5 version of ‘stored weights’ would be in this context.
Not quite ELI5 but I’ll try “basic understanding of calculus” level.
The GPT model learns complex relationships between words (or tokens, to be more specific, explained below) as probability scores ranging from 0 to 1. In very broad terms, you could think of these as the likelihood of one word appearing next to another in the massive amounts of text the model was trained with: the words “apple” and “pie” are often found together, so they might have a high-ish probability of 0.7, while the words “apple” and “chair” might have a lower score of just 0.2. Recent GPT models consist of several billion of these scores, known as the weights. Once their values have been established by feeding lots of text through the model’s training process, they are all that’s needed to generate more text.
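To make that picture concrete, here is a toy sketch of the simplified “word association” idea above. This is not how a real GPT stores its weights (those are parameters of a neural network, not a lookup table of word pairs); the scores are just the made-up numbers from the example.

```python
# Toy illustration of the simplified "word association" picture above.
# Real GPT weights are NOT a lookup table of word pairs; the numbers here
# are just the invented examples from the text.
association_scores = {
    ("apple", "pie"): 0.7,    # words often seen together -> higher score
    ("apple", "chair"): 0.2,  # rarely seen together -> lower score
}

def score(word_a: str, word_b: str) -> float:
    """Return the toy association score for a pair of words (0.0 if unknown)."""
    return association_scores.get((word_a, word_b), 0.0)

print(score("apple", "pie"))    # 0.7
print(score("apple", "chair"))  # 0.2
```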
When feeding some input text into a GPT model, it is first chopped up into tokens that are each given a number: for example, the OpenAI tokenizer translates “Hello world!” into the numbers [15496, 995, 0]. You can think of it as the A=1, B=2, C=3… cipher we all learnt as kids, but with numbers also assigned to common words, syllables and punctuation. These numbers are then inserted into a massive system of multivariable equations, where they are multiplied with the model’s billions of weights in a specific way. This results in a probability score for every token the model knows, and one of the highest-scoring tokens is chosen semi-randomly as the model’s output. This cycle is then repeated over and over to generate text, one token at a time.
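The sketch below shows the two steps described above, assuming the `tiktoken` library (OpenAI’s open-source tokenizer) is installed: turning text into token numbers, then picking the next token semi-randomly. The candidate tokens and their probabilities are invented for illustration; a real model would compute them from its billions of weights.

```python
# Sketch of tokenization plus semi-random next-token selection.
import random
import tiktoken

enc = tiktoken.get_encoding("gpt2")   # tokenizer used by GPT-2/GPT-3-era models
tokens = enc.encode("Hello world!")
print(tokens)                         # [15496, 995, 0]

# Pretend the model scored a handful of candidate next tokens
# (these values are made up for the example):
candidates = [" How", " It", " Today"]
probabilities = [0.5, 0.3, 0.2]

# Semi-random choice: higher-scoring tokens are picked more often, but not
# always -- this is why the output can differ between runs.
next_token = random.choices(candidates, weights=probabilities, k=1)[0]
print("Hello world!" + next_token)
```

Repeating that last step, appending each chosen token to the input and scoring again, is the loop that generates text one token at a time.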
How closely related words and their attributes are to other words.