What does ‘GPT’ stand for in ChatGPT & what does it mean?

Amaar Chowdhury


To cut to the chase – ‘GPT’ stands for generative pre-trained transformer. But that doesn’t really mean much to the ordinary punter – so what does it mean?

ChatGPT is OpenAI’s now-viral chatbot. It generates conversational responses based on the prompts you enter, building those responses on the extensive training the underlying GPT model has undergone.

It’s important to note that ChatGPT is not the same as the GPT-4 model; the two terms are not interchangeable. ChatGPT is a chatbot interface, while GPT-4 is the language model that powers it, along with many other products – including Microsoft’s soon-to-release Copilot.

So if ChatGPT is the chatbot powered by the GPT-4 language model, what does generative pre-trained transformer mean?


What does generative pre-trained transformer mean?

GPT, or generative pre-trained transformer, is a name reserved for OpenAI’s family of large language models (LLMs). LLMs underpin every chatbot you’ve seen so far, including Google Bard, but only OpenAI has developed GPT.

The ‘pre-trained’ part refers to the extensive training a GPT model goes through before release. During this process, it is fed a huge amount of text, usually drawn from literature, the internet and much more.

During training, it learns to predict what comes next in a string of words. Thanks to this, it can then generate large amounts of text, even when given only a small input.
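To make that concrete, here is a toy Python sketch of next-word prediction. It is nothing like OpenAI’s actual training pipeline – the corpus, the simple bigram-counting approach and the generated sentence are all made up for illustration – but it shows the core loop: learn what tends to follow what, then generate one predicted word at a time.

```python
from collections import Counter, defaultdict

# Toy "training data" – a real model is trained on vastly more text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "Generate" from a one-word prompt by repeatedly predicting the next word.
text = ["the"]
for _ in range(5):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # prints: the cat sat on the cat
```

A real GPT model does the same job with a neural network over billions of parameters rather than a lookup table, which is what lets it continue text it has never seen before.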

The word ‘transformer’ actually refers to an AI architecture developed by Google researchers in 2017. The Transformer reduced the number of possibilities a model considers before making a choice.

Rather than working through exhaustive steps to compare, say, each word’s position within a sentence, the Transformer performs only a small, constant number of steps, each chosen empirically.

For example, it learns to pay attention to the word ‘river’ in any sentence containing the word ‘bank’, to help it determine whether that ‘bank’ refers to a riverbank or a financial institution.
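Here is a minimal Python sketch of that attention idea. The sentence, the tiny four-dimensional word vectors and the deliberate similarity between ‘river’ and ‘bank’ are all handpicked assumptions for illustration – real models learn their own embeddings with thousands of dimensions.

```python
import numpy as np

words = ["the", "boat", "drifted", "to", "the", "river", "bank"]

# Hypothetical embeddings: small random vectors for most words, with
# "river" and "bank" made similar on purpose so the effect is visible.
rng = np.random.default_rng(0)
embeddings = {w: 0.1 * rng.normal(size=4) for w in dict.fromkeys(words)}
embeddings["bank"] = np.array([1.0, 0.2, 0.0, 0.1])
embeddings["river"] = np.array([0.9, 0.3, 0.1, 0.0])

vectors = np.stack([embeddings[w] for w in words])

# Score "bank" (the query) against every word with dot products, then
# softmax the scores into attention weights that sum to 1.
query = embeddings["bank"]
scores = vectors @ query
weights = np.exp(scores) / np.exp(scores).sum()

for word, weight in zip(words, weights):
    print(f"{word:8s} {weight:.2f}")  # "river" gets one of the largest weights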

OpenAI made use of this technology in its GPT models.

Does Google Bard use GPT?

No. Despite creating the Transformer technology used in GPT models, Google uses its own large language model, called LaMDA.

Is GPT-3 the same as GPT-4?

No. Both are built on similar technology and trained in similar ways, but GPT-4 is the latest iteration and is far more powerful than GPT-3.