In case you’ve been living under a rock, ChatGPT is the first AI chatbot of its kind to be made widely available to the public. Created by OpenAI, it has absolutely soared in popularity, passing well above 13 million daily visitors.
OpenAI has now released GPT-4, and it introduces a number of exciting new features, though users should expect GPT-4 to represent more of a step in a different direction than a straightforward evolution of the older model, GPT-3. As OpenAI itself explains, ‘in a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle’. However, this doesn’t mean the differences aren’t highly significant.
GPT-3.5 was released as OpenAI’s API model, allowing companies to categorize large amounts of data without building custom rule sets. Now that GPT-4 is on the way, we’re going to take you through some of its key new features and how it differs from GPT-3.5.
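To make that categorization use case concrete, here is a minimal sketch of labelling free-text records with GPT-3.5 through OpenAI’s Chat Completions REST endpoint. The category list and prompt wording are illustrative assumptions, not something OpenAI prescribes, and an `OPENAI_API_KEY` environment variable is assumed to be set.

```python
# Sketch: categorizing free-text records with GPT-3.5 via OpenAI's
# Chat Completions REST endpoint (standard library only).
# The CATEGORIES list and prompt wording are illustrative assumptions.
import json
import os
import urllib.request

CATEGORIES = ["billing", "technical issue", "feedback", "other"]

def build_prompt(text: str) -> str:
    """List the allowed labels and ask the model to pick exactly one."""
    labels = ", ".join(CATEGORIES)
    return (
        "Classify the following customer message into exactly one of "
        f"these categories: {labels}. Reply with the category name only.\n\n"
        f"Message: {text}"
    )

def categorize(text: str) -> str:
    """Send one classification request to the gpt-3.5-turbo model."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": build_prompt(text)}],
        "temperature": 0,  # deterministic output suits labelling tasks
    }
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(categorize("I was charged twice for my subscription this month."))
```

Because the prompt constrains the model to a fixed label set, the same loop can be pointed at thousands of records with no hand-written rules, which is exactly the kind of bulk categorization the API model was pitched at.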
So let’s get stuck in – here’s everything we have on GPT-3.5 vs GPT-4.
Probably one of the most exciting additions to GPT-4 is multimodality: the ability to respond to user queries with music, video and images, rather than text alone.
When asked about what the near-future might look like for AI, OpenAI CEO Sam Altman said in a podcast interview from late 2022:
“I think we’ll get multimodal models in not that much longer, and that’ll open up new things.”
This is a significant upgrade that could open up a whole host of possibilities for how ChatGPT can answer our queries. It basically means that the AI can look at YouTube clips and listen to audio.
This marks an exciting leap from GPT-3.5 and previous models, which could only answer queries through text.
With this new feature, ChatGPT could summarize support calls in text after listening to the recordings, potentially saving thousands of hours of work. It’s worth adding that this feature will not always be reliable, but we can expect OpenAI to keep striving to improve the chatbot’s reliability.
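As a rough sketch of how such a call-summarization pipeline might look today, the snippet below assumes the recording has already been transcribed to text by a separate speech-to-text step, then summarizes it with the Chat Completions endpoint. The model name, prompt, and 3,000-character chunk size are illustrative assumptions, and an `OPENAI_API_KEY` environment variable is assumed.

```python
# Sketch of a support-call summarization step. Assumes the audio has
# already been transcribed to text; chunk size and prompt are
# illustrative assumptions, not a documented OpenAI recipe.
import json
import os
import urllib.request

def chunk_transcript(transcript: str, max_chars: int = 3000) -> list:
    """Split a long transcript into pieces that fit a single prompt."""
    return [transcript[i:i + max_chars]
            for i in range(0, len(transcript), max_chars)]

def summarize_chunk(chunk: str) -> str:
    """Ask the model for a short summary of one transcript chunk."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{
            "role": "user",
            "content": "Summarize this support-call excerpt in two "
                       "sentences:\n\n" + chunk,
        }],
    }
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"].strip()

def summarize_call(transcript: str) -> str:
    """Summarize each chunk and join the partial summaries."""
    return "\n".join(summarize_chunk(c) for c in chunk_transcript(transcript))
```

The chunking step matters because call transcripts routinely exceed a single prompt’s context window; summarizing per chunk and concatenating is a simple, if lossy, workaround.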
GPT-3.5 vs GPT-4 – model size
There were, predictably, a lot of rumors flying around the internet before GPT-4’s release, particularly about its model size.
The model size is one of the many factors that affect an AI bot’s language understanding and overall performance.
Read More: What is Visual ChatGPT?
Despite early speculation claiming that GPT-4 could boast billions more parameters than its predecessor, those claims turned out not to be true, as Sam Altman confirmed in an interview with StrictlyVC.
While GPT-3.5 has 175 billion parameters, GPT-4 is expected to draw its extra power from a denser, more efficient neural network rather than from a larger parameter count. In other words, more parameters do not always mean a better model.
Altman said that, like other AI companies such as DeepMind, OpenAI is no longer focused on making extremely large models. Instead, it is focused on getting the best out of smaller models by improving other aspects such as data, algorithms and alignment.
GPT-3.5 vs GPT-4 – alignment
Alignment, often called the alignment problem, is one of the hardest challenges in AI language models. Researchers are essentially trying to work out how to make language models follow our intentions and adhere to our values.
Read More: Is ChatGPT open source?
It’s both a philosophical and mathematical challenge, and one that OpenAI has put a lot of effort into. InstructGPT, a refined version of GPT-3, was rated more favorably by human judges (i.e. employees at OpenAI), which is potentially a step in the right direction.
So, we can say that GPT-4 has improved alignment, though it remains a contentious issue. Genuine alignment should reflect a whole cross-section of society, regardless of gender, language, ethnicity and so on, which is a very difficult and complex thing to genuinely achieve. In other words, GPT-4 does have improved alignment, although the improvement is potentially limited.
Frequently asked questions
Can GPT-4 generate computer code?
Yes. Like GPT-3.5 before it, GPT-4 can generate, explain and debug code in a wide range of programming languages, and early testing suggests it does so more reliably than its predecessor.
How many parameters does GPT-4 have?
GPT-4 does not have 100 trillion parameters, despite the rumors. In fact, GPT-4 may have fewer parameters than speculated, with OpenAI focusing on other areas to improve the chatbot’s performance.