GPT-4 capabilities – what can the new multimodal model do?

Eva Black


You may be wondering what GPT-4, OpenAI’s latest language model, can do. If that’s the case, look no further, we’re discussing GPT-4 capabilities.

GPT-4 is the latest AI release from OpenAI. It's the much-anticipated update to the famous chatbot ChatGPT (well, to GPT-3.5, if you want to get really technical). ChatGPT was well known as an extraordinary AI that could simulate human writing. It's capable of writing poems, lyrics, blog posts, and even code.

But now the update has arrived, and however impressive (or eerie, depending on your personal response to AI) you found OpenAI's last offering, this one goes considerably further.

Let’s take a look.


GPT-4 capabilities – what can the new multimodal model do?

In short, a lot more than the previous iteration. OpenAI summarises the changes as improvements in creativity, nuance, and reliability. What does that mean GPT-4 can actually do, though?

1. Accurately pass a variety of exams

GPT-4 is vastly more accurate than ChatGPT. This is never more obvious than in the difference in marks across the multitude of exams OpenAI put it through. In AP Macroeconomics, for example, the lower-bound percentile jumped from the 33rd–48th (GPT-3.5) to the 84th–100th (GPT-4). More impressive still was the whopping boost on the Uniform Bar Exam, from roughly the 10th percentile for GPT-3.5 to roughly the 90th for GPT-4.

2. Multi-language processing

GPT-4 also now has multi-language capabilities. OpenAI claims that on the MMLU benchmark, GPT-4 outperformed the English-language results of other LLMs in 24 of the 26 languages tested. These even include lower-resource languages such as Welsh. 'Lower-resource' means there was less data for the AI to draw on and learn from.

3. Longer context input

Users will be excited to know that the context input has now increased to 25,000 words. This will vastly improve the user experience and allow for greater text comprehension and generation. For example, much longer documents can be analysed and summarised than previously.
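As an illustration, here is a minimal sketch of how a long-document summarisation request might be assembled. It assumes OpenAI's Python client as it looked around the GPT-4 launch and a "gpt-4" model name; the live API call is left commented out so the snippet runs without an API key.

```python
# Sketch: summarising a long document using GPT-4's larger context window.
# Assumes the openai Python package (pre-1.0 style) and a "gpt-4" model name;
# the live call is commented out so this runs without an API key.

long_document = "Lorem ipsum dolor sit amet. " * 1000  # stand-in for a very long document

messages = [
    {"role": "system", "content": "You are a concise summariser."},
    {"role": "user", "content": f"Summarise the following document:\n\n{long_document}"},
]

# import openai
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# summary = response["choices"][0]["message"]["content"]

print(len(messages))  # 2
```

The point of the sketch is that the whole document now fits into a single user message, rather than being split across several requests and stitched back together.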

4. Image recognition

A major leap in GPT-4's capabilities is the inclusion of image recognition, which vastly increases the model's potential uses. In OpenAI's demo livestream, Greg Brockman used GPT-4 to produce a website from nothing more than a picture of a few notes on website layout he'd scribbled down and an HTML prompt.

Another example of the new visual input in the livestream was a Discord bot capable of interpreting images, which GPT-4 programmed and troubleshot with minimal human assistance and guidance. This is particularly exciting given OpenAI's confirmed collaboration with Be My Eyes, an app designed to help visually impaired people navigate daily situations.

However, as yet, the visual input feature is not available for use and is still in 'preview'. Its inclusion in the demo was a self-described 'sneak peek'.

5. ‘Steerability’

GPT-4 also gives users much more control over the final product. OpenAI calls this feature 'steerability'. In practice, it allows much more tailored content production for users and API developers, though OpenAI warns users to keep their instructions 'within bounds'.
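In the API, steering is done through the 'system' message, which sets the model's persona and style before any user input arrives. A minimal sketch, again assuming OpenAI's pre-1.0 Python client and a "gpt-4" model name, with the live call commented out:

```python
# Sketch: 'steering' GPT-4's persona via the system message.
# Client usage and model name are assumptions based on OpenAI's chat API
# around the GPT-4 launch; the live call is commented out.

system_prompt = (
    "You are a Shakespearean pirate. Stay in character and answer "
    "every question in rhyming couplets."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "How do I reverse a list in Python?"},
]

# import openai
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# print(response["choices"][0]["message"]["content"])
```

Whatever the user asks afterwards, the system message keeps steering the tone and format of the reply, which is what makes tailored, on-brand output possible.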

In conclusion, GPT-4 has lots of exciting features and capabilities. Users have already started testing the possibilities, with one combining GPT-4 and Midjourney to create a time-travel novel. As more users get to know the software and the API becomes widely available, the range of content GPT-4 is used for will only grow.

Frequently Asked Questions

Is ChatGPT-4 available?

Yes, GPT-4 is available to ChatGPT Plus subscribers; however, there is still a waiting list for API access.

Is GPT-4 available in ChatGPT?

While GPT-4 is built on the existing ChatGPT architecture, it is only available via a paid ChatGPT Plus subscription.