What’s new with GPT 4 – latest features and updates

Amaar Chowdhury


ChatGPT has received a major update, so let’s go over the new features GPT-4 brings.

GPT-4 is the next iteration of the GPT language model and is multi-modal, opening up the possibility of image, video, audio, and text integration in the same UI. The previous GPT and DALL-E models are standalone systems designed for singular purposes, though this next iteration of GPT looks to revolutionise everything.


New GPT-4 features

Now that the latest edition, GPT-4, has been released, here’s what OpenAI have to say about their multi-modal language model:

“We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.”
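“Multimodal” here means a single request can mix input types. As a rough illustration only (the model name, message format, and image handling below are assumptions based on OpenAI’s chat-style API, not confirmed details of GPT-4’s interface), a request combining text and an image might be assembled like this:

```python
import json

def build_multimodal_request(prompt: str, image_url: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completion payload that mixes text and an image reference.

    The content list carries one part per input type, so the same message
    can hold a text prompt alongside an image for the model to inspect.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Example: ask the model to describe a screenshot (hypothetical URL).
payload = build_multimodal_request(
    "Describe what is happening in this screenshot.",
    "https://example.com/screenshot.png",
)
print(json.dumps(payload, indent=2))
```

The point of the sketch is the shape of the payload: text-only GPT-3-era requests carried a single string, whereas a multimodal request carries a list of typed content parts.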

It seems their update is exactly as we predicted below.


According to OpenAI CEO Sam Altman, multi-modal features were already on the horizon for ChatGPT. In a podcast interview from late 2022, Altman discussed what the near future might look like for AI.

“I think we’ll get multimodal models in not that much longer, and that’ll open up new things.”

As mentioned before, OpenAI are responsible for both GPT and DALL-E, though these are text-input-only models. Developments in GPT-4 could see video, audio, and image input become the new meta for AI content generation. This is an interesting development, although it also opens up the possibility of GPT being used for ethically dubious purposes such as data collection. It will be interesting to see how OpenAI construct limitations for this new model, as AI is already gaining a somewhat notorious reputation for its potential circumvention of data privacy.

However, it’s important to reiterate that, at the time, multi-modal models were not officially in the works for ChatGPT, and this was only speculation.

Is GPT-4 more powerful than GPT-3 & 3.5?

Popular science and technology commentator Tansu Yegen recently posted the following predictions for GPT-4 on Twitter.

An image displaying multi-purpose uses of ChatGPT, though it is purely decorative.

These are obviously purely speculative, though the tweet was accompanied by the following text:

“The World Will Change NEXT WEEK… GPT-4 which is 500 Times More potent than the current #ChatGPT will be Released next week.

GPT-4 will be able to process multiple data types including videos, images, sounds, numbers, etc. By next week, you will be able to use artificial intelligence to write a movie script, use AI to generate actors for the movie, produce the film, and take it public without hiring real life actors. By next week, you will be able to write a fully illustrated 200-page book from scratch to finish in one day. The world will change next week via Shauib Barham.”

The tweet is less useful to us as evidence of what GPT-4 could feature, and more as a relevant social opinion of what the new language model is expected to be capable of. That said, the claimed potential for GPT-4 to have 500 times the processing capability of its predecessor would be nothing short of game-changing.

We’re going to be looking out for the latest updates on ChatGPT – so make sure to check back in with us periodically. In the meantime, you might be interested in reading about whether ChatGPT is open source, or how much the paid model costs.

Frequently Asked Questions

Is GPT-4 going to generate videos?

It’s not clear whether video generation will be a feature, though it seems GPT-4 will be able to take video as an input in the new update.

Is GPT-4 different from GPT-3?

We expect there to be major differences in functionality between the AI models, though the most notable difference will likely be the multi-modal features.