For the longest time, the only real competition to ChatGPT was Google Bard. A few hitches here and there meant the contest always swung in favour of OpenAI’s chatbot, though the recent Gemini update has seriously changed things. Google Ultra 1.0 vs GPT-4 is the comparison on everyone’s mind as we pit two of the most powerful, industry-leading LLMs against each other.
Comparing Google Ultra 1.0 vs GPT-4
When it comes to comparing LLMs, the exercise is a lot more technical and data-driven than comparing a front-end product like ChatGPT or Google Gemini. Essentially, an LLM (large language model) is the artificial intelligence brain behind the chatbot. There are a few things worth considering when comparing the two: API availability, context length, reasoning, speed, and answering ability. There are also other things to look out for when evaluating an AI’s capabilities, including ghosting, steerability, and imposed limitations.
Google Ultra 1.0 and GPT-4 cost almost exactly the same
It costs $19.99/month to access Gemini Advanced, the package through which Google Ultra 1.0 is primarily available, while you’ll pay a whole cent more for the $20/month ChatGPT Plus subscription.
There’s no API for Google Ultra 1.0 yet
While ChatGPT features a thriving and extraordinarily useful API that allows developers to integrate a range of OpenAI LLMs into their own products, Google Ultra 1.0 is not available through an API just yet.
Instead, the slightly weaker model, Gemini Pro 1.0, is available through either a free-of-charge plan or a soon-to-arrive pay-as-you-go plan in Google AI Studio.
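If you want to try Gemini Pro 1.0 from code today, a minimal sketch looks something like the following, assuming Google AI Studio’s `google-generativeai` Python SDK and an API key in the `GOOGLE_API_KEY` environment variable; the `build_prompt` helper is hypothetical, and the SDK’s model names and call shape may change:

```python
import os

def build_prompt(question: str) -> str:
    # Trivial prompt wrapper, purely for illustration (hypothetical helper).
    return f"Answer concisely: {question}"

# The network call only runs when a key is configured, so the sketch
# stays harmless to copy-paste.
if os.environ.get("GOOGLE_API_KEY") and __name__ == "__main__":
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-pro")  # Pro 1.0 — no Ultra model yet
    response = model.generate_content(build_prompt("What is a context window?"))
    print(response.text)
```

Until Ultra 1.0 gets its own API access, `gemini-pro` is the strongest Google model you can call this way.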
Google Ultra 1.0 vs GPT-4 performance comparison
- General: Representation of questions in 57 subjects (incl. STEM, humanities, and others)
- Reasoning: Diverse set of challenging tasks requiring multi-step reasoning
- Reasoning: Reading comprehension (F1 score)
- Reasoning: Common-sense reasoning for everyday tasks
- Math: Basic arithmetic manipulations (incl. grade-school math problems)
- Math: Challenging math problems (incl. algebra, geometry, pre-calculus, and others)
- Code: Python code generation
- Code: Python code generation on a new held-out, HumanEval-like dataset not leaked on the web
- Image (multimodal): Multi-discipline college-level reasoning problems
- Image (multimodal): Natural image understanding
- Image (multimodal): OCR on natural images
- Image (multimodal): Document understanding
- Image (multimodal): Infographic understanding
- Image (multimodal): Mathematical reasoning in visual contexts
These performance benchmarks come courtesy of Google, and while it looks like Ultra 1.0 comes out on top of everything GPT-4 has to offer, you also have to consider who published the results.
That said, many independent reviews of Google Gemini corroborate these figures, so you can read the data with a little less apprehension.
Google Ultra 1.0 context length is the same as GPT-4’s
Both Google Ultra 1.0 and GPT-4 have a context length of 32K, which translates to 32,000 tokens of input. Each ‘token’ equates to somewhere between 0.5 and 1 words, so you can estimate that the maximum text entry in a single window for either LLM is somewhere between 16,000 and 32,000 words.
However, context length doesn’t really affect performance so much as usability. While GPT-4 offers a context length of 32K, the GPT-4 Turbo model offers a 128K context length, which sits somewhere between still-not-enough and unlimited power!
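The token-to-word arithmetic above can be sketched as a tiny helper, assuming the article’s rough heuristic of 0.5–1 words per token (a simplification; a real tokenizer such as OpenAI’s `tiktoken` would give exact counts):

```python
def word_capacity(context_tokens: int) -> tuple[int, int]:
    """Return the (low, high) estimated word capacity of a context window,
    using the rough heuristic of 0.5-1 words per token."""
    return (context_tokens // 2, context_tokens)

# The 32K window shared by Ultra 1.0 and GPT-4:
low, high = word_capacity(32_000)            # -> (16000, 32000)

# GPT-4 Turbo's larger 128K window:
turbo_low, turbo_high = word_capacity(128_000)  # -> (64000, 128000)
```

So a 128K window roughly quadruples how much text you can paste in at once, even though it says nothing about how well the model reasons over it.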
Steerability and imposed limitations
GPT-4, commercial LLMs, and AI as a whole have all been criticised for imposed limitations that restrict what AI-generated text or images can contain. This is largely done to protect people from harm, whether that’s by censoring text which would otherwise be offensive, preventing the generation of inappropriate images, or blocking the creation of code which might be used illegally.
Some of the limitations imposed on the GPT-4 LLM have resulted in so-called jailbreaks, which you can use to get ChatGPT to speak in a less formal manner, among other changes. Luckily, though, these jailbreaks never really sidestep the important limitations imposed on the LLM.
Being able to control or guide an AI’s actions falls under the umbrella term of steerability. GPT-4 is an incredibly adaptable LLM with plenty of steerability. Its image-input capability, for example, is a facet that Gemini Ultra 1.0 also shares.
Gemini Ultra 1.0 has just as much ghosting as GPT-4
The question of whether or not we will ever see an artificial general intelligence depends on the future of AI development, and until then we’re going to be teased by ghosting with each new AI that comes out. Ethan Mollick, a popular AI commentator, characterises ghosting as the ‘illusion of a person on the other end of the line, even though there is nobody there.’
In his Gemini tasting notes, Mollick noted a distinct feeling that when an LLM gets large enough, as Ultra 1.0 and GPT-4 have, there are signs of ghosting: essentially a tease of an AGI powerful enough to be indistinguishable from human communication.
At the moment, though, you will encounter just as much idiocy from LLMs as you will ghosting. There are many moments where the model will slip up, drop the act, and let you see behind the curtain. Moments like this remind you that AI, for now, is just thousands of lines of code, servers, and human engineering. That’s a relief to many.
Frequently Asked Questions
Is Gemini Ultra 1.0 better than GPT-4?
At the moment, all the signs point towards Gemini Ultra 1.0 being better than GPT-4 in terms of reasoning, deduction, math, and coding, at least according to Google’s published benchmarks.
Is Gemini Ultra 1.0 faster than GPT-4?
One of the main issues faced by GPT-4 is that as it scales up and becomes available to more users, the speed drops drastically. Gemini Ultra 1.0 is a fresh LLM and is yet to see major hits to its speed.