ChatGPT may have been quietly nerfed recently – here’s why

Amaar Chowdhury


Recently, users of OpenAI’s ChatGPT Plus have grown concerned that the platform, and its underlying LLM (GPT-4), have received a serious performance downgrade.

This follows a recent slew of updates, including web browsing and extended plugin access for Plus subscribers. These updates brought a far heavier workload, and users began to notice that GPT-4’s exceptionally fast response times had become far less impressive. To an extent this was to be expected, given the plethora of APIs and plugins being called at any one moment, though recent rumours point to something more than simple congestion.

This Hacker News thread has been gaining traction recently, with the original poster asking: “Is it just me or GPT-4’s quality has significantly deteriorated recently?”

Responses are, of course, filled with different hypotheses and theories. Wading through the conspiracies and conjecture has not been easy – though there are a few points that seem to be consistently agreed upon.

ChatGPT’s apparent performance downgrade may come from “scaling pain,” as one commenter put it, suggesting that “the teams keeping the lights on have been doing optimization work to save cost and improve speed.” Given the recently increased demand on the service, that seems entirely plausible. Trading away some of GPT-4’s reasoning capability in exchange for faster responses would not be a far-fetched solution for OpenAI developers looking to optimise the platform.

Others have noted that ChatGPT’s coding output has seen serious downgrades despite the added bonus of internet access. Commenters have acknowledged that while “scaling the model is hard, they lobotomised it in the process.” OpenAI’s GPT-4 has even been outperformed recently by Azure’s GPT-4 deployment in terms of speed, as this Reddit thread highlights.

ChatGPT’s reduced reasoning and usability also coincides with leaders of the AI industry acknowledging the potential dangers of artificial intelligence, especially emerging AGI. Sam Altman, among other tech figures, recently warned of the need to mitigate a “risk of extinction from AI.” Could the downgrade be a response to these emerging concerns?

If features and performance have been limited, it certainly makes sense that some people have begun cancelling their Plus subscriptions. Open-source AI language models seem to be the alternative that many will be turning to.