ChatGPT community worried after credit card ‘bug’ & plugin ‘hack’

Amaar Chowdhury


OpenAI recently launched GPT-4, and amid the ensuing buzz the company also released its latest feature: ChatGPT plugins. At the same time, however, the service has run into a few security issues. First, an ethical ‘hacker’ revealed a list of unreleased plugins, exposing a potential flaw, and a separate ‘bug’ may have exposed users’ credit card details. Understandably, the ChatGPT community is concerned that these vulnerabilities might open the service up to future attacks.

After Twitter hacker @rez0__ revealed a list of roughly 80 alleged unreleased plugins, OpenAI were incredibly quick to patch the HTTP proxy exploit. However, users of Hacker News, the technology forum run by Y Combinator (which, coincidentally, OpenAI CEO Sam Altman formerly led), commented on the security exploit:

“It is possible to use these unreleased plugins by setting up match-and-replace rules through an HTTP proxy. There are only client-side checks to validate that you have permission to use the plugins and they can be bypassed.

“There’s no way I’m going to accept the intersection of “we take security very seriously” and implementing security checks purely client side. This and the recent title information leak are both canaries for how the rest of Open AI operates.”
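For readers wondering what a ‘match-and-replace rule’ looks like in practice, below is a minimal, generic sketch of the technique written as a mitmproxy addon. The endpoint URL and the “enabled” JSON field are hypothetical placeholders rather than OpenAI’s actual API; the point is simply that anything enforced only in the browser can be rewritten in transit.

```python
# Generic illustration of a proxy match-and-replace rule, written as a
# mitmproxy addon. The hostname and the "enabled" flag are hypothetical
# placeholders, not OpenAI's real API.
from mitmproxy import http


class UnlockClientSideFlag:
    def response(self, flow: http.HTTPFlow) -> None:
        # Only touch responses from a (hypothetical) plugin-listing endpoint.
        if "example-api.invalid/plugins" in flow.request.pretty_url:
            # Flip a client-side "enabled" flag in the JSON the browser receives.
            # If the server never re-checks permission, the UI will now show
            # (and happily call) plugins the account was never granted.
            flow.response.text = flow.response.text.replace(
                '"enabled": false', '"enabled": true'
            )


addons = [UnlockClientSideFlag()]
```

Loaded with `mitmdump -s addon.py`, a rule like this rewrites the server’s response before the web app ever sees it, which is why permission checks that live only in the client can always be bypassed and need to be enforced on the server as well.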

Commenter koolba drew a response from another concerned community member, danShumway, who argued that OpenAI have not paid enough attention to risks that aren’t easily romanticised and don’t ‘play well to the press (like clientside validation).’


The risks posed by this security exploit have been raised at a difficult time for ChatGPT, which has recently come under scrutiny for a security ‘bug’ that exposed users’ chat histories and, according to OpenAI, partial credit card details. This was caveated by a statement that “full credit card numbers were not exposed at any time,” which is reassuring, at the very least.

In fact, it seems that many of the concerns about ChatGPT’s security centre on the web app itself rather than the AI. Another commenter, berkle4455, stated their concern that “there’s going to be a massive leak of users and their respective GPT usage history soon, mark my words.” We have our fingers crossed that this is not the case, though the general mood of the ChatGPT community right now is clearly one of caution, fear, and concern.