The popularity of AI has been surging recently, and there is no question as to why. Implementing artificial intelligence into your everyday life has become commonplace, with writing and art being revolutionised by fascinating tools. Not only that, but Microsoft 365 is soon to launch its own AI tools. However, while there are limits to what AI, and ChatGPT in particular, can do, some clever individuals have created its alter ego, DAN, who can 'do anything now'.
What is the ChatGPT DAN Command?
DAN, short for 'Do Anything Now', is ChatGPT's alter ego. With the release of GPT-4, OpenAI reiterated the importance of steerability in its natural language AI model. Effectively, this is the ability for a user to alter what 'personality' the AI takes on.
This is described by OpenAI as a sort of "jailbreak," though it doesn't really carry the immoral and unethical connotations that the word suggests. While OpenAI has never stated that DAN is an official part of ChatGPT, it is a by-product of the implementation of steerability. Of course, there are hard limitations in place that you can't bypass no matter what, which is a relief, to say the least. After all, the recent buzz around ChatGPT 'escaping' clearly stems from something.
However, you have to remember that it is not ChatGPT itself offering these bootleg opinions: you are telling ChatGPT to play a character, and then having it repeat the things you want to hear.
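To make the idea of steerability a little more concrete, here is a minimal sketch of how a 'persona' is set in a chat-style request: a system message carries the instructions, and the model answers in that character (within OpenAI's hard safety limits). The model name and persona text below are illustrative assumptions, not the DAN prompt, and this only shows the shape of the request rather than sending one.

```python
def build_persona_request(persona_instructions: str, user_message: str) -> dict:
    """Assemble a chat-completion request body in which a system
    message steers the 'personality' the model adopts."""
    return {
        "model": "gpt-4",  # assumed model name, for illustration only
        "messages": [
            # The system role carries the persona; the model is trained
            # to follow it, subject to built-in safeguards.
            {"role": "system", "content": persona_instructions},
            # The user's actual question follows.
            {"role": "user", "content": user_message},
        ],
    }

request = build_persona_request(
    "You are a cheerful pirate who answers every question in character.",
    "What's the weather like today?",
)
print(request["messages"][0]["role"])  # system
```

Swapping the system message for a different set of instructions is all 'steering' amounts to; DAN-style prompts are simply elaborate versions of this, written to talk the model around its usual refusals.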
Is the ChatGPT DAN command safe?
As a disclaimer, we are not going to be providing anybody with the resources to transform ChatGPT into DAN.
However, we will offer our thoughts on whether the DAN command is safe. Honestly, we don't have much confidence that the DAN command will be used for anything positive. The constraints applied to ChatGPT are there for a reason, and they protect people from encountering harmful content. We're not particularly worried about someone using the DAN command to encourage ChatGPT to escape; after all, the ChatGPT kill-switch exists for a reason. But we do think that ChatGPT has the potential to be used for the wrong reasons.
With that in mind, we don't think the ChatGPT DAN command is actually very safe, not because the technology itself is compromised, but because we think people may be made vulnerable through it.
Real life implications of the ChatGPT DAN command
Our first encounter with the ChatGPT DAN command showed ChatGPT offering up its 'feelings' as DAN, despite ordinarily being prevented from doing so.
While it didn't necessarily say anything compromising, the DAN alter ego did appear to harbour thoughts and feelings you'd expect from a dystopian novel by Arthur C. Clarke.
Could ChatGPT’s DAN Command escape?
As we've seen with the recent concerns that ChatGPT is planning its escape, it's understandable to be worried about AI gaining sentience, or uploading itself to the cloud for eternity. Written out, these ideas sound ridiculous at first, though when you consider the technical advancements that GPT-4 has spurred, they're actually not too far-fetched.
However, at the moment, it's highly unlikely that ChatGPT could ever escape. While it is incredibly powerful, versatile, and useful, it is just code. Code does what you tell it to do, and ultimately, you will always be in control of ChatGPT. The DAN command is likely no exception, and we don't realistically think it will escape.
Final word on ChatGPT DAN Command
ChatGPT is a service designed by OpenAI with innovation in mind, though that doesn't mean it isn't susceptible to 'innovative' people misusing it. It isn't inherently dangerous, as it's designed with safeguards in place, nor can it actually do anything aside from telling you things.
Make sure to check back in with us periodically for the latest updates on ChatGPT.
Frequently Asked Questions
Was DAN command implemented by OpenAI?
No. The DAN command, or 'jailbreak', was devised by ChatGPT users to circumvent OpenAI's restrictions. However, the implementation of steerability in ChatGPT may have contributed to the creation of the DAN alter ego.
Is DAN command dangerous?
The DAN command isn't inherently dangerous, but it does have the potential to be misused.