A hacker has created his own version of ChatGPT, but with a malicious bent: Meet WormGPT, a chatbot designed to assist cybercriminals.
WormGPT’s developer is selling access to the program in a popular hacking forum, according to email security provider SlashNext, which tried the chatbot. “We see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes,” the company said in a blog post.
WormGPT (Credit: Hacking forum)
The hacker appears to have first introduced the chatbot in March before launching it last month. Unlike ChatGPT or Google’s Bard, WormGPT has no guardrails to stop it from responding to malicious requests.
“This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future,” the program’s developer wrote. “Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”
WormGPT’s developer has also uploaded screenshots showing the bot producing malware written in Python and offering tips on crafting malicious attacks. To create the chatbot, the developer says they used GPT-J, an older but open-source large language model released in 2021. The model was then trained on data concerning malware creation, resulting in WormGPT.
WormGPT interface (Credit: Hacking forum)
When SlashNext tried out WormGPT, the company tested whether the bot could write a convincing email for a business email compromise (BEC) scheme—a type of phishing attack.
“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” SlashNext said.
Indeed, the bot crafted a message in professional language urging the intended victim to wire money. WormGPT also wrote the email without the spelling or grammar mistakes that often give away a phishing attempt.
“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” SlashNext said. “This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.”
Fortunately, WormGPT isn’t cheap. The developer is selling access to the bot for 60 euros per month or 550 euros per year. One buyer has also complained that the program is “not worth any dime,” citing weak performance. Still, WormGPT is an ominous sign of how generative AI programs could fuel cybercrime, especially as the technology matures.