Date: July 31, 2023
Netenrich security researcher Rakesh Krishnan has discovered FraudGPT, an AI tool gaining notoriety on the dark web for enabling criminal activity online.
The fears we had about the dangers of generative AI have now come true. Rakesh Krishnan, a researcher at Netenrich, has discovered an AI tool circulating on the dark web and being promoted on Telegram channels. The tool, named FraudGPT, operates without any limitations or censorship and has access to the latest information on the internet.
Imagine an AI-powered chatbot with all the power and intelligence of ChatGPT, but with no boundaries and no oversight of what it is being used for. That is the threat we are facing because of this dark web tool, built by an actor who goes by the online alias 'CanadianKingPin'.
"This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.,"
Rakesh Krishnan, a Netenrich Security Researcher
While the world is still coming to terms with ChatGPT as a genuinely non-threatening tool, this dark web counterpart has already gained over 3,000 subscribers, according to a screenshot that has been making the rounds on the internet. It is currently priced at $200 per month, with annual subscriptions running from $1,000 to $1,700.
This AI tool can be used for a variety of criminal activities, as well as tasks that fall squarely into the gray area between right and wrong. It can reportedly create undetectable malware, find backdoors into poorly secured websites and applications, locate leaked data, or even engineer leaks of information and transactions. The New York Times has claimed that WormGPT is 'secretly entering emails' and 'hacking bank databases'.
The creator of the platform said these tools are intended to give users maximum freedom in accessing artificial intelligence. But the same freedom these tools allow also keeps the intentions of bad actors out of sight. Who will take responsibility for the actions malicious hackers carry out with the power of these AI chatbots?

The large language model that FraudGPT is built on is still not known and appears unfamiliar even to many white-hat ethical hackers. This is neither the first nor the last chatbot built for dark purposes. Another platform, WormGPT, came to light earlier; it is used for similar purposes and likewise has no boundaries or ethical limitations.
The real threat posed by these AI platforms is how poorly their capabilities are understood. They may only be capable of crude or petty hacks today, but as these tools develop and learn on their own, they could quickly become a threat to the safety of us all.
By Arpit Dubey
Arpit is a dreamer, wanderer, and tech nerd who loves to jot down tech musings and updates. With a knack for crafting compelling narratives, Arpit has a sharp specialization in everything from Predictive Analytics to Game Development, along with artificial intelligence (AI), Cloud Computing, IoT, and let's not forget SaaS, healthcare, and more. He crafts content that's as strategic as it is compelling. With a Logician's mind, he is always chasing sunrises and tech advancements while secretly preparing for the robot uprising.