Netenrich security researcher Rakesh Krishnan has discovered FraudGPT, an AI tool gaining notoriety on the dark web for enabling criminal activity online.
The fears around the dangers of generative AI have now come true. Rakesh Krishnan, a researcher at Netenrich Security, has discovered an AI tool circulating on the dark web and being promoted on Telegram channels. The tool, named FraudGPT, operates without any limitations or censorship, and with unrestricted access to the latest information on the internet.
Imagine an AI-powered chatbot with all the power and intelligence of ChatGPT, but without any boundaries or oversight of what it is being used for. That is the threat posed by this dark web tool, built by an actor who goes by the online alias ‘CanadianKingPin’.
"This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.,"
Rakesh Krishnan, a Netenrich Security Researcher
While much of the world still regards ChatGPT as truly non-threatening to humanity, this dark web tool has already gained over 3,000 subscribers, according to a screenshot that has been making the rounds on the internet. It is currently priced at $200 per month, with annual subscriptions reportedly ranging from $1,000 to $1,700.
This AI tool can be used for a variety of criminal activities, as well as tasks that fall into the gray area between right and wrong. It can be used to create undetectable malware, find backdoors into inadequately secured websites and applications, and discover, or perhaps even cause, leaks of information or transactions. The New York Times has claimed that WormGPT, a similar tool, is ‘secretly entering emails’ and ‘hacking bank database’.
The creator of the platform said that these tools are intended to give maximum freedom to the people accessing artificial intelligence. But with the freedom these tools allow, the intentions of bad actors never come to light either. Who will take responsibility for the actions malicious hackers carry out through the power of these AI chatbots?
The large language model that FraudGPT is built on is still unknown, and appears unfamiliar even to many white-hat ethical hackers. This is neither the first nor the last chatbot built for dark purposes: another platform, WormGPT, came to light earlier and is used for similar ends, likewise with no boundaries or ethical limitations.
The real threat posed by these AI platforms is how poorly their capabilities are understood. They may only be capable of crude, petty hacks now, but as these tools develop and learn on their own, they could quickly become a threat to the safety of us all.