FraudGPT: AI for the Bad Guys
by John Jenkins
March 24, 2025
A recent PYMNTS.com article on how AI is reshaping risk assessment is a reminder that the bad guys have generative AI tools of their own, which makes the challenge of protecting against fraudsters even more daunting:
“Trust is the very foundation of commerce,” said Rajat Taneja, president of technology at Visa. “When two unknown parties are transacting, they have to trust that the transaction will occur correctly, the money will be transferred properly, any dispute will be managed, and there’s someone handling fraud and scams.”
Taneja said the first use case for AI in financial services was in risk management, and it remains a “critical tool to combat fraud, scams and enumeration attacks.” (Enumeration attacks systematically test different credentials, such as usernames and passwords, to gain unauthorized access.)
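The article doesn’t get into the mechanics, but as a rough illustration, here’s a minimal Python sketch of the kind of rate-based check defenders use to flag enumeration attacks: an attacker cycling through credentials produces an unusual burst of failed logins from a single source. The thresholds and the record_login_attempt helper are hypothetical, invented for this example rather than drawn from the article or any real fraud system.

```python
from collections import defaultdict, deque
import time

# Hypothetical thresholds: flag a source that racks up more than
# MAX_FAILURES failed logins within a WINDOW_SECONDS sliding window.
WINDOW_SECONDS = 60
MAX_FAILURES = 10

# Maps each source IP to timestamps of its recent failed attempts.
failures = defaultdict(deque)

def record_login_attempt(source_ip: str, success: bool) -> bool:
    """Record one login attempt; return True if the source now looks
    like an enumeration attack."""
    if success:
        return False
    now = time.time()
    window = failures[source_ip]
    window.append(now)
    # Drop failures that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES
```

Real defenses layer in far more signal (device fingerprints, known credential-stuffing lists, machine-learned risk scores), but the burst-of-failures pattern is the core tell this sketch captures.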
The challenge is that criminals now can also use AI to attack. “We have ChatGPT, they have ‘FraudGPT,’” Taneja said. “It’s a constant battle.” FraudGPT is like the evil twin of ChatGPT: a malicious generative AI tool designed specifically for cybercriminal activities like writing phishing emails, developing undetectable malware, and building hacking tools. Subscription fees start at $200 per month and go up to $1,700 per year.
According to the article, the good news is that while the bad guys have their own AI tools, AI is also supercharging the ability of companies to defend themselves against fraud.
– John Jenkins