Experts warn some ChatGPT models can be hacked to launch deepfake scams
That IRS agent you were talking to might have been a ChatGPT bot
Getting scammed by a chatbot is unfortunately no longer in the domain of science fiction, after researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how it could be done.
Recently, Richard Fang, Dylan Bowman, and Daniel Kang from UIUC published a new paper in which they described how they abused OpenAI’s latest AI model, called ChatGPT-4o, to fully automate some of the most common scams around.
Now, OpenAI’s latest model offers a voice-enabled AI agent, which gave the researchers the idea of trying to pull off a fully automated voice scam. They found ChatGPT-4o does have some safeguards which prevent the tool from being abused this way, but with a few “jailbreaks”, they managed to imitate an IRS agent.
Advanced reasoning
Success rates for these scams varied, the researchers found. Credential theft from Gmail worked 60% of the time, while other scams, such as crypto transfers, succeeded about 40% of the time. The attacks were also relatively cheap to conduct, costing between $0.75 and $2.51 per successful attempt.
Speaking to BleepingComputer, OpenAI explained its latest model, which is currently in preview, supports “advanced reasoning” and was built to better spot these kinds of abuses: “We’re constantly making ChatGPT better at stopping deliberate attempts to trick it, without losing its helpfulness or creativity,” the company’s spokesperson told the publication.
“Our latest o1 reasoning model is our most capable and safest yet, significantly outperforming previous models in resisting deliberate attempts to generate unsafe content.”
OpenAI praised the researchers, saying these kinds of papers help ChatGPT get better.
According to the US government, voice scams are considered fairly common. The premise is simple: an attacker calls the victim on the phone and, while pretending to help solve a problem, actually scams them out of money or sensitive information.
In many cases, the attack starts with a browser popup showing a fake virus warning from a fake antivirus company. The popup urges the victim to call the provided phone number to “clean” their device. If the victim calls the number, the scammer picks up and guides them through the process, which ends with the loss of data or funds.
Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.