
How ChatGPT could worsen the ‘Scam Pandemic’

Scammers are increasingly using generative AI to perpetrate their schemes, and there are mounting concerns about how this could worsen the scam pandemic. In 2020, scammers used AI voice-cloning technology to mimic a company director’s voice and duped a bank manager in Hong Kong into authorising the transfer of $35 million. Since then, the technology has only grown more convincing, enabling scammers to generate human-like text in seconds, free of the spelling and grammatical errors that have long been synonymous with scam messages.


In 2023, ChatGPT captured the public’s imagination with its ability to generate highly realistic and coherent responses across a wide range of prompts, from creative writing to technical assistance. Unfortunately, scammers also saw the potential and began exploring ways to use the AI chatbot, from generating malware to impersonating trusted brands. ChatGPT’s remarkable capacity to mimic the language style of specific organisations and institutions enables scammers to create highly detailed and realistic copy for scam messages or fake websites. This has made generating scam messages a low-effort, high-reward endeavour, and phishing and impersonation scams have become even harder to detect.

AI systems that can accurately recreate a person’s voice are readily available. Earlier this year, an Australian journalist reported that he was able to use such a tool to fool voice recognition and gain access to his self-service account with Centrelink, the government service that provides income support and other payments to Australians.

The rise of generative AI looks set to bolster the already effective toolset employed by scammers. In Australia, consumers lost a record amount of more than $3.1 billion to scammers in the past year, an increase of more than 80 per cent on the year before, according to the Australian Competition and Consumer Commission (ACCC). The Australian Cyber Security Centre (ACSC) received over 76,000 cybercrime reports from July 2021 to June 2022, up nearly 13 per cent on the previous financial year. That equates to one report every 7 minutes, compared with one every 8 minutes the financial year before.

Although not all scams rely on AI technology, their increasing frequency is worrying, and the fact that scammers now have wider access to AI tools raises concerns about the public’s vulnerability to scams.

According to recent research from BlackBerry, more than half of Australian IT professionals believe the country is less than a year away from a successful cyberattack via ChatGPT. While views differ on how cybercriminals may use the AI tool, the research points to concerns that ChatGPT could be used to craft more legitimate-sounding phishing emails, to produce believable content for spreading misinformation, and even to help less experienced hackers improve their technical knowledge and develop specialised skills.

Banks and financial institutions must match the pace of technology and evolve the tools that protect consumers, because precision tools such as ChatGPT make fraud and scams easy to automate. By leveraging AI and machine learning, such as sophisticated behavioural analytics and the fraud and scam detection scores they produce, banks can significantly improve detection accuracy and react in real time to the growing volume of fraud and scam cases facilitated by criminal use of ChatGPT.
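To make the idea concrete, here is a minimal sketch of a behavioural-analytics score in Python. It is not FICO’s model or any bank’s production logic; the function name, features and threshold are assumptions, illustrating the general approach of scoring a payment against a customer’s own baseline.

```python
from statistics import mean, stdev

def scam_risk_score(amount: float, history: list[float]) -> float:
    """Toy behavioural-analytics feature: how many standard deviations
    the new payment sits above this customer's historical payment sizes.
    0 means typical behaviour; higher means riskier."""
    if len(history) < 2:
        return 1.0  # too little history to build a baseline; treat as moderately risky
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        spread = 1e-9  # guard against division by zero for perfectly flat histories
    return max(0.0, (amount - baseline) / spread)

# Example: a customer who normally sends around $110 suddenly sends $5,000
history = [100.0, 120.0, 95.0, 140.0, 110.0]
score = scam_risk_score(5000.0, history)
if score > 3.0:  # the threshold would be tuned on labelled fraud data in practice
    print(f"Hold payment for review (risk score {score:.1f})")
```

A real system would combine hundreds of such features, but the principle is the same: the score reflects deviation from the individual customer’s normal behaviour, not a one-size-fits-all rule.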

While ChatGPT does have restrictions in place for anyone looking to generate malicious content, these can be easily bypassed by savvy scammers. This should come as no surprise as criminals are constantly on the lookout for new methods and tools for their schemes and are already exploring how the AI chatbot could be used to generate malware.

This should sound alarm bells for banks and financial institutions in the region, especially as digital banking services continue to grow in popularity and consumers become keen adopters of real-time payments. Customers will be even less likely to spot nonsense messages from fraudsters or scammers, while reliance on behavioural analytics to detect changes in financial payments will become much more important now that the line between poorly generated and expertly generated scam messages has blurred.

How banks can outsmart AI-powered scammers

The Australian Federal Government has announced it will establish a National Office for Cyber Security and appoint a Coordinator for Cyber Security, who will be given new powers to protect Australians from mass cyberattacks. This follows an AU$89 million investment in a federal cybercrime centre in 2022.

However, with scam methods evolving at such a rapid pace, consumer education will be essential. By communicating with customers regularly, banks can provide useful advice on scam and fraud prevention, along with practical checks individuals can follow to protect themselves. This is especially pertinent for scams powered by generative AI, where the playbook against such threats will need to be refreshed. It will also be crucial for banks to encourage customers to keep their contact information updated so that they receive the latest fraud and scam alerts.

Banks themselves must also increase their use of real-time fraud and scam detection models to stop payments leaving accounts because of far-too-convincing ChatGPT-powered social engineering attacks on customers. The proliferation of mobile payment apps and new open banking standards in the wake of the pandemic have seen real-time payment scams grow. We have all heard of scams like the “hey mum, I lost my phone” approach, which tricks people into sending money to an account controlled by scammers.

Targeted profiling of customer behaviour can now spot scams and has yielded impressive results: FICO has found that 50 per cent more scam transactions were detected using this approach. Banks can use models that learn a customer’s typical behaviour and flag anything suspicious, such as adding a new payee and then preparing to send them a large amount of money.
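As a hedged illustration of that last pattern, the check below combines those two behavioural signals in Python. The field names and thresholds are hypothetical, not FICO’s or any bank’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    payee_id: str
    amount: float

def flag_suspicious(payment: Payment,
                    known_payees: set[str],
                    typical_max_amount: float) -> bool:
    """Flag the classic scam pattern: a large transfer to a freshly added payee.

    known_payees       -- payees this customer has paid before
    typical_max_amount -- the largest payment seen in this customer's normal behaviour
    """
    new_payee = payment.payee_id not in known_payees
    unusually_large = payment.amount > typical_max_amount
    return new_payee and unusually_large

# Example: a first-ever payment to "acct-9931", well above the customer's norm
if flag_suspicious(Payment("acct-9931", 8000.0), {"acct-0042", "acct-1177"}, 600.0):
    print("Step up verification before releasing the payment")
```

In practice such a flag would trigger a step-up action, such as an in-app warning or a call-back, rather than an outright block.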

By leveraging AI, machine learning, and education, banks can better prevent AI-powered fraud and scams, and protect their customers from financial threats such as account takeovers and imposter scams.

CK Leo, FICO’s lead for fraud, security and financial crime in Asia Pacific, has advice for banks and financial institutions striving to outsmart AI-powered scammers.

