
Fighting AI With AI

Strengthening Resilience Against Fraud

Artificial intelligence (“AI”), as a general-purpose technology, offers a wide range of opportunities across various fields. According to the IBM Global AI Adoption Index 2023, 42% of organisations surveyed have AI actively in use in their businesses and an additional 40% are currently exploring or experimenting with AI but have not deployed their models. Amid the promising capabilities of AI, organisations face a looming threat: the misuse of AI by fraudsters. The ease of access and availability of sophisticated AI models on an open-source basis exacerbates this threat.


Among the perils of AI in the hands of fraudsters, cyber attacks are clearly the weapon of choice. Fraudsters can use AI to increase both the scale and the sophistication of their attacks. Take deepfakes, for example: with AI, fraudsters can generate deceptive content faster and personalise it, making it far more convincing to the intended victims.

What about asset misappropriation, which remains the most common type of fraud experienced by organisations, according to the latest report from the Association of Certified Fraud Examiners, Occupational Fraud 2024: A Report to the Nations? Fraudsters can exploit AI’s capabilities to fabricate text, images and documents that deceive and circumvent traditional controls, making such fraud much harder to detect. Coupled with voice-cloning abilities, AI can even be used to impersonate a real person.

Imagine this: an employee receives a spoofed email from someone claiming to be the CFO, instructing him to make a payment to a third-party vendor. The message mimics the style, tone and language patterns of the CFO using AI. An alert employee may have some initial doubts about the email. But shortly after, the employee receives a call from someone who sounds identical to the CFO, following up on that email and seeking to expedite the payment. Under pressure, and with the phone call corroborating the email, the employee is far more likely to process the payment.

Fraudsters know that sometimes, they may need to have more than one trick up their sleeves to bait their victims and, with AI, it has become much easier for them to do so.


Organisations cannot stay idle. As fraudsters grow more sophisticated by employing AI-enabled tactics and techniques, organisations should at the very least keep pace, if not get ahead of them: that is, fight AI with AI.

Organisations can deploy predictive analytics, which uses machine learning (a type of AI), to support fraud detection and prevention. Predictive analytics processes large volumes of unstructured data to detect unusual transactions, behaviours, patterns or activities that look suspicious. Insights drawn from historical datasets can then be used to identify current and future fraud risks. Organisations may also leverage AI-powered tools to produce data-visualisation reports quickly, and these outputs can become the basis for informed business decisions.
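The idea of learning what "normal" looks like from historical data and flagging deviations can be sketched in a few lines. The snippet below is a deliberately minimal illustration: it uses a simple statistical outlier test (z-score on payment amounts) as a stand-in for the trained machine-learning models the article describes, and the transaction amounts are invented for the example.

```python
import statistics

def flag_unusual_transactions(amounts, threshold=3.0):
    """Flag transactions whose amount deviates sharply from the historical norm.

    A z-score test stands in here for a real predictive-analytics model,
    which would be trained on many features (counterparty, timing,
    behaviour patterns), not just amounts.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [
        (i, amt) for i, amt in enumerate(amounts)
        if stdev and abs(amt - mean) / stdev > threshold
    ]

# Hypothetical historical payment amounts, with one outlier.
history = [120, 135, 110, 128, 140, 125, 9800, 130, 118, 122]
print(flag_unusual_transactions(history, threshold=2.0))  # → [(6, 9800)]
```

In practice the threshold is a tuning decision: set it too low and legitimate payments are flagged constantly; too high and genuine fraud slips through, which is why these systems need ongoing calibration.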

Consider again impersonation using deepfake technology. As organisations adapt to the ever-changing business landscape, they must form relationships with new business partners, suppliers or vendors.

How can organisations sieve out suspicious parties who are not what they appear to be? Corporate intelligence or business due diligence becomes an integral part of background screening to identify any potential red flags that may jeopardise business transactions.

AI can strengthen corporate intelligence capabilities, allowing organisations to quickly collect and integrate data from internal and external sources and automatically produce a preliminary analysis of the target entity or individual.


To get the most value out of AI in fraud detection and prevention, organisations need to equip employees with the skillsets to work alongside these tools and amplify their strengths.

Organisations need AI practitioners to implement and deploy the appropriate AI systems, models and algorithms. The quality of the analysis these systems generate depends on the competency of those practitioners. A poorly calibrated AI system can create a negative feedback loop in which misclassifications go uncorrected, degrading the precision of the results.
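The link between calibration and precision can be made concrete with the standard precision metric: the fraction of flagged transactions that turn out to be genuinely fraudulent. The counts below are invented purely for illustration.

```python
def precision(true_positives, false_positives):
    """Fraction of flagged transactions that are actually fraudulent."""
    flagged = true_positives + false_positives
    return true_positives / flagged if flagged else 0.0

# Hypothetical review outcomes for two versions of the same system:
# a well-calibrated model flags few legitimate payments...
print(precision(45, 5))    # → 0.9
# ...while a poorly calibrated one buries the same 45 real cases
# under hundreds of false alarms, eroding trust in its alerts.
print(precision(45, 255))  # → 0.15
```

A system whose alerts are mostly false alarms trains reviewers to ignore it, which is exactly the negative loop the article warns about.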

While AI machines can be trained to identify irregular patterns and trends, they currently do not possess emotional intelligence like a human does. AI tools will not be able to explain why a fraudster is trying to commit fraud using a certain scheme. Human reviewers are still required to interpret the results and explain the behaviour of the fraudster, which can be an important input in improving existing fraud detection and preventive measures.


Widespread adoption of AI is likely to happen rapidly. Organisations need to be reminded that AI is a double-edged sword; if in the wrong hands, this technology can provide fraudsters the tools to accelerate their fraud schemes.

To win the battle against fraudsters, organisations need to stay abreast of the latest fraud trends and scams, consistently update and educate their employees about AI-assisted fraud schemes, and consider adopting AI-powered tools that complement traditional methods to keep fraudsters at bay.

Wallace Lee is Associate Director specialising in forensic advisory at Grant Thornton. This article was first published on the Grant Thornton website. Reproduced with permission.
