The fraudulent side of Artificial Intelligence

Anusha Singh Thursday 04th September 2025 02:52 EDT
 
(Pictured: Aarti Samani)

Artificial Intelligence (AI) is often hailed as the engine of innovation, transforming industries, fuelling scientific discovery, and promising smarter ways of living. But while AI brings progress, it also opens the door to unprecedented risks.

One of the fastest-growing threats is AI-enabled fraud, where criminals weaponise the same tools designed for good to deceive, manipulate, and exploit. From cloned voices that sound exactly like loved ones to deepfake videos impersonating business leaders or community elders, scams are no longer crude attempts easily spotted by sceptical eyes. They are sophisticated, personalised, and frighteningly real.

What makes the rise in AI fraud particularly alarming is that it no longer requires technical expertise; anyone can now access cheap, user-friendly tools to create convincing fakes in minutes. This dual reality, with innovation on one side and exploitation on the other, makes AI both a powerful ally and a dangerous weapon.

The question is no longer whether AI will reshape our world, but how prepared we are to protect ourselves from its fraudulent misuse.

The rise of AI fraud

According to cybercrime experts, AI-driven fraud has grown at an unprecedented rate over the past two years, with deepfake scams, synthetic identity theft, and voice cloning at the forefront. Once the preserve of hackers with sophisticated technical knowledge, these tools are now widely available. Off-the-shelf applications can clone a person’s voice, face, or identity within minutes.

As Aarti Samani, Founder of Shreem Growth Partners, notes, “We are experiencing an unprecedented growth in AI Fraud. This escalation is because the technology is now cheap, fast, and widely accessible. It can produce and distribute very realistic artefacts at scale.”

This accessibility, combined with the vast amounts of personal data available on social media and the dark web, enables fraudsters to tailor scams so convincingly that they feel authentic.

Ordinary people are increasingly on the frontline of AI-enabled scams. A phone call mimicking a child’s voice claiming to be in distress, or a WhatsApp video from a supposed community leader, can be enough to persuade someone to send money or share sensitive details. Vulnerable groups, including the elderly, migrants unfamiliar with local systems, and those less confident with technology, are often the first to be targeted.

The UK’s National Crime Agency recently reported that cases of deepfake-enabled fraud rose by more than 40% between 2022 and 2024, with losses running into hundreds of millions of pounds. Globally, Juniper Research estimates that AI-driven identity fraud could cost victims over $2.5 billion annually by 2027.

Cultural vulnerabilities

Fraudsters do not only exploit technology; they exploit culture. South Asian communities, for example, are especially vulnerable because of traits that are otherwise strengths — close family ties, deep respect for authority, and reliance on trusted business relationships.

Samani explains, “These are the very traits that perpetrators exploit using deepfake-enabled technology. Fraudsters pose as relatives in distress or trusted elders, to manipulate people’s instincts to help.”

By manipulating these cultural dynamics, scammers prey on people’s strongest values: their loyalty, generosity, and trust.

One of the biggest obstacles to tackling AI fraud is silence. Victims often feel a sense of shame, blaming themselves for being deceived. But experts insist this mindset only strengthens fraudsters’ hand.

Samani stresses, “We need to move away from ‘scam shame’. There is nothing to be embarrassed about. It can happen to even the most cautious people. Awareness shared across communities remains our strongest shield.”

Governments and regulators are beginning to act. Proposals for stricter digital identity checks, AI watermarking, and penalties for malicious use are under discussion in the UK and across Europe. But regulation moves slowly, while fraudsters move fast. Until systems catch up, awareness and vigilance remain the strongest protection.

AI will continue to shape the future, but how safe that future feels will depend on our collective ability to recognise, resist, and talk openly about its fraudulent side.
