by Surbhi Sood and Bhavya Mitra
May 5, 2026
7 min read
Financial fraud today is behaviorally engineered, psychologically manipulative, and increasingly AI-enabled. It exploits authority, urgency, fear, and victim shame rather than consumer ignorance. The first blog in this three-part series argues that traditional awareness campaigns are insufficient and calls instead for behaviorally informed design, contextual safeguards, and compassionate grievance management systems.
The phone rings, and the call seems routine. It could be a bank representative who confirms your account details, a courier company that verifies your address for a pending delivery, or a government official who warns that your Aadhaar number has been flagged for suspicious activity and that you must act within the hour.
None of these calls feels like fraud, and that is precisely the point.
Financial fraud is no longer opportunistic; it is a precision-engineered behavioral system. Fraudsters study how trust is built, how urgency clouds judgment, and how shame keeps victims silent. Solutions will fail until we treat fraud as a psychological problem rather than an information problem.

Fraudsters succeed when they act unremarkably rather than suspiciously. They study the language, cadence, and escalation patterns of real institutions, including banks, telecom operators, government agencies, and couriers, and replicate them accurately. By the time a victim senses something is wrong, they have already been drawn into a pre-planned conversation.
Our 2024 report on consumer protection in digital financial services (DFS) across India, Bangladesh, and Kenya highlighted key patterns of exposure to fraud. It found that 55% of low- and moderate-income respondents had received fake calls or SMS messages that mimicked legitimate institutions. The most common types were impersonation scams and attempts to compromise personal identification numbers (PINs), which require no technical sophistication on the fraudster's part. They need only a convincing script and the right moment.
The research across the three countries also shows that more than 60% of respondents did not know what to do after fraud occurred. Crucially, it revealed that complaints about financial fraud often fall under multiple jurisdictions, including financial service providers (FSPs), financial regulators, and law enforcement agencies. Victims often do not know where these jurisdictional boundaries lie, which leads them to file complaints with the wrong authority and results in unresolved grievances and complaint rejections.
Artificial intelligence (AI) now outpaces consumer awareness of fraud tactics. Feedzai’s 2025 research found that voice cloning represents the most common form of AI-powered fraud reported by financial professionals globally, cited by 60% of respondents. Deepfake-related fraud surged 1,740% between 2022 and 2023. In the most high-profile case of the decade, UK-based engineering firm Arup lost USD 25.5 million to a deepfake scam. A finance worker approved 15 wire transfers during what appeared to be a routine video call, where every other participant was an AI-generated deepfake.
Most evidence on AI-enabled fraud focuses on global trends. However, early signals from markets such as India indicate growing exposure through impersonation scams, synthetic identities, and social engineering. This trend underscores the need for proactive safeguards within digital financial systems, even in contexts where large-scale incidents may not yet be fully documented.
The crisis has moved beyond simple phishing and now erodes trust in the most insidious way possible. To understand why people fall for fraud, we must examine how fraudsters manipulate the human mind. It comes down to three reliably exploitable mechanisms: authority, urgency, and fear.
Humans are prone to comply with credible authority figures. A caller who quotes an account number, references a recent transaction, and uses the correct department name instantly bypasses our skepticism. Research shows that victims fall for fraud not because they are careless, but because the signal environment is constructed to resemble legitimacy. In India, the impersonation of Central Bureau of Investigation (CBI) officers, Telecom Regulatory Authority of India (TRAI) officials, and bank fraud departments is now a scripted, scalable operation.
Yet, fraudsters do not wield authority solely through phone calls. MSC’s 2025 report on dark patterns in DFS documents how deceptive interface design exploits the same authority dynamic within legitimate-looking platforms. This design includes guilt-tripping language, hidden fees, and misleading consent flows. The line between dark patterns and fraud is thinner than most regulators acknowledge.
Deliberation is the fraudster's enemy, which is why the instruction to act now is never accidental. The fraudster demands action within 30 minutes, before an account is frozen, or before a penalty applies. Manufactured urgency is the single most effective mechanism to suppress verification behavior. ACFE notes that AI-enhanced social engineering has raised clickthrough rates on fraudulent communications by up to 45%, precisely because personalized urgency triggers automatic rather than reflective responses.
In India, a 67-year-old woman in Hyderabad was kept in effective digital house arrest for 17 days by fraudsters who impersonated crime investigation officers. She lost INR 55 million (USD 600,000) before her family understood what had happened. This case was not an outlier but part of a documented pattern of coercion-based scams in which the fear of legal consequences is weaponized to induce sustained compliance. Once a victim accepts the fraudster's logic, they no longer seek external validation. The fraudster becomes their only trusted guide, by design.
Gender compounds this dynamic. Officials interviewed during MSC's fieldwork on DFS fraud reported that fraudsters view women and elderly individuals as easier targets. Most female respondents in this study hesitated to approach authorities on their own and required a male family member to be present. Victims experienced the repeated questioning and multiple visits from authorities as barriers rather than as systems of support. After a fraud incident, most female and elderly victims relied on their children or male family members to conduct all online financial transactions on their behalf. Such dependence compounds long-term financial exclusion.
The most harmful phase of financial fraud is what happens after the incident. Shame means victims rarely report fraud immediately, and when they do, recovery rates are extremely low.
Academic research published in the Journal of Medical Case Reports in 2025 found that scam victims consistently experience depression, anxiety, shame, and post-traumatic stress disorder (PTSD). These symptoms are comparable to other forms of serious trauma. The victim-blaming that follows from family members, peers, and sometimes even institutions compounds the silence. The National Cybersecurity Alliance has documented that “fraud shame” often causes victims to withdraw from their family entirely, which increases the isolation that made them vulnerable in the first place.
MSC’s fieldwork on DFS fraud also found that the recovery rate of money lost to fraud remains below 1%. A Local Circles survey found that 74% of fraud victims in India could not recover their losses within three years of the fraud incident. The primary reasons cited were limited awareness of grievance resolution mechanisms, victims’ reluctance to file complaints out of fear of humiliation, and slow, inefficient coordination between banks and cybercrime reporting cells.
The victims' silence reflects a structural failure. MSC's 2024 report also found that more than 60% of fraud victims across India, Bangladesh, and Kenya did not know that grievance resolution mechanisms existed in the first place. Of the victims who tried to report such fraud, 48% had their complaints dismissed due to a lack of evidence.
The system is not built for the victim’s reality.
The Global Anti-Scam Alliance's 2025 report found that 57% of respondents across 42 countries had been targeted by scams in the previous year, and 23% lost money. The gap between what happens and what is recorded exists precisely because shame, confusion, and institutional distrust keep victims quiet.
The dominant consumer protection paradigm of awareness campaigns, warning messages, and tip sheets rests on a flawed assumption: that people fall for fraud because they lack information. They do not. They fall for fraud because their cognitive and emotional systems are expertly manipulated at precisely the moment when they are most vulnerable.
MSC’s fieldwork on DFS fraud confirms this directly. Most fraud victims expressed lower confidence in DFS after their experiences, and many subsequently relied on family members to conduct transactions on their behalf. Generic awareness campaigns do not rebuild confidence or reach people at the moment of risk. The following four design shifts matter most:
The design gap between fraud tactics and consumer protection must close. FSPs, regulators, and capability-building practitioners must stop asking, "Did we tell them?" and instead ask, "Did the design protect them when it mattered?"
The fraudster already knows the answer to that question. It is time for protection systems to catch up with the fraudsters.
This is Blog 1 of a three-part MSC series on fraud supply chains. Blog 2 examines how fraud operations are industrialized, scripted, and monetized.