Ahmedabad Police Bust Aadhaar Fraud Ring — Accused Used AI Deepfake ‘Blink Videos’ to Bypass KYC and Steal a Businessman’s Identity


Aadhaar deepfake fraud has moved from theoretical threat to a real Ahmedabad arrest — and the method the accused used should concern every Indian who blinks on command to verify their identity online.

There is a moment every Indian goes through when setting up a new bank account, applying for a personal loan, or updating their Aadhaar details online. The screen asks you to blink. You blink. A camera confirms you are a real, living human being. The system lets you through.

That moment — that blink — was exactly what four men in Ahmedabad figured out how to fake.

In a case that has sent shockwaves through India’s cybersecurity and fintech community, the Cyber Cell of Ahmedabad’s Crime Branch arrested four accused on Tuesday for running a sophisticated identity theft operation that used AI-generated deepfake videos to impersonate a victim’s blink, fool Aadhaar’s facial authentication system, and hijack a businessman’s entire digital identity — from his DigiLocker account to his bank accounts to loan applications filed in his name.

This is not a theoretical vulnerability. It happened. In your city.

What Happened: The Case at a Glance

The victim is Amit Patel, a Thaltej resident and director of Bonneville Foods Private Limited. Like most Indian professionals, his entire digital life — government documents, bank accounts, KYC records — was tied to his Aadhaar number.

One day, he sat down to access Aadhaar-linked services for routine business documentation. Something was wrong. The mobile number registered against his Aadhaar was not his. Someone had changed it quietly, without his knowledge, at some point before he noticed. When he dug further, he found his Aadhaar profile had been entirely altered: a different mobile number, a different email address. And with those changes, his digital life had effectively been handed to strangers.

The complaint he filed at the Cyber Crime Police Station set off an investigation that uncovered something law enforcement had been warning about for years, but few had seen executed this precisely at the ground level: a coordinated, AI-assisted identity takeover operation running out of Ahmedabad.

How the Scam Actually Worked — Step by Step


Understanding what these four men allegedly did is important, because this is not a “tech company got hacked” story. This is a systematic, step-by-step exploitation of gaps between real-world government infrastructure and the AI technology that now threatens it.

Step 1 — Getting the victim’s details. Mohammad Kaif Patel, one of the four accused, allegedly coordinated the acquisition of Amit Patel’s Aadhaar number, linked mobile number, and — critically — a photograph of the victim. In today’s India, a person’s photograph is rarely hard to find. Social media profiles, business websites, LinkedIn pages, even WhatsApp display photos can all serve as source material. This step cost the fraudsters essentially nothing.

Step 2 — Creating the deepfake blink video. This is the technological heart of the case. Using AI-based tools capable of generating realistic deepfake videos from a single static image, the accused created a short video of Amit Patel’s face — with one critical addition: simulated blinking and facial movement.

Aadhaar’s facial authentication process uses liveness detection, a system that asks users to perform actions like blinking or turning their head to prove they are a real, present human and not a photograph or a recorded video. The AI tools used by the accused were sophisticated enough to animate a still photograph to convincingly simulate these exact movements.

Police confirm the manipulated video was then used to pass Aadhaar’s biometric authentication check.

Step 3 — The Aadhaar update kit. Here is where the institutional vulnerability enters the picture. Aadhaar can be updated — mobile number changes, address changes, and more — through authorised Common Service Centres, or CSCs, which are government-approved local service points. CSC operators are issued official “Aadhaar update kits” — hardware and software setups that allow them to process these changes.

Kanubhai Parmar, another accused, allegedly supplied such a kit to the group. Ashish Waland allegedly provided access to the specific kit used in this case, in exchange for a commission per transaction. It is worth noting that police records show Waland was previously booked by Vadodara Rural Police in a separate case involving the preparation of fake Aadhaar cards. He was not a new face in this particular business.

Step 4 — Changing the registered mobile number. With the deepfake video passing the biometric check, and with access to an Aadhaar update kit, the accused changed the mobile number registered against Amit Patel’s Aadhaar to one they controlled. This is the master key that opens every other lock.

Step 5 — Taking over the entire digital identity. Once the registered mobile number was theirs, OTPs — one-time passwords — for every linked service went to the fraudsters, not the victim. They accessed his DigiLocker account. They completed KYC verification on digital lending platforms. They opened bank accounts in his name. They applied for personal loans.

Deep Maheshbhai Gupta, the fourth accused, allegedly assisted in arranging and transmitting the victim’s details and photograph for the fraudulent operation.

All four are currently in judicial custody. The Aadhaar update kit used in the crime has been recovered.

The investigation is ongoing. Police say they are working to determine whether more victims were targeted.

Why This Case Is Bigger Than One Victim

Here is the number that should put this in context: UIDAI had processed over two billion Aadhaar face authentication transactions by August 2025. Over 500 businesses across banking, insurance, and telecom sectors use Aadhaar for user verification. Virtually every Indian who has opened a bank account, taken a digital loan, applied for a government scheme, or registered on a fintech platform in the last five years has gone through this system.

India’s cybercrime problem has been growing faster than its defences. According to government data, cybercrime in India rose 24% in 2025, with total financial losses reaching ₹22,495 crore. The Union Budget 2025-26 allocated ₹782 crore for cybersecurity projects — significant, but a fraction of what is being lost.

The Ahmedabad case is not an isolated incident. It is a data point in a very visible trend. In 2025, Indian authorities had already uncovered a fraud ring that exploited enrollment software vulnerabilities and worked with CSC operators to illegally modify the biometric data of over 1,500 Aadhaar holders. The methods are evolving. The audacity is increasing. And the common thread running through nearly every major case is the same: a gap between how the system was designed and how sophisticated the adversary has become.

Ahmedabad itself recorded 694 cyber fraud cases between February 2024 and January 2026, involving ₹134.45 crore — and police managed to recover only ₹49.01 crore of that. What you don’t recover, you lose permanently.

Aadhaar Deepfake Fraud Explained: Can AI Really Bypass Biometric Security?

This is the question that matters most for India’s 1.4 billion Aadhaar holders, and the answer is uncomfortable: under specific conditions, yes — and the security community has known this was coming.

How liveness detection is supposed to work. Aadhaar’s facial authentication system uses what is called “liveness detection.” When you blink on command, the system is looking for natural, spontaneous human movement — micro-expressions, eye moisture reflection, natural skin texture under different angles of light. The logic is sound: a printed photograph cannot blink. A pre-recorded static video cannot follow real-time instructions.
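To make concrete what a blink check is actually measuring, here is a minimal sketch built on the eye aspect ratio (EAR), a well-known computer-vision heuristic for blink detection. This is not Aadhaar's actual algorithm, whose internals are not public; it only illustrates the kind of geometric signal a basic liveness check watches for. The threshold and frame-count values are illustrative assumptions.

```python
import math

# Eye aspect ratio (EAR): a standard blink-detection heuristic.
# The eye is described by six (x, y) landmarks:
#   p1/p4 = eye corners, p2/p3 = upper lid, p5/p6 = lower lid.
# When the eye closes, the vertical distances collapse while the
# horizontal distance stays roughly constant, so EAR dips sharply.

def eye_aspect_ratio(landmarks):
    """landmarks: six (x, y) tuples in the order [p1..p6]."""
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

def blink_detected(ear_series, threshold=0.21, min_frames=2):
    """Flag a blink when EAR stays below threshold for consecutive frames."""
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
            if run >= min_frames:
                return True
        else:
            run = 0
    return False
```

The weakness is visible in the code itself: the detector only sees the landmark geometry over time. A deepfake that animates the eyelid landmarks realistically produces exactly the dip in the EAR series that the check is looking for.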

Where modern deepfakes change the equation. The problem is that AI-generated deepfake technology has reached a level where it can animate a static image to produce convincing simulated blinking on command. The World Economic Forum’s Cybercrime Atlas, published in January 2026, tested 17 face-swapping tools and found that most were capable of bypassing standard biometric onboarding checks. A fraudster, the WEF noted, can develop a new method for generating synthetic faces in approximately two days — while a KYC vendor updating its detection model operates on a much longer release cycle.

That gap — between how fast attackers move and how fast defenders update — is where fraud happens.

What UIDAI has been doing. To its credit, UIDAI is not standing still. In late 2025, the authority launched the SITAA (Startup India Technology Acceleration in Aadhaar) challenge, inviting startups to build next-generation SDKs for liveness detection that go beyond simple blink checks. The program focuses on real-time presentation attack detection, contactless fingerprint authentication, and AI/ML models that flag anomalies in facial behavior across different devices, lighting conditions, and demographics.

UIDAI’s current system already uses AI and machine learning to detect inconsistent lighting, unnatural facial contours, and anomalous patterns in authentication attempts. It also employs real-time transaction monitoring that flags repeated attempts, geographical anomalies, and volume spikes.
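Monitoring for repeated attempts of the kind described above can be sketched as a sliding-window counter per Aadhaar number. This is an illustrative toy, not UIDAI's system: the class name, window size, and threshold are all invented for the example, and real monitoring combines many more signals (geography, device, transaction velocity).

```python
from collections import defaultdict, deque

# Toy sliding-window monitor: flag an ID that sees an unusual burst
# of authentication attempts within a time window. Thresholds are
# illustrative assumptions, not real UIDAI parameters.
class AuthAttemptMonitor:
    def __init__(self, window_seconds=3600, max_attempts=5):
        self.window = window_seconds
        self.max_attempts = max_attempts
        self.attempts = defaultdict(deque)  # id -> recent timestamps

    def record(self, aadhaar_id, timestamp):
        """Record one attempt; return True if this ID should be flagged."""
        q = self.attempts[aadhaar_id]
        q.append(timestamp)
        # Evict attempts that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_attempts

monitor = AuthAttemptMonitor(window_seconds=3600, max_attempts=3)
# Four attempts within an hour against the same ID: the fourth is flagged.
flags = [monitor.record("demo-id", t) for t in (0, 100, 200, 300)]
# flags == [False, False, False, True]
```

The design choice worth noting is that the counter is per identity, not per device or operator, which is precisely why a fraudster working through a legitimate CSC kit at normal transaction volumes can slip under rules like this one.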

The Ahmedabad case suggests that in at least some implementation contexts — particularly through the CSC operator pathway — the current defences were not sufficient. Whether this reflects a gap in the technology, a gap in the processes followed by CSC operators, or both, is something investigators will need to establish.

One security researcher, writing for Fair Observer, put it plainly: “Without additional protective measures and regulations in place, preventing fraudulent activities related to Aadhaar in the era of AI will become increasingly complex, and even a minor leak could result in identity theft or cyber fraud.”

That minor leak just became a police case in Thaltej.

The CSC Operator Problem Nobody Wants to Talk About

The Common Service Centre network is one of India’s great achievements in last-mile digital access. There are over 5 lakh CSCs across the country, bringing government services to villages and small towns that would otherwise have no access. For hundreds of millions of Indians, a CSC operator is their only practical point of contact with Aadhaar, PAN, DigiLocker, and government welfare schemes.

But the same network that enables access also enables abuse — when a CSC operator or someone with access to a CSC kit decides to misuse their position.

In the Ahmedabad case, the fraudsters did not hack UIDAI’s central systems. They exploited the human and institutional layer beneath it. They found someone with a kit. They paid a commission. The system processed the request as if it were legitimate — because, from the system’s perspective, it was coming through a legitimate channel.

This is a known and documented vulnerability pattern. It has appeared in multiple Aadhaar fraud cases across India. It will keep appearing until either CSC-level authentication is strengthened significantly, or the commission-based incentive structure that enables corrupt actors is fundamentally rethought.


What Every Indian Must Do Right Now

If you are reading this and have an Aadhaar number — which is to say, if you are reading this in India — these are not suggestions. These are things you should check today.

1. Check your Aadhaar authentication history. Visit myaadhaar.uidai.gov.in → "Authentication History" to see every recent authentication against your Aadhaar, and flag anything unfamiliar.

2. Lock your Aadhaar biometrics. Locking blocks all fingerprint and facial authentication until you unlock it. It is free and takes two minutes at myaadhaar.uidai.gov.in or in the mAadhaar app.

3. Verify your registered mobile number. This is the master key to your digital identity. If it has been changed without your knowledge, as it was in this case, you are fully exposed. Check at myaadhaar.uidai.gov.in.

4. Secure your DigiLocker. Enable two-factor authentication, review recently accessed documents, and log out of any sessions you don't recognise.

5. Check your credit report. The fraudsters in this case took out loans in the victim's name. Get your free report from CIBIL, Experian, Equifax, or CRIF High Mark and look for loan applications you never made.

6. Report suspicious activity immediately. Call 1930, the National Cybercrime Helpline, or file a complaint at cybercrime.gov.in. Act fast: early reports allow authorities to freeze accounts before money moves.
⚠️ Time matters. The faster you report fraud, the higher the chance of recovery. Ahmedabad police recovered only ₹49 cr of ₹134 cr lost — early action is everything.

The Bigger Picture: AI Is Now Operating Inside India’s Digital Infrastructure


What the Ahmedabad case really represents is something the Indian technology community needs to sit with for a moment.

For years, discussions about “AI risks in India” lived in the realm of the abstract — academic papers, policy conferences, think-tank reports about what might happen if deepfakes were weaponised. This case does not live in that realm anymore. It happened in Ahmedabad. The victim has a name. The accused have names. The court has case numbers. The kit has been recovered.

AI-powered fraud has arrived at the intersection of India’s two most critical digital systems — Aadhaar identity and the banking-fintech ecosystem — and it has found a working method of attack.

The good news is that the police found them. The better news is that the investigation is ongoing: police have indicated more victims may have been targeted, and the full scale of the operation may yet be established. The concerning news is that this method worked at all — and if it worked once, it has almost certainly been attempted elsewhere.

India is building one of the world’s most ambitious digital public infrastructure stacks. Aadhaar, UPI, DigiLocker, the Account Aggregator framework, ONDC — these systems are the backbone of the country’s economic ambitions. Their security cannot be an afterthought.

The people who discovered this Ahmedabad businessman’s compromised Aadhaar were his own cybercrime police — after he filed a complaint. The question worth asking is: how many people have not yet noticed?

Official Response and Legal Action

Charges have been filed against all four accused. The case has been registered under the Bharatiya Nyaya Sanhita (BNS) 2023, India's new criminal code replacing the IPC, for criminal conspiracy and cheating. Further charges under the Information Technology Act cover identity theft, forgery, and unauthorised access to computer resources, the provisions applicable to the creation of the deepfake and the illegal Aadhaar authentication.

All four accused are currently in judicial custody, and the investigation is ongoing.

What Happens Next — Watch These Developments

UIDAI's official response. Will the authority acknowledge this specific attack vector and announce changes to CSC-level authentication protocols?

Scale of the fraud network. Police have indicated that more victims may have been targeted. The number, when released, will be significant.

India's AIGEG response. India's newly formed AI Governance and Economic Group (AIGEG), constituted in April 2026, has a specific mandate to assess AI-related risks to public systems. This case is precisely the kind of incident that should trigger a formal review.

CSC policy changes. The Ministry of Electronics and IT (MeitY) oversees the CSC network, and pressure is likely to mount for stronger authentication requirements for operators handling Aadhaar update kits.
AITechNews will update this article as new developments emerge. Bookmark this page.
