
In a landmark move, the Gujarat High Court has issued India’s most comprehensive court-level AI policy — explicitly prohibiting artificial intelligence from playing any role in judicial decisions, bail orders, or sentencing.
Released on April 4–5, 2026, and unveiled by Supreme Court Justice Vikram Nath, the 12-page “Policy on the Use of Artificial Intelligence in Judicial and Court Administration” is issued under Articles 225 and 227 of the Constitution, making it a binding institutional rule rather than an advisory guideline.
And unlike most policy documents that speak in vague generalities, this one names names.
ChatGPT, Gemini, Claude, DeepSeek, Grok — All Explicitly Banned for Judicial Work
The policy document lists, by name, the AI tools that fall under its prohibitions: ChatGPT, Gemini, Microsoft Copilot, DeepSeek, Claude, and Grok. Every generative AI large language model — whether accessed on a court computer, a personal laptop, or a mobile phone — is covered by this policy when used for court-related work.
This is not a guideline. It is a binding institutional rule, anchored in the right to a fair hearing under Article 21.
What Is Absolutely Prohibited
The policy uses the words “absolutely prohibited” and “never” without ambiguity. Artificial intelligence shall not be used — directly or indirectly — for:
- Any form of judicial decision-making, adjudication, or legal reasoning
- Bail decisions, sentencing considerations, interim orders, or final judgments
- Sorting, classifying, summarising, or evaluating evidence in any form
- Authoring or substantially composing any judgment or binding order — even if a judge reviews it afterward
- Entering party names, witness details, case information, or privileged communications into any public AI tool
- Using AI-generated case citations without independent verification from authoritative sources like SCC Online, AIR, or AIJEL
- Drafting, correcting, or summarising any office note or internal submission
The document goes further, explicitly stating that “even marginal expansion invites unacceptable peril to judicial integrity.” This is not a court being cautious — it is a court drawing a constitutional wall.
What Is Still Permitted — But Under Strict Conditions
The policy is not a blanket rejection of technology. AI is permitted in a narrow, tightly supervised support role:
- Legal research — finding relevant precedents, extracting ratio decidendi, identifying applicable statutes — but every output must be cross-verified against SCC Online, AIR, or AIJEL before it is relied upon
- Language and grammar improvement of a draft order — provided the substantive legal reasoning is entirely the judge’s own
- Generating a structural outline for a judgment — subject to full judicial rewriting
- Machine translation of documents — verified by a qualified human translator
- Case scheduling and cause list management using anonymised, metadata-based tools
- Administrative tasks such as IT automation, internal presentations, and circulars
The boundary is clear: AI as a research aid and administrative tool — never as a reasoning agent.
Personal Liability: Judges Cannot Blame the Algorithm
Perhaps the most legally significant aspect of the policy is its personal accountability clause. The document states explicitly: “The use of AI does not constitute a defence to a finding of error, misconduct, or professional negligence. Users cannot disclaim responsibility by attributing errors to an AI tool.”
Every judge remains fully and personally responsible for every order issued under their name — regardless of how that order was prepared. If an AI tool produces a hallucinated case citation and a judge uses it without verification, the judge is liable. Not the tool. Not the vendor. The judge.
This provision extends to legal assistants, research associates, interns, para-legal volunteers, and all contractual court staff. The policy covers everyone — and it applies whether they are working inside the courthouse or remotely on a personal device.
Why This Policy Exists: The Hallucination Problem
The Gujarat High Court has been watching AI’s failures in courtrooms for months — a concern that mirrors India’s broader struggle to regulate AI outputs, including deepfakes and synthetic media. Earlier in 2026, the court flagged a case where a tax adjudicating authority cited judgments that were either non-existent, incorrectly attributed, or entirely irrelevant — outputs that appeared to be AI-generated. The court called those findings “flawed and deceptive.”
The new policy is the institutional response to that incident. Its preamble acknowledges “documented instances of AI-generated fictitious judgments leading to misconduct findings” as a direct driver of this framework.
Violations Mean Disciplinary Action — And Potentially Criminal Liability
Breaching any provision of this policy is classified as misconduct, triggering departmental and disciplinary proceedings. But the consequences go further: violations can also attract liability under the Information Technology Act 2000, the Digital Personal Data Protection Act 2023, and the Bharatiya Nyaya Sanhita 2023 (India’s new criminal code).
What This Means for India’s AI Policy Landscape
The Gujarat High Court’s move does not exist in isolation. India’s Supreme Court has already issued notices to the Attorney General, Solicitor General, and Bar Council of India on the question of unsupervised AI in judicial rulings — with the next hearing scheduled for April 10, 2026. That hearing could set a national precedent extending well beyond Gujarat.
What the Gujarat High Court has done is establish the first detailed, constitutionally grounded, court-level AI governance framework in India. This comes at a time when AI is already transforming India’s legal tech ecosystem, raising urgent questions about where automation ends and adjudication begins.
The message is unambiguous: in India’s courtrooms, justice will be delivered by human conscience — not by algorithms.
