Integrating Intelligence: The Courts’ Evolving Engagement with AI

Posted On - 24 December, 2025 • By - KM Team

Introduction

Artificial Intelligence (“AI”) has started to redefine the contours of dispute resolution globally, and India is no exception. While AI-driven tools promise to accelerate legal research, streamline case management, and enhance judicial efficiency, their entry into the justice system has sparked complex questions about accuracy and accountability. The Indian courts have approached this technological frontier with caution. Even as they are experimenting with AI-assisted processes, the indispensability of human oversight has been highlighted. As the boundaries between technology and judicial reasoning continue to blur, the coming years will test how far the judicial system can embrace innovation without compromising trust in the rule of law.

This Rewind maps out how Indian courts are integrating AI in judicial processes, their responses to AI-hallucinated citations, emerging global regulatory trends, AI-driven personality-rights disputes, and the interplay of AI with the new legal framework – Digital Personal Data Protection Act, 2023 (“DPDP Act”).

Integration of AI in the Indian Judiciary

As part of the National e-Governance Plan, Phase III of the eCourts Mission Mode Project seeks to transform the Indian judiciary through modernization. The project aims to enhance judicial productivity and efficiency while ensuring that justice delivery remains accessible, cost-effective, reliable, and transparent. Within Phase III, INR 53.57 million has been allocated for the ‘Future Technological Advancements’ component, focusing on the integration of emerging technologies such as AI and blockchain to improve user experience. The Supreme Court of India (“Supreme Court”) has constituted an Artificial Intelligence Committee to explore AI applications in the judicial domain.1

  1. Supreme Court

While the Supreme Court has declined to issue directions on AI in a recent public interest litigation, it has recognised the dangers of AI being used in the judicial system.2 The White Paper on AI and Judiciary (“White Paper”) published by the Supreme Court3 sheds light on the approach adopted towards AI in the Indian judicial system. The White Paper identifies artificial intelligence as a crucial tool for addressing India’s judicial backlog of over five crore cases. It highlights AI’s role in improving case management, legal research, and transparency. At the same time, it emphasises that AI is intended to support, and not replace, human judgment. The Supreme Court has introduced indigenous tools, including SUPACE for analysing case records, SUVAS for translating judgments into 19 languages, and TERES for real-time transcription.

The Supreme Court warns that premature AI adoption could compromise judicial integrity. Risks include inaccurate outputs, hallucinations (e.g., false citations), and algorithmic bias that may perpetuate social hierarchies. Data privacy concerns arise when sensitive case data is processed by insecure or overseas systems. Overreliance on AI could also erode judicial independence. The White Paper outlines potential AI applications in courts and provides guidelines for responsible use, tailored for judges, lawyers, and clerks. Core principles include mandatory human verification and strict respect for confidentiality.

  2. High Courts

A landmark step by the judiciary was the AI policy issued by the Kerala High Court in July 2025. The policy, titled “Policy regarding use of Artificial Intelligence Tools in District Judiciary”, seeks to ensure the responsible use of AI as a tool for assistance and not as a substitute for legal reasoning. The scope of the policy is not restricted to Generative AI but covers all kinds of AI, including databases providing access to case law and statutes.

The policy reaffirms the judicial principles of transparency, fairness, and accountability as essential standards for AI use, and emphasises meticulous human oversight to prevent errors or hallucinations. AI tools cannot be used to determine findings or judgments, as responsibility for the integrity of judicial decisions rests solely with judges. The policy warns against using cloud-based generative AI tools such as ChatGPT or DeepSeek for case-related information due to the risk of confidentiality breaches, permitting only approved tools. AI may assist in legal research, but strict restrictions apply against uploading personal data, privileged communications, or confidential documents onto cloud-based AI platforms.

Arguments backed by AI Hallucinations: Courts are not amused

While the White Paper and guidelines discussed above outline the judiciary’s theoretical stance, Indian courts are increasingly grappling with the risks of unverified AI-generated legal research, particularly when such material is cited in judicial or quasi-judicial proceedings. This concern has been underscored by recent 2025 decisions that revealed the dangers of AI-fabricated case law.

In KMG Wires vs. NFAC Delhi4, the Bombay High Court found that an Income-Tax Assessing Officer had relied on three judicial decisions that did not exist. The High Court observed with considerable alarm: 

In this era of Artificial Intelligence (AI), one tends to place much reliance on the results thrown open by the system. However, when one is exercising quasi-judicial functions, it goes without saying that such results [which are thrown open by AI] are not to be blindly relied upon, but the same should be duly cross verified before using them. Otherwise mistakes like the present one creep in.

The Court quashed and set aside the assessment order, remanding the matter back to the Assessing Officer, with directions to issue a fresh show cause notice and grant reasonable opportunity of hearing to the petitioner. 

In Deepak Raheja & Anr. v. Omkara Assets Reconstruction Private Ltd. & Anr.5, the Appellant before the Supreme Court had used AI tools to draft a rejoinder. However, the AI appeared to have hallucinated and produced a document containing several fake cases and citations. During the course of the hearing, the lawyers for the Appellant admitted the error and tendered an unconditional apology.6

A similar issue arose in Greenopolis Welfare Association v. Narender Singh and Ors.7, where the Delhi High Court noted that the judicial precedents relied upon by the petitioners did not exist and thus, allowed the petitioners to withdraw the case. 

These cases demonstrate that the risks posed by unverified AI-generated content are no longer theoretical. They are real threats which, if not tackled appropriately, may seriously compromise the judicial process. These cases also underscore the ethical obligation of lawyers and judicial officers to independently verify authorities before relying on them. Unless lawyers and judicial officers are vigilant and meticulously ensure that AI-generated content is accurate, the promise of AI may be overshadowed by the dangers of inaccuracy and the erosion of trust in the justice system.

AI and Personality Rights: Emerging Jurisprudence in India

Many celebrities and public figures in India, including Amitabh Bachchan, Abhishek Bachchan, Hrithik Roshan, Anil Kapoor, Salman Khan, Akshay Kumar, Karan Johar, and Sri Sri Ravi Shankar, have approached Indian courts to protect their personality rights against misuse through AI-created fake audio clips, deepfake videos, and other deceptive digital replicas that have the potential to cause them serious reputational harm.

With a single prompt, anyone can clone a person’s voice, generate lifelike videos, morph images, or even build chatbots that sound and act like real individuals. Recent Indian cases show a worrying pattern: anonymous individuals are using these tools to imitate celebrities without their knowledge or consent. For public figures who carry significant influence and responsibility towards society, such misuse not only tarnishes personal reputation but also poses broader risks to public trust and societal confidence.

A clear example of this challenge appears in Asha Bhosle v. Mayk Inc.8, where the singer sought protection against AI-driven voice cloning that allowed users to convert any voice into one that mimicked her distinctive singing style. The Bombay High Court held that such technological tools violate personality rights because they enable unauthorized appropriation of an artist’s most personal attribute: their voice. The court relied on the earlier judgments in Arijit Singh v. Codible Ventures LLP9 and Aishwarya Rai Bachchan v. Aishwaryaworld.com & Ors.10 and issued an interim injunction restraining any use of her voice, images, likeness, or persona through AI or similar technology without her express consent.

The courts have also recognised the communal implications of AI-generated videos. In Akshay Hari Om Bhatia v. John Doe & Ors.,11 the Bombay High Court ordered the removal of highly realistic deepfake videos depicting actor Akshay Kumar in communally sensitive scenarios, citing risks to public order and social harmony. The court granted broad ad-interim relief, including takedowns and restrictions on the misuse of his identity.

Similarly, in Sudhir Chaudhary v. Meta Platforms Inc.12, the Delhi High Court intervened when AI-generated videos falsely portrayed the journalist making political statements. Recognizing the potential for widespread misinformation and harm, the Court directed immediate takedowns and prohibited further creation or dissemination of such content. Both rulings highlight the judiciary’s proactive stance against AI-driven identity exploitation and its implications for societal stability.

In most of these cases, platforms such as Google, Facebook, X, YouTube, and other intermediaries are impleaded as parties, with courts issuing targeted takedown directions and requiring them to promptly remove or disable access to AI-generated offending content. Indian courts are repeatedly providing technology-neutral remedies to help individuals reclaim control over their identities. 

Global overview of AI and disputes

Courts around the world are starting to set clearer rules on how lawyers, litigants, and court staff can use generative AI. The aim is to embrace the efficiency these tools offer without exposing the justice system to risks like inaccurate outputs, data leaks, or breaches of confidentiality.

In the United States, New York State’s Unified Court System introduced an interim policy in October 2025 that allows only approved private AI models such as Microsoft Azure AI and Copilot for court-related work. It also requires users to undergo training and strictly forbids entering confidential information into public tools like ChatGPT, recognising the dangers of data exposure and AI-generated errors in legal documents.13

Singapore has taken a similarly careful approach. Under its Guide for Court Users, in force since October 2024, AI may be used for tasks like drafting. However, users must double-check the accuracy of outputs, avoid fabricating evidence, and be ready to disclose AI use if asked. These expectations are tied directly to existing professional conduct obligations.14 A draft guide for Singapore’s wider legal sector, released in 2025, highlights the role of professional ethics, confidentiality, and transparency in the use of AI in the legal sector.15

Interplay Between DPDP Act and AI-Generated Content

Under the DPDP Act, consent is king. The DPDP Act creates a robust, consent-centric, rights-based framework for personal data, which is broadly defined as any data relating to an identifiable individual. Under the DPDP Act, a ‘Data Principal’ (an individual whose data is being processed) has the right to access information about what data is held and how it is processed, and to seek correction, updating, or erasure of personal data for which consent was given earlier.

These rights become particularly significant in cases where one’s name, image, likeness, voice or other identifying data may be collected or processed, for example, by AI platforms or social-media intermediaries. By invoking these statutory rights, one can demand transparency, withdraw their consent, and compel erasure if their personal data is being misused.

Further, the DPDP Act imposes clear obligations on ‘Data Fiduciaries’ (entities deciding how and why data is processed). Such obligations include ensuring the accuracy and completeness of data where processing affects the data principal or where data is shared with other fiduciaries, implementing security safeguards, and erasing data once it is no longer needed or when consent is withdrawn. Taken together, the provisions on access, correction, and erasure rights, fiduciary duties, and obligations on consent and data security equip public figures and ordinary individuals alike with a statutory toolbox for challenging the misuse of identity or biometric data, including by AI platforms.

Our Thoughts

The Indian Judiciary’s approach to AI reflects a balance between innovation and institutional integrity. Indian courts recognize AI’s potential to address systemic challenges while ensuring that AI remains an assistive tool and not a substitute for judicial reasoning. 

Taken together, judicial guidelines, cautionary rulings, and personality-rights jurisprudence reveal a coherent trajectory: India is building an AI-enabled justice system grounded in oversight, rights, and accountability.  With the DPDP Act providing a statutory framework for data protection, India is positioned to navigate the opportunities and risks of AI in a holistic manner that strengthens both access to justice and public confidence. 

In 2025, AI was increasingly integrated into the Indian judicial system. In 2026, the judiciary must address the following three critical areas to ensure the efficient use and growth of AI in the practice of law:

  1. Standardization of Protocols: Prevent friction between different High Courts (e.g., Kerala having a policy while others do not) in the use and implementation of AI. We hope that in 2026 the Supreme Court will issue an AI operating guideline applicable to all courts.
  2. Enforcement of Liability: Liability for submitting materials generated through AI hallucinations may move from mere apologies to financial penalties.
  3. The “Black Box” Challenge: The integration of AI in judicial processes raises the “black box” challenge, where opaque algorithmic reasoning limits transparency, accountability, and the ability to scrutinize decision-making. Ensuring transparency in the use of AI will determine the pace at which AI is adopted and transforms legal practice.

The information contained in this document is not legal advice or legal opinion. The contents recorded in the said document are for informational purposes only and should not be used for commercial purposes. Acuity Law LLP disclaims all liability to any person for any loss or damage caused by errors or omissions, whether arising from negligence, accident, or any other cause. 


  1. Digital Transformation of Justice: Integrating AI in India’s Judiciary and Law Enforcement, Press Information Bureau (can be accessed at: https://www.pib.gov.in/PressReleasePage.aspx?PRID=2106239&reg=3&lang=2  ) ↩︎
  2. Aarati Sah vs. Union of India – W.P. (Civil) No. 1127/2025 – Order dated 04.12.2025 ↩︎
  3. White Paper on Artificial Intelligence and Judiciary, Centre for Research and Planning, Supreme Court of India (November 2025) ↩︎
  4. Writ Petition (L) No. 24366 OF 2025 – Order dated 06.10.2025 ↩︎
  5. Civil Appeal No(s). 12195/2025 ↩︎
  6. https://timesofindia.indiatimes.com/india/fake-cases-twisted-verdicts-as-petitioner-files-ai-reply/articleshow/125854128.cms ↩︎
  7. CM(M) 1909/2025 – Order dated 25 September 2025 ↩︎
  8. Interim Application (L) No. 30382 of 2025 in Commercial IP Suit (L) No. 30262 of 2025 – Order dated 29.09.2025 ↩︎
  9. 2024 SCC OnLine Bom 2445 ↩︎
  10. 2025 SCC OnLine Del 5943 ↩︎
  11. Interim Application (L) No. 33184 of 2025 in Commercial IP Suit (L) No. 32986 of 2025 – Order dated 15.10.2025 ↩︎
  12. CS(COMM) 1089/2025 – Order dated 10.10.2025 ↩︎
  13. New York State Unified Court System Interim Policy on the use of Artificial Intelligence (October 2025) ↩︎
  14. Guide on the use of Generative Artificial Intelligence Tools by Court Users (October 2024) ↩︎
  15. Guide for Using Generative AI in the Legal Sector (Draft for public consultation) (September 2025) ↩︎
