Set the guardrails for AI use in courtrooms

In July this year, the Kerala High Court published a set of guidelines for Artificial Intelligence (AI) use by the district judiciary (“Policy Regarding Use of Artificial Intelligence Tools in District Judiciary”). As the first policy in the country to directly address AI use in judicial processes and set out strict safeguards, it is timely. AI tools, from document translation to defect identification in filings, are expected to improve speed and efficiency, which are attractive incentives for a court system that has five crore pending cases.

There are issues

But seemingly innocuous tasks such as AI-enabled translation and transcription are not without risks. For example, a Supreme Court of India judge reported the translation of ‘leave granted’ into ‘chhutti sweekaar’ (holiday approved) in Hindi. In Noel Anthony Clarke vs Guardian News & Media Ltd. [2025] EWHC 550 (KB), an AI transcription tool repeatedly transcribed the claimant’s name, “Noel”, as “no”. OpenAI’s Whisper, an AI-powered speech recognition system, was reported to occasionally make up or “hallucinate” entire phrases and sentences, especially when speakers left longer pauses between their words.

Search engine bias in AI-enabled legal research could nudge users toward results shaped by past usage patterns, potentially ‘invisibilising’ relevant precedents. A study published in the Journal of Empirical Legal Studies found that legal Large Language Models (LLMs) can fabricate case law and cite incorrect sources to substantiate claims.

At a more structural level, AI risks reducing adjudication to rule-based inference, overlooking the combination of human judgment, specific context, and the relevance of precedents that shapes judicial decision-making.

Some market tools, such as those for transcribing oral arguments and witness depositions, are currently being used in courts on a non-commercial test basis. Without specified time-frames, success parameters, or a framework for the access, storage, and use of non-public, sensitive or personal data, such pilots warrant careful consideration. AI tools offered to courts on a test basis risk creating dependencies without clear pathways to sustainable adoption. Moreover, new technological paradigms demand essential infrastructure such as reliable Internet connectivity and hardware.

A quick analysis of publicly available tenders for AI services across courts shows that even where adoption is cautious, courts are not necessarily designing risk management frameworks to address ethical and legal risks. While human checks and balances, such as manual vetting of AI-translated judgements by retired judges, advocates and translators, are in place, AI systems learn from available data and can err as they encounter new information in new contexts. Scholars note that hallucinations in LLMs are a feature, not a bug, requiring human oversight and careful adoption in high-risk scenarios.

As courts increasingly integrate AI into their daily work, the combination of AI’s ethical risks and the complexity of the legal system requires effective guardrails. Since the majority of court procedures remain paper-based, any transition to advanced AI deployment must not further debilitate an already imperfect system.

First, there is a need for critical AI literacy among judges, court staff and lawyers. In addition to capacity building in the use of AI tools, programmes are also required to help users understand the limitations of the systems deployed. Judicial academies and bar associations, in collaboration with AI governance experts, are well placed to facilitate such capacity building.

Second, guidelines are needed to shape individual use of generative AI for research and judgment writing. If AI is used in the adjudication process, litigants must have a right to be informed. Similarly, litigants and lawyers have a right to know if AI is being used in particular courtrooms. Given the potential for errors arising from AI use, courts should examine whether litigants may be permitted to opt out of pilots or fully deployed AI systems if they have concerns about safeguards or human oversight.

Third, courts need to adopt standardised procurement guidelines to support the evaluation of a proposed AI system’s reliability and suitability for the task at hand. Pre-procurement steps will also help courts diagnose the exact problem to be solved and determine whether AI is the best solution. Procurement frameworks can guide assessment of technical criteria around explainability, data management and risk mitigation.

These frameworks will enable decision-makers to monitor vendor compliance and performance, which may be beyond the routine expertise of judges and the registry.

On the eCourts project

The Vision Document for Phase III of the eCourts Project (e-Committee, Supreme Court of India) acknowledges the need to create technology offices to guide courts in assessing, selecting, and overseeing the implementation of complex digital solutions, including infrastructure and software. Such scaffolding to aid decision-making on AI use and adoption is one way to overcome gaps in technical expertise. Dedicated specialists can give courts clearer guidance in adopting AI tools as part of comprehensive planning.

As courts inch towards AI adoption, it is important not to lose sight of the ultimate purpose of AI in the system — to serve the ends of justice. In this rapidly evolving technological landscape, clear guidelines on the use and adoption of AI in courts are essential to ensure that the drive for an efficient court system does not eclipse the nuanced reasoning and human decision-making that are at the heart of the adjudicatory process.

Leah Verghese and Smita Mutt work at DAKSH, Bengaluru. Dona Mathew works at Digital Futures Lab, Goa

Published – August 23, 2025 12:08 am IST