The Guidance issued on 12 December 2023 by senior judges in England and Wales emphasizes that any use of AI within or on behalf of the judiciary must align with the overarching obligation to safeguard the integrity of the administration of justice. Judges are cautioned to be vigilant for potential indicators of Artificial Intelligence (“AI”) generated content, such as unfamiliar case references or foreign citations. The guidance follows an incident on 4 December 2023, in which a woman who relied on nine fabricated cases generated by ChatGPT to appeal against a capital gains tax penalty had her case rejected by the court. Likewise, in June 2023 a U.S. judge penalized two New York attorneys for submitting a legal document containing six fabricated case references created by ChatGPT.

Additionally, the guidance highlights warning signs such as parties citing disparate bodies of case law for identical legal issues, or making submissions that are inconsistent with a Judge’s general understanding of the law in a particular area.

The guidance also cautions against the unrestrained adoption of AI in legal research, emphasizing potential pitfalls such as factual inaccuracies and reliance on foreign law, and discourages the sharing of case information with online chatbots. The objective is to ensure that Judges have a comprehensive understanding of AI before incorporating it into decision-making processes, given the existing lack of public confidence in AI technology.

Despite these concerns, the guidance also acknowledges the potential utility of AI in aiding Judges with provisional cost assessments, and notes that some self-represented litigants are turning to AI tools for guidance.

Judges have also been cautioned about the privacy concerns associated with AI use and advised against inputting private or confidential information (that is not already publicly available) into a public AI chatbot. The guidance emphasizes that any information entered into a public AI chatbot should be regarded as published to the world, since these chatbots retain and make available all questions and input data; this stored information may then be used by the chatbot to answer queries from other users.

In conclusion, the guidance advises Judges to be aware that litigants and/or lawyers may have employed AI tools, and highlights the risk that parties, especially those without legal representation, might unknowingly rely on potentially erroneous information.

The Guidance discussed in this article may be accessed through the following link: https://www.judiciary.uk/guidance-and-resources/artificial-intelligence-ai-judicial-guidance/

For more information or assistance please contact Dr Ian Gauci.


Disclaimer: This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.