Additionally, the said guidance highlights warning signs, such as parties citing disparate bodies of case law on identical legal issues or making submissions that are inconsistent with a Judge's general legal understanding of a particular area.
The said guidance cautions against the unrestrained adoption of AI in legal research, emphasizing potential pitfalls such as factual inaccuracies and reliance on foreign laws, and discourages the sharing of case information with online chatbots. The objective here is to ensure that Judges have a comprehensive understanding of AI before incorporating it into decision-making processes, given the existing lack of public confidence in AI technology.
Despite these concerns, the guidance also acknowledges the potential utility of AI in aiding Judges with provisional cost assessments and notes that some self-represented litigants are turning to AI tools for guidance.
Judges have also been cautioned about the privacy concerns associated with AI use and advised against inputting private or confidential information (that is not already publicly available) into a public AI chatbot. The guidance emphasizes that any information entered into a public AI chatbot should be considered as published globally, since these chatbots retain and make available all questions and input data, and this stored information may then be used by the chatbot to address queries from other users.
In conclusion, the guidance advises Judges to be aware that litigants and/or lawyers may have employed AI tools, and highlights the risk that parties, especially those without legal representation, might unknowingly rely on potentially erroneous information.
The relevant Guidance issued and discussed in this article may be accessed via the following link: https://www.judiciary.uk/guidance-and-resources/artificial-intelligence-ai-judicial-guidance/
For more information or assistance, please contact Dr Ian Gauci.