New research finds Artificial Intelligence can improve access to justice, but could come into conflict with important legal values or cause harm
A joint research project between the Australian Institute for Judicial Administration (AIJA), UNSW Law & Justice, UNSW Allens Hub for Technology, Law, and Innovation, and the Law Society of NSW’s Future of Law and Innovation in the Profession (FLIP Stream) has identified some of the key challenges arising from the growing presence of AI in court systems worldwide.
Small contract disputes in Estonia are being automated as part of a project.
The Estonian Ministry of Justice wants to clear a backlog of cases by using artificial intelligence judges, freeing human judges to address more difficult disputes. Under the project, an AI system would adjudicate minor disputes involving claims of up to 7,000 euros.
The two parties would provide documents and other relevant information, and the AI system would generate a decision that could be appealed to a human judge.
Courts around the world are increasingly using AI for routine tasks, and this implementation is having a direct impact on the parties in each case.
The project’s report, “AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators”, identified examples of the use of AI in Australia and overseas, ranging from computer-based dispute resolution software to computer code based directly on rules-driven logic, and ‘AI judges’ used to help clear backlogs of cases.
Director of the UNSW Allens Hub, Professor Lyria Bennett Moses, says that despite some hesitancy, AI is a growing part of court processes.
“Courts and tribunals internationally are increasingly embracing artificial intelligence as a concept and a practice.”
“There can be both immense benefits and serious concerns about whether artificial intelligence is compatible with fundamental values,” says Professor Lyria Bennett Moses.
“AI in courts extends from administrative matters, such as automated e-filing, to the use of data-driven inferences about particular defendants in the context of sentencing.”
“Judges, tribunal members and court administrators need to understand the technologies sufficiently well to be in a position to ask the right questions about the use of AI systems,” she said.
The report also stated that the COMPAS tool, used in the United States to profile prisoners, has raised concerns about the compatibility of AI with legal values.
According to the study, COMPAS uses 137 questions to evaluate an individual’s likelihood of reoffending. These include factual questions such as ‘how many times has this person been arrested as an adult or a juvenile?’, as well as more ambiguous ones such as ‘do you feel discouraged at times?’.
The COMPAS tool’s outputs can have very serious consequences: judges use them to decide whether a defendant can be released on bail or should be eligible for parole.
In 2013, Paul Zilly was convicted of stealing a lawnmower. The prosecution and Mr Zilly’s lawyers agreed to a plea deal of one year in a county jail followed by a supervision order.
However, on the basis of a COMPAS score indicating a high risk of reoffending, the judge rejected the plea deal and sentenced Mr Zilly to two years in jail.
Professor Bennett Moses questioned whether similar tools should ever be acceptable in an Australian context.
“Everyone has a right to be treated impartially,” she said. “The use of some tools is in conflict with important legal values.”
Professor Bennett Moses’ advice was to tread carefully and to seek to understand how these tools work before drawing conclusions about what the law should do about them.
“We need people to ask the right questions, and help society answer them,” she said.
While raising questions both domestically and internationally, the report also noted some positive applications of AI in the courtroom.