AI Insider: Judging AI use in our courts and tribunals

20 Apr 2026
[Image: A robot judge in a courtroom.]

The question of AI in law is no longer whether we can use it, but whether we should.

Artificial intelligence (AI) is already at work in a number of courts and tribunals around the world, quietly sorting documents, generating transcripts and powering legal research tools.

As AI systems become more capable, a more difficult question comes into focus: should we trust them with the work of judging?

Before AI can take on the role of decision-maker in a court of law, three things need to be considered – outputs, outcomes and processes, said Professor Lyria Bennett Moses of the School of Law, Society & Criminology.

Outputs

Outputs refer to the content and decisions produced by AI. The first issue with AI outputs is basic quality: concerns about accuracy and style.

Even outputs that look good at face value can lock the legal system into existing precedent and undermine creative legal development. Lyria uses the Mabo case as an example of how human understanding and deep thinking can change the law in fundamental and important ways.

“What the human judges did in the Mabo case was rethink how Australian law operates in the context of colonial settlement,” Lyria said. “It was an act of some measure of creativity … with hindsight, we can see that the final decision to rethink the law took us to a much better place than if we simply followed earlier precedent.”

An AI system could have produced a decision entirely in accordance with the law as it was then written, but human judgment was able to move Australian law in line with better understandings of the nature of pre-settlement Indigenous societies – which brings us to the next consideration.

Outcomes

Even if AI outputs are of similar quality to human judgments when assessed independently, their outcomes need to be considered. Outcomes here refer to how using AI in a courtroom may affect factors such as people, institutions, trust and security.

The key issue is that people could lose trust in the legal system if its decisions are generated by AI.

“Using AI to make decisions could undermine confidence in the entire judicial system,” Lyria said. “With a loss of trust, regardless of the output quality, the outcome would be bad.”

Other issues with AI outcomes are more concrete – outcomes could be swayed by bias arising from inferences drawn from historical data, or by the misaligned goals of the companies that produce the technology. While each individual decision may be plausible, collectively they may be skewed.

Judges could also lose skills in exercising judgment if they rely too heavily on AI, reducing their ability to sustain high-quality judgments over time. This can occur even if AI produces only parts of a judgment, or a draft that a judge then edits.

“If the expertise of human judges is lost, people may start simply accepting the output of computers, becoming passive and not questioning whether what they are being told is right or wrong,” Lyria said.

Processes

Finally, we must consider the importance of human judges within the legal process. Even if AI’s outputs and outcomes are good, by using it, we risk breaking a fundamental principle regarding how legal power should be exercised.

“Fundamentally, the rule of law is the idea that the judge who is making a decision could, in principle, be in the dock as well,” Lyria said. “An AI system can never really be in that position – you can pass an AI Act and say it’s ‘under the law’, but it’s meaningless to send an AI system to jail, so it’s not subject to law in the same sense as a human judge.

“Even if AI produced perfect-looking decisions and everyone was persuaded it was a good idea, you’d still have a process problem. You’d be doing something contrary to fundamental rule of law values, which is something we don’t want to lose.”

Helping judges judge AI use

With outputs, outcomes and processes in mind, how should AI be used in the legal system? The best approach, Lyria said, is to offer a guide framed around questions rather than to issue rules.

Lyria is part of a team led by Professor Michael Legg that has produced a guide for judges, tribunal members and court administrators called AI Decision Making and the Courts.

The guide sets out the key challenges and opportunities that AI and automated decision-making present for courts and tribunals. Drawing on legislation, case law and rules in a range of jurisdictions, it outlines some of the ways AI may be incorporated into domestic courtrooms and analyses the associated benefits and risks in light of judicial values.

Overarching questions the guide asks judges, tribunal members and court administrators to consider before using AI are:

  • Why is AI being used? What problem does it solve?
  • Is the use of AI authorised in the context in which it is deployed?
  • In what contexts is AI being used, and is its use in those contexts appropriate? Does the context involve high stakes, vulnerable people, novel situations or high levels of emotion?
  • How is AI being used? How can system requirements better fulfil its purposes and meet the needs of courts and tribunals, including in relation to core judicial values? How will the system be checked, tested and evaluated to ensure it meets those requirements?
  • Who is consulted about the deployment of AI systems? Are all stakeholders, including users and litigants, included in decision-making about whether and how AI will be used?
  • Will the use of AI impact public confidence in the judiciary? Will the use of AI in courtrooms and tribunals be accepted by the public?

When these questions have been considered, it is up to judges, tribunal members and administrators to decide – in what contexts should we be using AI in our courts and tribunals?
