Artificial Intelligence in Victoria’s Courts and Tribunals
Project Status: Completed
Start Date:
Tabled in Parliament Date: 3 February 2026
Project

The Victorian Law Reform Commission has completed its report on artificial intelligence in Victoria’s courts and tribunals. The report was tabled in Parliament on 3 February 2026 and is available from the links below.
This is the first inquiry by an Australian law reform body into the use of AI in courts and tribunals.
The report contains 30 recommendations to ensure the safe use of AI in Victoria’s courts and tribunals.
People are increasingly using AI in courts and tribunals. Over a third of Victorian lawyers are using AI, as well as some experts and self-represented litigants. The use of AI by Victoria’s courts and VCAT is at an early stage but increasing, with some pilots underway.
AI can support more efficient court services and greater access to justice, but there are significant risks. There are concerns about the security and privacy of information used in AI tools. AI tools can also produce information that is biased or inaccurate. There is a growing number of cases in which inaccurate or hallucinated (made up) AI-generated content has been submitted to courts.
The Commission received 29 submissions and held 49 consultations with 52 individuals and organisations. We consulted with courts, lawyers, human rights organisations and specialist access to justice services as well as technology-focused organisations.
Given the rapidly changing nature of AI, the report recommends that Victoria’s courts adopt a principles-based regulatory approach focused on guidelines and education. Eight principles are recommended to guide the safe use of AI and to maintain public trust in courts and tribunals. To support implementation of the principles, guidelines are recommended for court users, judicial officers, and court and tribunal staff.
AI tools can support judges, but it is important that they are not used for judicial decision-making. People told us that AI should not be used for judicial decision-making because of risks to judicial independence and to confidence in the administration of justice. To support public trust, we recommend that judicial guidelines prohibit the use of AI tools for judicial decision-making.
The report also recommends governance processes including an AI assurance framework to support Victoria’s courts and tribunals to assess and monitor new AI uses.
Training and education for lawyers, judicial officers and the public are also recommended, to increase awareness of the AI guidelines and to promote the safe use of AI.
Want to know more? Listen to our podcast on AI and the courts.
Project Stage
- Terms of reference received
- Submissions and consultations
- Submissions closed
- Final Report
- Tabled in parliament
Publications
- 03/02/2026
- 17/10/2024
- 15/05/2024
