Executive summary

1. This report makes recommendations on principles and guidelines to support the safe use of artificial intelligence (AI) in Victoria’s courts and tribunals, so that:

public trust in courts and tribunals is not undermined

the fairness, integrity and accountability of the court system are maintained.

2. The development of AI and its use by courts, tribunals and lawyers is changing rapidly. The increasing use of AI in courts and tribunals raises opportunities and risks.

3. Responses to AI will need to be monitored and revised to respond to developments in technology, new AI uses, and the evolving regulatory context.

How is AI used in courts and tribunals?

4. AI will change how the justice system and legal services operate, as well as how people interact with courts and tribunals. This report considers potential use of AI in courts and tribunals by:

judicial officers, tribunal members and court staff

court users, including lawyers, self-represented litigants and witnesses (lay and expert).

5. This report outlines current and emerging uses of AI in courts and tribunals. AI use will vary across stages of proceedings, from pre-hearing, through to hearings and post-hearing.

6. AI is already used widely by lawyers. A recent census by the Victorian Legal Services Board and Commissioner found that 36.7 per cent of lawyers are using AI in legal practice.

7. The use of AI in Victoria’s courts and the Victorian Civil and Administrative Tribunal (VCAT) is at an early stage but is developing, with some initial pilots underway.

The use of AI in courts and tribunals must balance opportunities and risks

8. The accelerated development of AI has the potential to improve how courts and tribunals operate. But if AI is to be used safely in our justice system, there is an urgent need to increase awareness and understanding of the associated risks.

9. AI can help improve and increase provision of court services to support efficient, timely and cost-effective administration of justice. AI also has potential to innovate existing court and tribunal processes.

10. AI offers significant opportunities for greater access to justice, although care must be taken not to exacerbate existing barriers or inequalities.

11. There are many risks that require careful consideration, in particular:

AI systems can provide biased or inaccurate outputs.

It is difficult to understand how AI tools work because the technology is complex and often involves proprietary interests.

The use of AI raises significant privacy and data security concerns.

12. Risk will vary depending on who is using AI, how it is used and how well it is implemented and resourced. These risks need to be understood by courts and court users if AI is to be used safely in courts and tribunals.

External regulation will impact AI use in Victoria’s courts and tribunals

13. International and national regulation of AI is still developing. The experience of other jurisdictions can inform regulatory approaches for courts and tribunals.

14. Regulatory responses by the Australian Government could also influence approaches for Victoria’s courts and VCAT.

15. National regulation is important for setting consistent principles and standards for development and use of AI, which will have flow-on impacts for safe use of AI in courts and tribunals.

Does the law need to change to enable safe use of AI?

16. A broad range of existing laws is relevant to the safe use of AI in courts and tribunals. We focused on:

court rules and procedures

human rights law

privacy law

evidence law

administrative law

Legal Profession Uniform Law.

17. We heard that it is too early to consider legislative reform for courts and tribunals, given that the technology and its use in courts are still evolving. As a result, the feedback we received identified few gaps where legislative reform may be required. However, there are emerging areas that might require legislative reform over time.

18. Court rules and procedures will need to adapt as new uses of AI are introduced. Changes to rules and procedures will be specific to the type and use of AI. At this stage, anticipated changes can be managed within existing rule-making power.

19. The use of AI raises human rights considerations. To protect human rights, courts should adopt human rights impact assessments where possible as part of governance processes.

20. The use of AI raises significant privacy and information security risks. Victoria’s courts and VCAT should publicly state how they seek to act consistently with Victoria’s Information Privacy Principles to manage AI risks.

21. Evidence that is generated or processed by AI will become increasingly prevalent in courts and tribunals. This raises issues about the accuracy, reliability and transparency of evidence.

22. The Evidence Act 2008 (Vic) was considered flexible enough to manage the use of AI at this point. However, the suitability of evidence laws should be monitored, as future reform may be needed to address risks such as deepfakes or to assess the reliability of AI evidence.

23. Further monitoring of administrative law is also required to ensure equal access to judicial review, regardless of whether a decision was made by a person or a machine.

Principles to guide safe use of AI

24. A principles-based approach to AI regulation was supported by many stakeholders because it can be applied flexibly to manage risks.

25. The Commission recommends eight principles to guide safe use of AI by courts and tribunals. These principles provide a foundation for public trust in the use of AI in courts and tribunals:

1) impartiality and fairness

2) accountability and independence

3) transparency and open justice

4) contestability and procedural fairness

5) privacy and data security

6) access to justice

7) efficiency and effectiveness

8) human oversight and monitoring.

Guidelines to support safe use of AI

26. To be effective, principles should be embedded in court guidelines that are educative and help to apply the principles in practice.

27. There is an opportunity to update and expand existing guidelines for court users to clarify obligations and provide principles-based guidance about the risks and limitations of using AI for:

submissions

affidavits, character references and witness statements

expert reports.

28. Guidelines for judicial officers and tribunal members should be developed and be publicly available to promote public trust.

29. Judicial guidelines could include principles-based guidance and educative information about AI.

30. While AI provides opportunities to support judicial officers, the use of AI for judicial decision-making should be prohibited.

Promoting coordinated and consistent approaches to AI

31. There should be a coordinated and consistent approach to guidelines for court users across Victorian court jurisdictions. This will provide greater clarity and fairness and reduce complexity for court users.

32. It will also be important to work towards national consistency on responses to AI.

33. Victoria’s courts and VCAT operate independently of each other, with discrete legislative powers and their own internal governance. While recognising this independence, there is an opportunity to improve coordination of decision making about how AI is used and implemented by courts and VCAT.

34. The Courts Council should actively coordinate responses to AI and pursue opportunities for consistency across courts where possible.

Clarifying roles and responsibilities for AI decisions

35. Implementing AI in courts and tribunals requires effective governance to support its safe use and public trust.

36. A coordinated and consistent approach between courts will enable more strategic consideration of risks and opportunities, shared learnings, more effective investment in AI and greater certainty for court users.

37. A cross-jurisdictional multidisciplinary technology and innovation committee could improve coordination and enable appropriate expertise and judicial representation in decision making on AI uses. The committee could report to Courts Council.

38. Courts could establish lead judicial technology and innovation roles to support AI development and innovation, as well as represent jurisdictional perspectives on a multidisciplinary committee.

39. Development of an AI policy for Court Services Victoria, and courts and VCAT, would provide clarity about roles and responsibilities for decisions about AI in courts and VCAT. Key elements could include:

information security and data privacy processes

principled guidance for use of AI by staff

disclosure and consultation processes.

Engaging the community about AI use in courts and VCAT

40. To promote public trust, AI tools used by courts and VCAT should be publicly disclosed in an AI inventory. This should occur at an organisational level.

41. Consultation with people likely to be affected by AI tools is also important for public trust. Consultation should occur prior to implementation and to inform ongoing improvements to AI tools.

42. Courts and VCAT should exercise caution when considering use of AI for administrative decision making. People whose rights are significantly affected by a decision made or materially influenced by AI should be notified, and there should be human oversight of these decisions.

Assessing the suitability of new AI uses in courts and VCAT

43. Courts and VCAT should be guided by an AI assurance framework to consider the risks and suitability of new AI applications in Victoria’s courts and tribunals.

44. Court Services Victoria should develop a court-specific assurance framework based on the pilot VPS Assurance Framework. This could align with the Commission’s principles.

Supporting safe use of AI through greater awareness and education

45. Many of our recommendations can only be effective if supported by ongoing education and training delivered by courts and professional bodies.

46. Education for judicial officers and tribunal members should continue to be developed, with training at induction, when new AI tools are implemented, and on an ongoing basis.

47. The use of AI does not fundamentally alter obligations under the Legal Profession Uniform Law. But lawyers need training and guidance to better understand how the risks of AI intersect with existing professional obligations, as well as greater awareness of court-issued guidelines.

48. The Law Council of Australia should update the Commentary to the Australian Solicitors’ Conduct Rules in relation to AI, to provide practical advice about AI risks and professional obligations.

49. Public education is also necessary to promote understanding of court- and tribunal-issued guidelines and to support safe and proper use of AI in these settings.
