9. Governance to support AI innovation

Overview

The growth of AI use in courts and tribunals requires effective governance to support safe use and public trust.

There are opportunities to improve AI governance in Victoria’s courts and VCAT to reduce risks and embrace opportunities for innovation.

This chapter recommends AI governance components, which together may enhance the safe use of AI in Victoria’s courts and VCAT. These include:

AI governance bodies with multidisciplinary and multijurisdictional representation and documented roles and responsibilities to facilitate coordination and consistency

an AI policy documenting principled guidelines for court and tribunal staff on the safe use of AI, as well as disclosure and consultation processes

an AI assurance framework to assess risks and the suitability of potential AI uses.

Why do courts and tribunals need AI governance?

9.1Our terms of reference ask us how to guide the safe use of AI in Victoria’s courts and tribunals while maintaining public trust and ensuring integrity and fairness in the court system.

9.2There are many definitions of governance. One understanding of governance relates to putting constraints around the exercise of power.[1] We are interested in governance as it relates to courts, VCAT and Court Services Victoria putting controls around decision making about AI use.

9.3AI governance comprises several components. These include an ‘organisational structure, policies, processes, regulation, roles, responsibilities and risk management framework’.[2]

9.4Appropriate governance can help safeguard the implementation and active management of AI across its lifecycle in Victoria’s courts and VCAT.

9.5As discussed in Chapter 3, the use of AI raises new risks and challenges. In some of Victoria’s courts and in VCAT, governance processes have been adapted to respond to these risks and opportunities.

9.6Although there is no one-size-fits-all AI approach for Victoria’s courts, a coherent governance approach can support consistent understanding and enable innovative approaches to be shared.

9.7We heard about the need for governance to ensure the safe use of AI in courts and tribunals and address key risks associated with AI.[3] We heard that the complexity and evolving nature of AI requires a governance approach that facilitates continual monitoring and review. This can ensure AI use adapts to evolving legal, societal and technological contexts.[4]

9.8We were told that appropriate governance was essential to ensure the use of AI in courts and tribunals did not negatively impact human rights.[5] We also heard governance is critical to mitigate security and data privacy concerns related to AI use.[6]

9.9From our consultations, other benefits of implementing AI governance include increased:

transparency and public trust in the administration of justice[7]

coordination across jurisdictions and increased sharing of resources and information, as well as reduced risk of duplicated effort and inconsistency[8]

education and awareness across the organisation on the risks, benefits and safe use of AI[9]

capacity to comply with existing legislative obligations.[10]

9.10In this chapter, we discuss governance components which together could help support the safe use of AI in Victoria’s courts and VCAT. This involves updating governance bodies, allocating roles and responsibilities, implementing an AI policy and an AI assurance framework.

9.11Guidelines for the use of AI by judicial officers are recommended in Chapter 8. In this chapter, the Commission recommends that guidelines for court and tribunal staff should be included in an AI policy for Victoria’s courts and VCAT.

Governance arrangements in Victoria’s courts and VCAT

9.12Each court jurisdiction and VCAT operate independently of each other. The head of each jurisdiction (for example, the Chief Justice) is given legislative powers relating to the ‘business of the Court’.[11] Each jurisdiction:

has its own internal governance structure

has a Chief Executive Officer who manages its staff and administration services (appointed by Courts Council)

develops its own strategic plan to reflect its own priorities.[12]

9.13Some of Victoria’s courts and VCAT have adapted their internal governance structures or created new governance bodies to consider risks, opportunities and potential AI use cases (as shown in Table 18). For this report, an ‘AI use case’ refers to the use of AI that is designed, developed, deployed or procured to support official work of Victoria’s courts or VCAT. This may be either standalone or part of a wider solution.

Table 18: AI governance bodies in Victoria’s courts and VCAT

Jurisdiction

Governance body

Supreme Court

The Digital Strategy Steering Committee has a role in considering AI risks, opportunities and use cases within the court.

County Court

The Technology Advisory Committee advises the Chief Judge and is focused on the judicial experience of technology in the court process. This Committee is responsible for considering AI use cases, for example it reviewed the court’s speech-to-text AI pilot.[13]

Magistrates’ Court

The Magistrates’ Court is actively engaged on the CSV AI Working Group (see paragraph [9.24]). Existing internal governance and administrative forums also provide a role in developing Magistrates’ Court jurisdiction policies and actions before they are referred to the judiciary.[14]

Coroners Court

An informal AI working group consisting of lawyers and coroners has been set up. This may become a formal governance body to monitor and review AI issues and operate similarly to the Court’s existing Research Committee. The Coroners Court’s Executive Team and Risk Committee currently provide oversight and governance of the exploration of AI use cases.[15]

VCAT

An AI Committee was established in early 2024 and comprises VCAT members and strategic and operational staff. It identifies potential AI use cases and assesses benefits and limitations. It also makes recommendations for the effective, ethical and lawful use of AI and suggests policy changes and guidelines.[16]

Court Services Victoria governance

9.14Court Services Victoria (CSV) is an independent statutory body.[17] It provides and coordinates independent administrative services and facilities to Victoria’s Courts Group. The Courts Group is made up of the six court jurisdictions, the Judicial College of Victoria and the Judicial Commission of Victoria.

9.15CSV provides administrative support to the jurisdictions. One of CSV’s functions is to provide information and communication technology services. CSV’s Digital Group supports jurisdictions with general technology needs and in implementing and updating technology.[18]

9.16CSV is required to comply with the Victorian Government Risk Management Framework, which outlines minimum risk management requirements.[19] CSV is responsible for actively managing risks related to its own corporate services and for coordinating and managing an ‘organisational risk management plan for risks that affect the whole of Courts Group’.[20]

9.17The arrangement Victoria’s courts have with CSV is unique. In most other Australian jurisdictions, administrative court services sit within government departments.[21] Until 2014, the then Victorian Department of Justice delivered court administrative and technology services to the courts.[22]

9.18We heard that courts were interested in applying their own analysis when considering potential AI tools rather than simply implementing tools adopted by other government departments.[23] CSV can provide an independent courts-specific assessment of proposed AI tools.

9.19There are several bodies within CSV that play a role in supporting the safe use of AI in administrative arrangements.

Courts Council

9.20Courts Council is the governing body of CSV. It directs CSV’s strategy, governance and risk management. It is chaired by the Chief Justice and consists of the six heads of jurisdiction and has non-judicial independent representation.[24] The Council has a role in ‘the implementation of AI systems across the court system’.[25]

Audit and Risk Committee

9.21The Audit and Risk Committee is a subcommittee of Courts Council. It consists of ‘Council representatives, members of the judiciary and an independent external specialist with expertise in ICT’.[26]

9.22This Committee supports Council’s capacity for informed decision-making on AI. Part of its role is to maintain risk management and accountability.

9.23The Committee is responsible for reviewing ‘Organisational Risk Profiles’. CSV intends to establish an Organisational Risk Profile for AI.[27] This would mean that causes and controls for AI-specific risks are documented and monitored. The Committee also monitors the Courts Group Digital Risk Register, which aims to include ‘shared AI related risks across the Courts Group’.[28]

AI Working Group

9.24In response to the challenges and opportunities of AI, CSV established an AI Working Group.[29]

9.25The AI Working Group comprises members from each jurisdiction and CSV staff. Its 17 members come primarily from business transformation or information technology/digital services functions, although the group also includes a tribunal member of VCAT’s Planning and Environment list.[30]

9.26CSV describes the AI Working Group as the ‘central coordination point for AI initiatives across CSV’.[31] Some of its responsibilities are:

managing and monitoring AI risks on the Courts Group Digital Risk Register

developing frameworks and principles, evaluating AI tools and assessing information security requirements

monitoring pilot projects, ensuring adherence to ethical and regulatory requirements

facilitating knowledge-sharing and providing guidance on best practice

developing guidelines (including for lawyers) and practice notes.

9.27CSV also has working groups focused on information security and data management.[32]

AI Proof of Concept Lab

9.28CSV is developing an AI Proof of Concept Lab to test potential AI use cases. CSV stated that the lab will be a controlled environment established separately from its existing network and will only use test data to prove AI use cases.[33]

9.29CSV intends to pilot AI use cases on a small scale in closed environments before scaling them up and submitting them to user testing or applying them to development environments.[34]

9.30The use of secure environments to develop AI tools is sometimes referred to as a ‘development sandbox’.[35] Its aim is to provide a secure environment to test AI while minimising privacy and data security risks. Development sandboxes can involve testing and developing AI tools with:

clean data sources (such as anonymised or pseudonymised data to ensure data protection and confidentiality)

technical protections (such as a one-way data gate so that data can go in but not out of the sandbox)

controls on access (ensuring suitable access controls and audit logs, and ensuring that if third parties have access to the sandbox, they are subject to appropriate data protection and confidentiality clauses).[36]

9.31The UN Special Rapporteur on the independence of judges and lawyers has recommended that judicial systems institute ‘sandbox environments to pilot AI programs and experiment with appropriate regulations’.[37]

9.32Peak national and international AI standards emphasise the importance of rigorous pre- and post-deployment testing to identify errors, risks and limitations of AI tools and to test and monitor whether the tool is serving its intended purpose.[38] These standards also encourage the development of clearly defined metrics and criteria to monitor the performance of the tool.[39]

9.33Australia’s National Framework for the Assurance of Artificial Intelligence in Government states that small-scale pilots should be used to evaluate AI tools, to identify and mitigate problems before tools are scaled up.[40] However, the framework notes that a balance is needed: testing tools in highly controlled environments may not accurately reflect the full risks and opportunities, while testing in less controlled environments may pose governance challenges.

9.34Internationally, courts have highlighted the importance of piloting AI tools. The Office of the Commissioner for Federal Judicial Affairs Canada encourages courts to trial multiple tools simultaneously and under different conditions to determine which will best suit their needs. It also encourages courts to pilot tools to troubleshoot issues before launching them.[41] In the United States (US), the National Center for State Courts has made an AI sandbox available to court staff to allow them to ‘practice with GenAI in an environment where your data will not be used to train commercial models.’[42]

9.35From our consultations we heard that several of Victoria’s courts and VCAT have implemented AI pilots.[43] Victoria’s courts, VCAT and CSV should use secure environments to pilot and test tools before deployment and implement ongoing testing and monitoring to help ensure AI tools continue to operate correctly and serve their intended purpose.

International AI governance structures in courts and tribunals

9.36Other jurisdictions have implemented a range of governance structures to support the safe use of AI within their judicial systems. These may be helpful to Victoria’s courts and VCAT.

9.37Many of these approaches bring judicial and administrative sides of the courts together to form a coordinated response to AI usage and implementation. Some examples of international AI governance features are considered in Table 19.

Table 19: International court and tribunal AI governance features

Jurisdiction

AI governance features

New Zealand

Digital strategy: In New Zealand the executive arm of government, via the Ministry of Justice, delivers IT services to courts and tribunals. The Courts of New Zealand developed a digital strategy which details how the judiciary and the New Zealand Ministry of Justice work together on technology.[44] The strategy contains the judiciary’s objectives and guiding principles for the use of technology within New Zealand courts and tribunals.[45] It identifies investigating and, so far as practicable, implementing AI as part of its longer-term aspirations.[46]

Technology and innovation judicial role and function: Justice Goddard led the development of the digital strategy and chairs the Information and Digital Technology Committee.[47] The role includes dedicated time to support implementation of the strategy. The Chair of this committee is responsible for reporting to the Chief Justice and liaising with the Ministry of Justice and court jurisdictions to consider digital use cases.

Multijurisdictional and multidisciplinary body: An Artificial Intelligence Advisory Group was commissioned by the New Zealand Chief Justice to develop AI guidelines.[48] The group is multijurisdictional and multidisciplinary with ‘representatives from the Senior Courts and District Court, judicial support staff, court registries, and the Ministry of Justice’.[49] This group also works with the Heads of Bench Committee and respective tribunal chairs.

Canada

Multijurisdictional and multidisciplinary body: The Office of the Commissioner for Federal Judicial Affairs established an Action Committee on Modernizing Court Operations which combines members of the judiciary with the executive. It is co-chaired by the Chief Justice of Canada and the Minister of Justice and Attorney-General of Canada.[50] It is supported by a technical working group and produces guidelines and principles for the planning and implementation of AI projects. Canada also has the Canadian Judicial Council, which has released its own AI guidelines.[51]

Singapore

Technology and innovation judicial role and function: In Singapore one judge is allocated a senior role in charge of ‘Transformation and Innovation in the Judiciary’.[52] The judge sits on a range of business operations committees and reports regularly to the Chief Justice to ensure the Chief Justice is engaged in AI decision making. There is also a Chief Transformation and Innovation Officer.[53] The approach focuses not just on AI but also on judicial innovation. The Chief Justice and other jurisdictional leads can make recommendations to the allocated judge who can then raise them with the administrative arm. Use cases are then developed and budgeted. Where proposals seem to be viable, the Chief Transformation and Innovation Officer may bring these to the attention of the Transformation and Innovation Judge.

England and Wales

AI Action Plan for Justice: Released in July 2025, the plan sets out the Ministry of Justice’s strategic priorities for AI adoption over three years across courts, tribunals, prisons, probation and supporting services.[54]

Multijurisdictional and multidisciplinary body: A cross-jurisdictional Judicial AI Advisory Group was established to assist the judiciary on the use of AI.[55] This advisory group helped to develop guidance for the judiciary on the use of AI.[56] In 2024 the Ministry of Justice established the Justice AI Unit, which consists of ‘an interdisciplinary team of AI specialists, designers, technologists, and operational experts working to embed responsible AI across the justice system.’[57]

Opportunities to strengthen AI governance bodies, roles and responsibilities

9.38While each court jurisdiction and VCAT operate independently, there are opportunities for jurisdictions to collaborate with Courts Council and CSV to ensure the safe use of AI.

9.39The Victorian Auditor-General’s Office has described CSV’s governance structure as complex. One reason for this is because: ‘While each jurisdiction is independent, they work together and depend on each other.’[58]

9.40There are several opportunities to improve the current approach through:

coordination and consistency

transparency and accountable decision making

diversity of skills and expertise

judicial representation.

Coordination and consistency

9.41We heard that there is value in ensuring a coordinated and consistent response to AI. If courts, VCAT and CSV work in isolation in their approach to AI, this could lead to inconsistency and duplication of resources. The Law Institute of Victoria noted:

The strategy for adopting AI technologies in Victoria’s courts and tribunals should be developed with a view not only to ensuring consistency of regulation with other Australian jurisdictions, but also to avoiding duplication of investment and effort within Victoria. This will be even more important in the current fiscal environment in Victoria, as we often see across the courts and tribunals, each jurisdiction developing their own technology solution, for example case management systems, rather than looking at how best to leverage technologies across all jurisdictions.[59]

9.42A lack of coordination reduces opportunities to collaborate, pool resources, pilot and implement innovative approaches and share learnings from pilots. Developing and implementing AI solutions in isolation could also lead to variations in data and privacy security processes.

9.43While different jurisdictions may have differing needs, there will be some AI tools that are applicable across jurisdictions. In Chapter 2 we discussed how the County Court, Magistrates’ Court and VCAT have all undertaken their own AI transcription pilots.

9.44It was recognised that CSV could play a role in promoting consistency across jurisdictions.[60] The County Court said:

To achieve consistency, the most practical approach may be a consultative approach between CSV and the relevant courts and tribunal that respects the independence of the judiciary, while aiming to provide practical assistance to ensure courts are safely utilising AI technology.[61]

9.45The development of the AI Working Group, which has representation from each jurisdiction and has been endorsed by Courts Council, supports a coordinated approach. However, it is unclear how this group will effectively ensure consistency across courts. It is also not clear how AI governance bodies that have been developed across Courts Group and within CSV interact with each other and how roles and responsibilities are allocated.

Transparency and accountability of decision making

9.46It is important that the courts, VCAT and CSV are transparent about who has authority for decisions. AI governance requires a clear chain of responsibility for decision making and accountability.[62]

9.47Many stakeholders supported courts and tribunals adopting transparent measures to communicate the approval and use of AI. The Office of the Victorian Information Commissioner stated: ‘The community expects government organisations to be transparent and accountable, and to publicly report on their use of AI.’[63]

9.48The Supreme Court told us that ‘CSV has responsibility for procurement and maintenance of IT infrastructure.’[64]

9.49While CSV has developed the AI Working Group, this operates at a low level of seniority and has a limited decision-making function. It is not clear how the AI Working Group fits into existing CSV and individual jurisdictional decision-making processes. The terms of reference for the AI Working Group state broadly that the group reports to jurisdictional and CSV executives, as well as jurisdictional Digital/Information Technology Committees.

Diversity of skills and expertise

9.50AI governance requires governance bodies to represent multidisciplinary capabilities and expertise.[65] Because AI can present risks that combine technical, legal and ethical considerations, it is important that diverse perspectives are considered when making decisions about AI.

9.51In many AI governance models, there is a focus on ethical expertise to support decision making. This could include consideration of the unique risks that impact on the judicial function.

9.52Technical and business operations experts might consider some process concerns (such as bias, accuracy, privacy and explainability). But they may lack a broad understanding of human-centred concerns (autonomy, fairness, wellbeing, truth and democratic values). They may also view risks such as bias and explainability from a technical perspective rather than in terms of procedural fairness. AI developments in courts require consideration of AI impacts on individuals, institutions and society, particularly where AI may impact on trust in courts.

9.53The Office of the Victorian Information Commissioner and the Judicial College of Victoria identified the need for multiple skill sets to be brought together when considering decisions about AI.[66] The need for multidisciplinary skills to be reflected in AI governance and decision making was also supported by representatives of Microsoft.[67] The Office of the Victorian Information Commissioner warned:

The accountability and responsibility of implementing, approving or managing AI systems should not fall solely on the IT department or equivalent. Given the breadth and scale of AI applications across the whole organisation, it is advisable to nominate the head of the agency as the responsible and accountable officer for the adoption of AI, with a whole-of-organisation approach taken to identifying and managing the risks involved.[68]

9.54This was further supported by representatives of the Judicial College of Victoria, who stated that AI governance requires an understanding of judges’ needs and court processes, as well as AI expertise, and that ‘normal governance frameworks within courts are unlikely to be well-equipped to deal with AI developments’.[69]

9.55Currently, the AI Working Group consists of members largely from transformation or information technology/digital services functions. Effective AI governance should include representation from multiple skill sets, such as technology specialists, legal, policy and subject domain specialists.

Judicial representation

9.56It is important that judicial officers are involved in decisions about the use of AI in courts and tribunals.

9.57The UN Special Rapporteur on the independence of judges and lawyers has stated that judiciaries should be confronting issues around the use of AI in judicial systems as a matter of priority.[70] They recommended that to preserve judicial independence, ‘decisions about whether to use AI in judicial systems, and which tools to use, should be made by judges’.[71]

9.58In considering how AI may shape the future of the justice system, Professor Tania Sourdin argues that it is crucial for judges to be involved in considering how technological advancements may be adopted in courts.[72] Sourdin states that for judges to be able to provide input on technological advancements:

Judges must not only acquire foundational knowledge and understandings about AI, but they must also consider the implications of its use on both the justice system and the judiciary. As such, judges must have strategies in place to deal with the ethical and other issues raised by Judge AI.[73]

9.59CSV’s role has been legislated to ensure that business operations relating to technology infrastructure are supported. But current governance arrangements do not support the use of AI that is specifically directed at the judiciary or external court users.

9.60Some information and technology committees established within different court jurisdictions have been focused on the ‘judicial experience of technology in the court process’.[74] However, CSV’s multijurisdictional AI Working Group does not have broad judicial officer representation. This may be appropriate given the current role of the group. But as a result, it may be unable to reflect the views of the judiciary in respect to the uses, limitations and opportunities for AI in courts and tribunals.

9.61Creating a multijurisdictional body with judicial officer representation will help provide an opportunity to support the needs of the judiciary in relation to the use of AI in courts and tribunals.

Reform options for AI governance bodies across Victoria’s courts and VCAT

9.62The experience of other jurisdictions is useful to inform how AI governance bodies and the allocation of roles and responsibilities could be improved in Victoria’s courts and VCAT.

9.63Options for reform include establishing:

a technology and innovation committee

technology and innovation judicial roles and functions.

Establish a technology and innovation committee

9.64Jurisdictions such as Canada and New Zealand have implemented multidisciplinary and multijurisdictional committees to assess technology and AI use in courts and tribunals (as shown in Table 19). These groups also have a role in developing or reviewing AI guidelines.

9.65In the US, the Conference of State Court Administrators recommends that courts establish a taskforce with diverse membership to assist in developing a responsive and flexible institutional framework for the use of GenAI in the court.[75] It recommends that such a taskforce should consist of court leaders and be informed by people outside the legal system, such as university and industry professionals.

9.66In Australia, the Federal Circuit and Family Court has established an internal AI committee.[76] This committee consists of judicial members, a technical member (the court’s head of digital), representatives from the Chief Justice’s office and a senior judicial registrar.

9.67There is currently no specific sub-committee of Courts Council responsible for technology and AI related initiatives. However, CSV is considering establishing a technology committee which would have a focus on AI.[77]

9.68CSV previously had an Information Technology Portfolio Committee, which was a sub-committee reporting to Courts Council.[78] Its responsibilities included advising Courts Council on the development and implementation of ‘court facilities and technology related initiatives’.[79] It was merged into the Strategic and Innovative Projects Committee in 2019. The Strategic and Innovative Projects Committee, chaired by the Chief Justice, was multidisciplinary with judicial representation and contained experts from outside of CSV.[80] However, it was dissolved in April 2024.[81] CSV advised that the committee was disbanded following the COVID-19 pandemic because of financial constraints.[82]

9.69CSV anticipates that re-implementing a technology committee will support coordination and consistency across the jurisdictions. The committee would consider procurement matters, potential use cases, and ethical, financial and contractual issues.[83]

9.70Establishing a technology and innovation committee would address current gaps in the governance for AI if it were set up to be:

multijurisdictional, with representation from each of the jurisdictions

multidisciplinary, with members who have appropriate expertise and a foundational understanding of AI

inclusive of judicial officer representation.

9.71It would be useful to consider bringing in external technical expertise as needed. This would help the committee keep up to date on evolving technology risks and opportunities.

9.72It will also be relevant for this committee to be aware of the experiences and concerns of court users. Below (from paragraph [9.134]) we discuss how consultation should be considered in the design and development of AI tools for Victoria’s courts and VCAT. Outcomes of consultations should inform the committee’s decision making on AI use cases.

9.73It will be important for this committee to clearly document accountability and responsibility for decision making. It will also be critical to ensure that there are resources available to support AI governance with appropriate secretariat support.

9.74To meet community expectations for transparency, one option to clearly document responsibilities would be for CSV to make the terms of reference of the committee publicly available. This approach is adopted in Canada, with the terms of reference for the Action Committee on Modernizing Court Operations made available online.[84]

9.75It should be clear in the terms of reference that the committee would be responsible for:

reporting on AI risks

making recommendations on AI procurement, potential use cases, ethical and financial issues to Courts Council.

9.76The existing AI Working Group could continue its work as an information sharing forum and report up to the committee.

9.77Because the head of each jurisdiction has legislated responsibilities (as noted above, paragraph [9.12]) for the business of their court, they are responsible for implementing recommendations made by Courts Council in their jurisdiction. This reflects that there are differences in how each of the jurisdictions operates and some AI use cases may not be suitable for every jurisdiction.

9.78Even though each jurisdiction has its own legislated responsibilities, this does not negate the benefits and importance of coordination (as discussed in paragraph [9.41]).

9.79The committee should report to the Courts Council, which should actively coordinate responses and identify opportunities for consistency and alignment in Victoria’s courts and VCAT where possible.

Establish technology and innovation judicial roles

9.80To ensure judicial perspectives are incorporated into Victoria’s courts and VCAT responses to AI, lead judicial technology and innovation roles could be created. Sourdin argues there is a ‘need to appoint judges with backgrounds that include sophisticated understandings of new technologies and the time and the ability to design systems that are responsive to judicial and user needs’.[85] Sourdin states this is necessary to ensure judges can adequately participate in the challenges and opportunities raised by technological advancements and to prevent an overreliance on private technological companies or on the executive arm of government.

9.81Some courts have established specialised technology and innovation judicial leads who work with the administrative arm of court services or a government unit that provides technological support to courts. As shown in Table 19, Singapore’s courts have appointed a judge in charge of ‘Transformation and Innovation’. New Zealand also created a specialised role to consider digital technology issues.

9.82In Victoria’s courts and VCAT, lead technology and innovation judicial roles could be created by the head of each jurisdiction appointing a dedicated judicial officer to lead the implementation of AI.

9.83The proposed technology committee requires judicial representation. This could be achieved by appointing the judicial leads as committee members. This would support a cohesive approach, as each judicial lead could coordinate feedback from their jurisdiction and ensure it is factored into a courts-wide approach to AI.

9.84As noted above (in Table 18) several of Victoria’s courts and VCAT already have technology committees in place. These forums will remain important to coordinate feedback within each jurisdiction. It would be useful for judicial leads to bring their jurisdictional perspectives together at the multijurisdictional forum. This can enable a coordinated and, where possible, consistent response to AI usage and implementation.

9.85The judicial responsibilities of the technology and innovation judicial leads would need to be adjusted to ensure they have sufficient time out of court to fulfil the functions of the role.

9.86Other international jurisdictions have developed digital strategies to guide decision making and set direction on the adoption of technology within courts and tribunals.[86] A digital strategy can help ensure that AI use cases are developed strategically, with future court needs in mind.

9.87The development of a Victorian courts digital strategy could be the focus of the technology committee, like the development of New Zealand’s Digital Strategy for Courts and Tribunals.[87]

Recommendations

16.A technology and innovation committee should be established by the Courts Council to support ongoing governance of AI across Victoria’s courts and VCAT.

17.Coordination and consistency in AI governance should be promoted by the Courts Council across Victoria’s courts and VCAT.

18.A technology and innovation judicial officer or VCAT member should be appointed by the head of each of Victoria’s court jurisdictions and VCAT to support AI development and innovation.

Developing an AI policy for courts, tribunals and Court Services Victoria

9.88While a strategy can support ongoing developments, an AI policy can set out accepted uses, roles, responsibilities and obligations in relation to the use of AI.

9.89The Commission is not aware of CSV or any of Victoria’s courts or VCAT having implemented an AI policy. CSV has implemented some aspects of the Administrative Guideline for the safe and responsible use of Generative Artificial Intelligence in the Victorian Public Sector issued by the Victorian Government.[88]

9.90This guideline is based on Australia’s AI Ethics Principles and sets out minimum requirements for the use of GenAI by Victorian public sector personnel. Among these requirements: agency-approved tools are to be used ahead of publicly available GenAI tools; personnel may only input publicly available information into GenAI tools that have not been approved; and personnel remain responsible and accountable for their work.[89]

9.91But as the Supreme Court has noted, the Victorian Government Administrative Guideline does not apply to courts or CSV.[90]

9.92While not directly relevant to Victoria’s courts, the Australian government’s Policy for the responsible use of AI in government[91] contains principle-based guidance on how non-corporate Commonwealth entities can safely engage with AI. It requires entities to designate accountable officers responsible for implementing the policy, and to publish statements outlining their approach to AI adoption. Additionally, in July 2025, the Australian government released the Technical standard for government’s use of artificial intelligence,[92] which provides technical requirements to support the implementation of the Australian Government’s AI Ethics Principles.

9.93We understand CSV is developing an AI policy that will apply to CSV staff.[93] Representatives of CSV stated it will likely contain high-level statements about the restrictions on staff use of AI.[94]

9.94Internationally, courts and government departments have developed policies about the use of AI in courts and tribunals. Examples of international court AI policies are described in Table 20.

Table 20: International court AI policies

Jurisdiction and policy

Summary of approach

Canada

Use of AI by Courts to Enhance Court Operations[95]

Office of the Commissioner for Federal Judicial Affairs – Action Committee on Modernizing Court Operations

Identifies benefits and challenges of courts’ use of AI, and principles to help courts consider how to use AI responsibly. It contains key stages for rolling out AI tools, including an initial needs assessment and planning phase focused on community consultation. It then steps through AI project management phases:

data handling throughout the AI lifecycle

design (identifying the purpose of the tool, testing and training, and ensuring the technical requirements fit court systems and structures)

deployment (consideration of trials, pilots, transition plans, training and regular auditing)

decommissioning considerations.

Scotland

Our Approach to the Development of Services Using Artificial Intelligence[96]

Scottish Courts and Tribunals Service

The policy sets out the overall approach the Scottish Courts and Tribunals Service takes to the development and use of AI. It contains seven guiding principles to ensure the use of AI is ethical and beneficial. It provides for governance and oversight through a hierarchy of control across different governance bodies. It also makes commitments to training, development, monitoring and review. It specifies that contracts with suppliers will include clauses that specify ethical AI use and compliance with relevant laws and standards such as data protection and privacy.

Spain

Policy on the use of AI in the Administration of Justice[97]

Ministry of Justice and Court Relations

Incorporates the five ethical principles of the European ethical charter on the use of Artificial Intelligence in judicial systems and their environment.[98] It contains rules for the use of AI in the administration of justice and creates obligations to assign responsibility for AI use, development, implementation, quality control and auditing. It also contains examples of AI uses that are:

prohibited

subject to IT approval

subject to management approval

generally permitted.

United States (Arizona)

Code of Judicial Administration[99]

Arizona Supreme Court

The Code of Judicial Administration was updated to include a chapter on the Use of Generative Artificial Intelligence Technology and Large Language Models. It applies to all court personnel and lists considerations for judicial leaders when determining whether to permit the use of GenAI. It also contains rules which restrict staff from inputting public content into non-approved AI systems. It sets out that the administrative director must keep a list of GenAI tools that are:

approved for all purposes

approved for public content only

approved for non-production use only

prohibited.

The document also provides direction on court-developed tools.

United States (California)

Judicial Branch Administration: Rule and Standard for Use of Generative Artificial Intelligence in Court-Related Work[100]

Judicial Council of California

The Judicial Council of California agreed that any court that does not prohibit the use of GenAI by court staff or judicial officers ‘must adopt a policy that applies to the use of GenAI by court staff for any purpose and by judicial officers for any task outside their adjudicative role’.[101] The Judicial Council of California has specified what must be contained in AI court policies, including direction on responsible use and disclosure.

United States (Connecticut)

Artificial Intelligence Responsible Use Framework[102]

State of Connecticut Judicial Branch

The policy includes guiding principles and information on AI across intake and exploration, impact assessment, procurement and implementation phases. It sets out the terms of reference for the Judicial Branch’s Artificial Intelligence Committee. It contains operating procedures on:

determination characteristics—to determine whether a system employs AI for decision-making

intake and inventory—to conduct an annual inventory of all systems that employ AI used by the branch

impact assessment—to categorise AI systems into risk categories

procurement and due diligence processes—to procure AI tools.

United States

(various courts)

Several other US courts have introduced AI policies for court employees, such as the Supreme Courts in South Dakota[103] and Illinois.[104] The Illinois Supreme Court Policy on Artificial Intelligence directs that the use of AI by court staff ‘may be expected, should not be discouraged, and is authorized provided it complies with legal and ethical standards’.[105]

9.95International AI policies provide examples that are helpful in considering the scope of an AI policy for CSV and Victoria’s courts and tribunals.

9.96As outlined in Chapter 4, peak standards organisations have released directions on AI governance. While not specific to courts, we heard that international standards could play an important role in shaping public trust in AI governance within courts and tribunals by setting a benchmark of acceptable practices.[106]

9.97These peak bodies direct organisations to develop and document a policy for the development and use of AI to:

a)ensure the use of the AI system is consistent with an organisation’s stated values and principles

b)define key terms and concepts and the scope of their purposes and intended uses

c)align AI governance to broader security, safety, privacy and data governance policies and practices, particularly the use of sensitive or otherwise risky data.[107]

9.98They also encourage organisations to:

a)establish a documentation inventory system

b)establish processes about public disclosure of AI use

c)implement external stakeholder consultation and engagement processes.[108]

9.99Based on these standards and stakeholder feedback, this chapter goes on to suggest key elements to be included in an AI policy for CSV, Victoria’s courts and VCAT:

information security and data privacy processes

principled guidance for use of AI by CSV and court and tribunal staff

disclosure and consultation processes.

Alignment of AI use to information security, privacy and data management

9.100Court users and the public need to be able to trust that Victoria’s courts and VCAT can maintain the security and privacy of data. Professor Lyria Bennett Moses has commented: ‘The security of AI systems used by courts is … essential both from a practical standpoint and for the purpose of institutional and public confidence.’[109]

9.101International standards highlight that AI policies should refer to and align with an organisation’s existing privacy and data governance processes and policies.[110] This is consistent with AI policies developed by courts.

9.102In Canada the Office of the Commissioner for Federal Judicial Affairs has set out that appropriate data privacy and cybersecurity measures are needed to guide the use of AI by courts and that:

A strong data privacy and cybersecurity framework, including a clear protocol in the event of a breach, can mitigate risks associated with using an AI tool to store or process any sensitive information handled by courts. Consideration should be given to how AI-related policies or protocols fit within existing frameworks for information management and information technology.[111]

9.103In Chapter 5 it is recommended that Victoria’s courts and VCAT update existing privacy policies and develop AI policies to state how they seek to be consistent with the Victorian Information Privacy Principles (IPPs). In Chapter 5 we also refer to guidance released by the Office of the Victorian Information Commissioner on the use of AI tools.[112] This guidance may be helpful for Victoria’s courts and VCAT to consider how they can align with the IPPs.

9.104We also suggest that Victoria’s courts and VCAT should consider a privacy by design approach to court data (as defined in Chapter 3).[113] Victoria’s courts and VCAT should also consider:

implementing robust security controls (including physical security, cybersecurity and insider threat safeguards across the AI lifecycle).[114]

implementing processes and documenting how teams will support the management and protection of data usage rights for AI (including intellectual property, Indigenous Data Sovereignty, privacy, confidentiality and contractual rights).[115]

9.105In Chapter 3, we highlight that organisations need to consider the physical location of where data is stored and whether the use of an AI tool will result in information travelling outside of Victoria.

Principled guidance for use of AI by CSV and court and tribunal staff

9.106As highlighted in Table 20, many international court AI policies contain rules or guidance for staff about acceptable uses of AI.

9.107The Conference of State Court Administrators in the United States advised that to best achieve time and labour savings, court staff need to be provided with guidelines on what is acceptable AI use and what processes should be followed.[116]

9.108The Supreme Court noted existing duties on court and tribunal staff in relation to privacy and confidentiality:

CSV employees’ terms of employment include duties relating to confidentiality, which is reinforced in various ways. There are also CSV IT [information technology] policies that apply to Court staff, and CSV provides information to staff regarding the use of AI.[117]

9.109While there are general information technology policy requirements in place, many people supported the development of principled guidelines about the use of AI by court and tribunal staff.[118]

9.110As discussed in Chapter 6, the Commission’s principles could help to guide safe use of AI in Victoria’s courts and tribunals. An AI policy could serve an educative function and connect the principles to relevant considerations for CSV, court and tribunal staff.

9.111Guidance for CSV and court and tribunal staff should be separate from guidance for judicial officers, which is discussed in Chapter 8. This is because judicial officers have different roles and responsibilities from court and tribunal staff.

9.112Table 21 provides examples of international court policies to demonstrate how the Commission’s principles could help guide the use of AI by CSV, court and VCAT staff.

Table 21: Examples of principle-based guidance for staff

Principle

Guidance for staff

Impartiality and fairness

Court staff ‘must thoroughly review all material to ensure it contains neither overt prejudice nor subtle bias.’[119]

‘Use AI consistently with core values and ethical rules … promote AI tools that are accessible to all individuals, including those with disabilities, and that they do not inadvertently exclude any segments of the population or inadvertently perpetuate bias against anyone, including marginalized groups.’[120]

Accountability and independence

‘Any use of GenAI output is ultimately the responsibility of the Authorized User. Authorized Users are responsible to ensure the accuracy of all work product and must use caution when relying on the output of GenAI.’[121]

‘Always verify AI-generated content before use. Generative AI can sometimes generate false information and the output should not be relied on without verification. While AI can be used as a starting point, the output should never be used verbatim in the completion of reports/documents for the Court’.[122]

‘The planning, procurement, and deployment of generative AI in … courts must firmly uphold the fundamental principle of judicial independence, encompassing its individual and institutional dimensions.’[123]

Transparency and open justice

‘ensure transparency and accountability in the design, development, procurement, deployment, and ongoing monitoring of AI in a manner that respects and strengthens public trust. When using AI tools to create content, agency external facing services or dataset inputs or outputs shall disclose the use of AI.’[124]

Discussion on disclosure is provided from paragraph [9.113].

Contestability and procedural fairness

Court/tribunal use of AI ‘shall be documented in ways that ensure the technology is understood by those that make decisions, monitor outcomes, or explain results.’[125]

‘Any AI tool used in court applications must be able to provide understandable explanations for their decision-making output.’[126]

This is discussed further from paragraph [9.153].

Privacy and data security

‘Respect data privacy: Be vigilant about confidentiality and data privacy. Remember that information input into AI systems is outside the court’s secure network and may be exposed to the public.’[127]

‘Authorized Users may not input any Non-Public Information into Non-Approved GenAI.’[128]

‘Employees and affiliated entities must not use LLMs [large language models] in any way that infringes copyrights or on the intellectual property rights of others’.[129]

‘Should any problems arise related to the use of generative AI, such as unauthorized access or misuse of sensitive, confidential, or privacy restricted information, users must alert the Help Desk and their supervisor immediately.’[130]

Access to justice

‘Serving the public fairly and effectively should guide all decisions related to the use of AI. Consider all potential users of the tool and incorporate their needs into its design, implementation, and monitoring.’[131]

Efficiency and effectiveness

‘AI will not be the appropriate solution to every problem and should not be used simply because it is new, exciting, or available. Possible use of AI should be founded on identifying the problem and assessing possible solutions – including other technologies or non-technological approaches, rather than simply integrating AI into ineffective processes.’[132]

‘The use of AI tools shall be to enhance and improve the value added by our Judicial Branch employees’.[133]

Human oversight and monitoring

‘Review of AI output through competent human oversight is important at all stages for validating results and making any necessary corrections. The level of human oversight required will depend on various factors: For example, greater oversight may be required for tools not developed specifically for court or legal purposes. When developing tools for courts, greater oversight may be required in the early stages to evaluate accuracy.’[134]

Recommendation

19.An AI policy for Court Services Victoria staff and court and tribunal staff should include the Commission’s principles on the safe and acceptable use of AI.

Disclosure of AI use by Victoria’s courts and VCAT

9.113Clearly communicating use of AI by courts and tribunals is critical to upholding transparency and open justice and can support public trust in the administration of justice. Public confidence in the courts depends on what the public knows about how the courts use AI.[135]

9.114Many court users expected courts and tribunals to disclose and consult on their use of AI. A sample of stakeholder views is illustrated in Table 22.

Table 22: Stakeholder views on disclosure of court/tribunal use of AI

Stakeholder

Views on disclosure of AI use by courts and tribunals

Victoria Legal Aid

‘Consistent with human rights and a client-centred approach, we consider that targeted consultation on AI adoption is vital to foster trust and respect in the justice system. In particular, consultation should occur with groups which represent the diversity of our community and those who engage with the system’.[136]

Northern Community Legal Centre

‘If further technological reforms are to be introduced, court users should be included in consultation processes prior to implementation as well as during regular monitoring activities.’[137]

Law Institute of Victoria

‘courts and tribunals should engage in public consultations before implementing AI tools. Courts and tribunals should also disclose AI use to all court users … Public consultation would allow affected groups to express concerns and would ensure that AI implementation aligns with community expectations for fairness and transparency.’[138]

Federation of Community Legal Centres and Justice Connect

‘Courts and tribunals should ensure processes and decisions supported by AI systems are transparent to court users.’[139]

Centre for the Future of the Legal Profession and UNSW Law and Justice

‘it is imperative to disclose when AI is being used in a human process within courts or tribunals, which are rule of law promoting institutions’.[140]

9.116Court users expressed strong support for disclosure by courts. But there were mixed views among courts on whether disclosure was necessary. Representatives of the Supreme Court stated:

The use of AI in administrative processes does not need to be disclosed. The administrative process for listing matters is not disclosed now and AI does not change that. These processes can be a mix of judicial and administrative actions.[141]

9.117In contrast, representatives of the Coroners Court provided in-principle support for disclosure where AI is used by court staff and judicial officers. But they noted whether disclosure is necessary may depend on the type of AI tool being used.[142]

9.118Some discussion focused on what sorts of uses would require disclosure. Some stakeholders said that AI uses by courts and tribunals that were merely administrative would not need to be disclosed.[143] In Connecticut, the judicial branch is required to publish an inventory of AI tools. But this does not include products embedded in other systems that pose minimal risks, such as autocomplete functions in email.[144]

9.119Another approach taken by some jurisdictions is to limit disclosure to publicly facing tools and distinguish between AI tools used in administrative versus adjudicative roles. In California there is a mandatory requirement for court staff using GenAI for any purpose, and judicial officers using GenAI for tasks outside their adjudicative role, to disclose:

the use of or reliance on generative AI if the final version of a written, visual, or audio work provided to the public consists entirely of generative AI outputs.[145]

9.120There is also a discretionary obligation to consider disclosure if judicial officers use GenAI within their adjudicative role to create content provided to the public. But it is acknowledged that basing disclosure on whether a judicial officer has used AI in their adjudicative role ‘could create difficulties for courts’.[146]

9.121Disclosure is critical because there is currently a high level of distrust toward AI systems in Australia. A 2025 study by the University of Melbourne and KPMG ranked Australia as one of the lowest countries in the world for trust and acceptance of AI.[147] Only 36 per cent of Australians are willing to trust AI systems.[148]

9.122Not only are there high levels of distrust towards AI, but trust in public institutions across Australia has been falling and courts are not immune to that trend.[149] Professor Gabrielle Appleby has warned that:

In the judicial sphere, the trust that might have been previously reposed in exclusive judicial self-regulation, characterised by informality and opaqueness, no longer exists, or at least, is no longer sufficient.[150]

9.123Disclosure of AI tools used in judicial systems has received international support. The UN Special Rapporteur on the independence of judges and lawyers recommended that ‘key information about judicial AI systems be made publicly available, to permit legal challenges and oversight by civil society.’[151]

9.124If courts want to leverage AI to deliver court services, they need to build public confidence. One report suggested that ‘people are more likely to trust AI systems when they believe they understand AI and when and how it is used in common applications and have received AI education or training.’[152] The principles of contestability, transparency and accountability depend upon identifying the appropriate level of disclosure.

Disclosure in an AI inventory for Victoria’s courts and VCAT

9.125At a time when AI technology is still rapidly evolving, it is recommended that all AI tools procured, deployed or developed by Victoria’s courts and VCAT should be publicly disclosed in an AI inventory.

9.126This should include disclosure of any AI tool which is made available to CSV or court and tribunal staff or judicial officers. Disclosure should be aimed at an organisational level to capture tools that have been implemented in Victoria’s courts and VCAT for administrative purposes, such as transcription and translation tools. It should also include AI tools made available to judicial officers, such as legal research tools, although this does not require individual judicial officers or court staff to disclose every individual use of AI (see the discussion on judicial officer disclosure in Chapter 8).

9.127While this may result in the disclosure of low risk uses, transparency is necessary now to build public confidence in the use of AI tools by Victoria’s courts and VCAT. Representatives of the Public Record Office Victoria stated that:

We are in a rapidly changing moment with the introduction of AI. It is better for tighter regulations on AI use until things become better understood and practices become more managed … in terms of governance, we suggest erring on the side of caution including about what you disclose, at least for some time.[153]

9.128This was supported by other stakeholders who echoed that disclosure ‘should apply even if the process in question is in a seemingly mundane area (such as filing) and its use is for administrative rather than judicial purposes’.[154]

9.129It is recognised that there may be difficulties in identifying where AI has been incorporated into existing products because of the growth of embedded AI (discussed in Chapter 3). To make a meaningful disclosure, Victoria’s courts and tribunals should take reasonable steps to identify whether new software or technology employs AI. Some stakeholders suggested that this disclosure could involve ‘identifying the type of system and where it is being deployed’.[155]

9.130Representatives of the County Court considered several ways courts and tribunals could publish information about AI tools. For example:

court or tribunal websites

court or tribunal annual reports

CSV’s annual report if systems are made available across multiple jurisdictions.[156]

9.131As an example, the Coroners Court has published information on its website about the AI pilot program.[157] Representatives of the Public Record Office Victoria were also supportive of Victoria’s courts and VCAT developing an AI inventory or register and highlighted that it should be actively maintained and publicly accessible.[158]

9.132At a minimum CSV should coordinate the publishing of an AI inventory that captures tools used by all of Victoria’s courts and VCAT. Having a coordinated list will help to identify any duplication of AI tools and opportunities for consistency. This will also assist in the identification of potential risks across CSV, courts and tribunals. Individual courts may also choose to publish information on their website or in their annual reports.

9.133As we highlight at paragraph [9.118], the Judicial Branch of the State of Connecticut provides an example of how courts can publish information on AI usage: it conducts an annual inventory of all systems used by the Judicial Branch, which is published on its website.[159]

9.134It is important for an AI inventory to be regularly updated to ensure it is accurate and comprehensive. The process of courts and tribunals undertaking a regular stocktake serves an equally important function by raising awareness about the status and availability of AI systems within Victoria’s courts and VCAT. It is recommended that the AI inventory be updated annually.

Recommendation

20.Court Services Victoria should coordinate an AI inventory, reasonably identifying AI tools designed, developed, deployed or procured by Court Services Victoria, Victoria’s courts and VCAT, which should be published and updated annually.

Community consultation on AI use by Victoria’s courts and VCAT

9.135In addition to disclosure, some stakeholders supported Victoria’s courts and VCAT undertaking community consultation on AI tools.

9.136Consultation is a key feature of international court issued AI policies. In Canada guidance provided to courts emphasises that consultation with the community is critical to planning and assessing the need for, and feasibility of, any AI system.[160]

9.137We heard from human rights groups that community consultation was integral to ensuring that AI use in courts and VCAT aligned with a human rights approach. The Human Rights Law Centre recommended that:

Civil society, legal professionals, and affected communities must be regularly consulted on the use of AI systems in Victoria’s courts to ensure these systems reflect the needs and expectations of Victorians.[161]

9.138This was supported by community legal centre representatives who said there was value in courts and tribunals conducting consultations and user testing.[162] Additionally, the Office of the Victorian Information Commissioner strongly agreed that ‘courts and tribunals should consult with the public before using AI’.[163]

9.139Some court users raised concerns about inadequate consultation when new technologies had previously been implemented in Victoria’s courts and VCAT. The Northern Community Legal Centre shared that:

a recent 2024 review of pre-court information forms by the Magistrates’ Court of Victoria provided Northern CLC with a window of only 48 hours to provide a written submission based upon our research with court service users. It is not apparent if courts have tested the accessibility and useability of their online forms with court service users, and particularly those from marginalised cohorts that are more likely to experience difficulties.[164]

9.140Some courts recognised the importance of consultation when introducing AI tools. The County Court stated, ‘Consultation with key stakeholders will also assist in determining when and how the Court uses AI.’[165]

9.141The importance of consultation has been recognised internationally. The UN Special Rapporteur on the independence of judges and lawyers recommended that when considering AI tools, judiciaries should engage in multistakeholder consultations.[166]

9.142As discussed in Chapter 6, consultation with the legal profession and court users, particularly those from marginalised or disadvantaged groups, is critical to implementing the principles of transparency and open justice, impartiality and fairness, and efficiency and effectiveness.

9.143Stakeholders highlighted that when consultation may be necessary ‘will depend on the intended use of any AI applications and the associated risks’.[167] Not every use of AI will require consultation. The Supreme Court considered lower risk AI uses may not always require consultation:

in relation to AI that generates backgrounds and cancels out noise in virtual hearings, it would not appear to be necessary for the Court to understand the data that the AI was trained on, to disclose to or consult court users on the use of the AI.[168]

9.144Representatives of the Judicial College of Victoria considered that the threshold to consult should be high, for instance where the AI tool may impact people’s liberty.[169] They also raised concerns about placing mandatory obligations to consult on courts:

Courts are different to regulatory agencies and consultation obligations do not fit well with courts. A moral obligation to consult exists. But a duty to consult is worrying from the perspective of needing to preserve courts’ independence.[170]

9.145To determine when consultation may be necessary, we heard that Victoria’s courts and VCAT could build consultation into risk assessments.[171] A proposed AI assurance framework to support Victoria’s courts and VCAT to assess risks of AI use cases is discussed below (from paragraph [9.183]); it contains considerations for consultation across the AI lifecycle.

9.146How courts and tribunals should consult is dependent on the AI use case being considered. Stakeholders discussed a variety of possible consultation approaches. The Castan Centre recommended the establishment of a standing consultation forum:

it is significantly harder for individuals involved in courts and tribunals and grassroots organisations who do not necessarily consider themselves as part of the established justice landscape to be empowered and included within reforms. Therefore, deliberate efforts need to be made to seek out and hear from these voices. This kind of consumer and community engagement is standard practice in health services research and system evaluation, and consistent with a human rights-based approach.[172]

9.147Standing court user groups are already being used in some of Victoria’s courts. Representatives of the Coroners Court told us that they have a court user group. They considered that AI could become a standing item on the agenda for that group.[173]

9.148Other legal organisations have implemented standing advisory groups which include membership of people with lived experience. Victoria Legal Aid’s Data and Digital Ethics and Human Rights Advisory Group is used to review projects to

ensure they meet ethical and human rights obligations and that there are controls in place, to detect things like bias or identify when and how to consult with impacted communities.[174]

9.149Consultation and user testing are also features of international court and tribunal policies.[175]

9.150Victoria’s courts and VCAT should consider what consultation mechanism is most appropriate based on the AI use case being considered. Input from a range of relevant stakeholders with diverse backgrounds should be considered. This includes ‘court users from marginalised backgrounds and the services who work with them’.[176]

9.151We also heard that when testing AI tools there should be a focus on including marginalised individuals and groups. As an example, representatives from the Victorian Advocacy League for Individuals with Disabilities stated that where relevant ‘AI tools should be trialled with people with strong to moderate intellectual disability or people with short-term memory issues’.[177]

9.152We heard that there is value in ongoing user testing and in providing avenues for court users to give ongoing feedback. The Federation of Community Legal Centres and Justice Connect recommended that:

user feedback should be incorporated to continuously improve AI tools over time. Allowing the public to report issues or inaccuracies with AI-generated advice will ensure that the system can adapt to users’ needs and enhance its reliability.[178]

9.153Victoria’s courts and VCAT should consider implementing processes to ensure there is an accessible avenue for court users to provide meaningful feedback on their experience of AI systems.

Recommendation

21.Victoria’s courts and VCAT should consult with people likely affected by AI tools based on the AI assurance framework (set out in recommendation 23). Consultation should occur before implementing AI tools and throughout the AI lifecycle.

Notification and human oversight considerations for AI decision making

9.154In Chapter 6 we identified considerations for courts and tribunals to ensure that the incorporation of AI does not undermine people’s rights to challenge decisions.

9.155These considerations include:

notifying people whose rights are significantly affected by a decision made or materially influenced by AI

ensuring there is human oversight of decisions made or materially influenced by AI.

9.156Each of these is discussed below.

Notification and explanation of AI use

9.157Victoria’s courts and VCAT should notify people whose rights are significantly affected by a decision made or materially influenced by AI.

9.158This notification needs to be clear and understandable. Australia’s National framework for the assurance of artificial intelligence in government, while not specific to courts, provides useful direction. It advises that governments should disclose the use of AI to people who may be impacted and provide clear and simple explanations for how an AI system reaches an outcome.[179] Information should also be tailored to the intended audience so that it is understandable.

9.159However, as we discuss in Chapter 3, complexity and proprietary considerations can limit and even prevent the explainability of AI tools. Representatives of the Monash Digital Law Group stated that while ‘it seems on the face of it to be a reasonable request that a human explain’ how an AI decision is made, ‘it is increasingly difficult to do so’.[180]

9.160Despite the opacity of AI tools, the Gradient Institute and the CSIRO have stated that to implement the Australian Government’s AI Ethics Principle of contestability it is ‘essential to provide impacted individuals with an adequate understanding of how the system decided their outcome and what data the decision was based on so that they have grounds to contest’.[181]

9.161To enable Victoria’s courts and VCAT to provide clear and understandable information about AI decisions, consideration should be given to the explainability of AI tools during their design or procurement.

9.162The UNESCO draft AI guidelines recommend courts adopt ‘AI systems that are transparent in terms of how the system was developed, how it operates, its training data, its limitations (its margin of error), its capabilities, and the purpose of the systems’.[182]

9.163Relevantly, the Australian Human Rights Commission has recommended that the:

Australian Government should not make administrative decisions, including through the use of automation or artificial intelligence, if the decision maker cannot generate reasons or a technical explanation for an affected person.[183]

9.164When making decisions about future AI use cases, Victoria’s courts and VCAT should exercise caution and avoid using tools where technical complexity or proprietary constraints could prevent them from providing understandable explanations for decisions.

9.165Victoria’s courts and VCAT should seek to obtain information about how an AI tool was developed and how it works before implementation and should preference systems that can generate reasons or a technical explanation for decisions.

9.166Contestability and procedural fairness are considered as part of the proposed AI assurance framework for Victoria’s courts and VCAT discussed from paragraph [9.183].

Human oversight of AI decisions by courts and tribunals

9.167Victoria’s courts and VCAT should retain human oversight of decisions made or materially influenced by AI.

9.168The Centre for the Future of the Legal Profession and UNSW Law and Justice stated that human scrutiny and oversight of AI decisions is necessary to provide reassurance to members of the public and to support trust in the rule of law.[184] The Coroners Court similarly stated ‘Human oversight remains a critical component of using emerging technologies.’[185]

9.169In Canada, the Office of the Commissioner for Federal Judicial Affairs stated ‘Human oversight of AI is essential… at all stages for validating results and making any necessary corrections’.[186]

9.170However, the level of human oversight required is context dependent. Necessary human oversight may be informed by the type of tool used, the particular use to which it is applied and the stage of the AI lifecycle. For example, greater oversight may be needed when a tool is at the early stages of development.

9.171The Australian Human Rights Commission has advised that:

There is an important role for people in overseeing, monitoring and intervening in AI-informed decision making. Human involvement is especially important to:

review individual decisions, especially to correct for errors at the individual level

oversee the operation of an AI-informed decision-making system to ensure the system is operating effectively as a whole.[187]

9.172Victoria’s courts and VCAT should ensure that the introduction of AI does not prevent people significantly affected by a decision made or materially influenced by AI from seeking human intervention. Courts and tribunals should make it clear who is responsible for AI decisions so that people know whom to approach for human review. This will require Victoria’s courts and VCAT to designate individuals with clear responsibility for AI decisions.

Recommendation

22.Victoria’s courts and VCAT should:

a.notify people whose rights are significantly affected by a decision made or materially influenced by AI. Notification should include clear and understandable information on how the decision was made.

b.ensure there is human oversight of decisions made or materially influenced by AI. The extent of oversight will depend on the context.

Developing an AI assurance framework

9.173Our terms of reference ask us to consider principles or guidelines that can be used in the future to assess the suitability of new AI systems in Victoria’s courts and tribunals.

9.174Assurance frameworks can provide a structured process for assessing risks and the suitability of an AI system. Assurance frameworks can guide decision making at different project phases to support:

procurement

design and development

data collection and training

deployment and use

monitoring and evaluation.

9.175Examples of AI assurance frameworks range from general to court-specific frameworks. While these frameworks vary in content, important common elements include:

Identifying risks, benefits and purpose: Often a series of questions are included to prompt users to identify risks. Many frameworks require users to identify the purpose and benefit to be delivered by the AI use case. Having a clear purpose can help ensure the adoption of AI tools leads to the realisation of benefits through improved processes or outcomes.

Categorising and evaluating risks: AI assurance frameworks categorise risks in different ways. The level of risk can help determine what mitigations are required. Some frameworks assign a risk category (for instance low, medium or high) to specific AI use cases. For example, facial recognition tools may be categorised as high risk. These approaches are simple to apply and provide certainty but remove discretion. Use cases can also become quickly outdated. Other frameworks generate risk ratings by using a risk matrix to assess the likelihood and consequence of risks to principles. This allows decision makers to retain a high level of discretion and is more flexibly applied to emerging technologies. But it can be complex and requires interpretation, which can create inconsistency.

Treating risks: Once risks have been identified and categorised, some frameworks require users to develop a mitigation plan. An organisation’s risk tolerance will inform what action is required for each risk category.

Monitoring and review: Most frameworks require continuous monitoring and evaluation and encourage users to regularly reassess risks.

Assigning roles and responsibilities: AI assurance frameworks often encourage users to designate and document roles and responsibilities which can increase accountability and transparency.
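The risk-matrix approach described above, in which a rating is derived from the likelihood and consequence of a risk, can be illustrated with a minimal sketch. The category names and score thresholds below are illustrative assumptions only; they are not drawn from any of the frameworks discussed:

```python
# Minimal sketch of a likelihood x consequence risk matrix.
# Category names and score thresholds are illustrative assumptions only,
# not values prescribed by any framework discussed in this chapter.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost certain"]
CONSEQUENCE = ["insignificant", "minor", "moderate", "major", "severe"]

def risk_rating(likelihood: str, consequence: str) -> str:
    """Combine likelihood and consequence into a low/medium/high rating."""
    score = LIKELIHOOD.index(likelihood) + CONSEQUENCE.index(consequence)
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    return "high"
```

On this sketch, a risk judged ‘possible’ with ‘major’ consequences would be rated medium. The discretion that matrix-based frameworks preserve lies in how decision makers judge likelihood and consequence in their own context, which is also the source of the interpretive inconsistency noted above.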

AI assurance frameworks for courts and tribunals

9.176AI assurance frameworks developed to date are largely generic. Many have been developed by governments to apply across all agencies and to many contexts.

9.177AI assessment frameworks have been developed for use by the Australian, New South Wales and Queensland governments.[188] The Commonwealth Scientific and Industrial Research Organisation’s Responsible AI Pattern Catalogue also includes direction for organisations to conduct responsible AI risk assessments.[189]

9.178We also heard that private companies like Microsoft have published AI impact assessment templates.[190]

9.179However, some organisations are adapting generic frameworks to better suit their specific needs. We heard that the Office of Public Prosecutions (OPP) is developing an AI Principles Framework and accompanying roadmap to guide and govern the implementation of AI which will align with the Victorian and Australian government AI guidance. It will also consider the specific operating context for the OPP ‘to reflect court and community expectations around AI’.[191]

9.180The UN Special Rapporteur on the independence of judges and lawyers has stated that AI should not be adopted within judicial systems ‘without careful assessment of its potential harms, whether these can be eliminated, and whether there are other solutions that are less risky’.[192]

9.181Some AI assurance frameworks have been specifically designed for courts and tribunals. Examples include:

England and Wales: Illustrative Framework Assessment Tool[193]

European Commission for the Efficiency of Justice (CEPEJ): Assessment Tool for the Operationalisation of the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment[194]

State of Connecticut: Judicial Branch Artificial Intelligence Responsible Use Framework.[195]

9.182The Australia New Zealand Policing Advisory Agency has also released a Responsible and Ethical Artificial Intelligence Framework to support police and forensic services by detailing how AI ethics principles can be operationalised.[196]

9.183Adopting an AI assurance framework can support Victoria’s courts and VCAT to identify the objective of an AI use case and to assess and mitigate risks.

An AI assurance framework for Victoria’s courts and tribunals

9.184An AI assurance framework could support Victoria’s courts, VCAT and CSV to make decisions about developing or purchasing closed AI tools (closed AI tools are discussed in Chapter 3). This could complement an AI policy by providing direction on how to assess the risk of AI use cases.

9.185Many stakeholders supported the development of an AI assurance framework to assist Victoria’s courts and VCAT to assess potential AI use cases, as illustrated in Table 23.

Table 23: Stakeholder views on an AI assurance framework to assess risk

Stakeholder

Views on AI assurance frameworks for Victoria’s courts and VCAT

Coroners Court

The Court ‘supports the development of a guideline or framework to assist courts and tribunals in identifying, assessing and managing risks of AI specific to their jurisdiction. Such a framework must be sufficiently flexible and adaptable to enable courts and tribunals to explore opportunities to use AI in ways appropriate for the relevant jurisdiction. It should not be overly prescriptive or rigid, given the rapidly evolving nature of AI … this could take the form of a checklist of matters that jurisdictions may take into account or consider in approaching use of AI in their jurisdiction’.[197]

Office of the Victorian Information Commissioner

‘an AI assessment framework for courts and tribunals should be developed … in developing a framework, OVIC recommends reviewing existing guidance and resources’.[198]

Judicial College of Victoria

‘A framework for thinking about risk could be enormously valuable because it starts breaking down the granularity of AI and increasing awareness that not all AI is the same … This would provide a framework that courts might find useful for thinking about AI risk that provides more rigour.’[199]

Pilot Victorian Public Service AI Assurance Framework

9.186In 2025, the Victorian Government began piloting the VPS AI Assurance Framework (the VPS Framework) across government departments and agencies. The framework is based on and localises Australia’s National framework for the assurance of artificial intelligence in government.[200]

9.187CSV decided to pilot the VPS Framework to assess AI tools for court administrative purposes.[201] We heard from representatives of VCAT that the pilot VPS Framework is being used by the AI Working Group to assess Microsoft Copilot.[202] We heard that the Magistrates’ Court is also piloting the VPS Framework.[203]

9.188The pilot VPS Framework is:

designed to support the safe and responsible delivery of AI in the VPS by promoting transparency and accountability, and a common approach to identifying, evaluating, communicating, and managing the ethics and risks associated with AI use cases

comprised of a self-assessment tool and guidance material, and adopts Australia’s AI Ethics Principles,[204] consistent with the National Framework as mentioned above.

9.189The self-assessment tool is comprised of three sections:

1)capturing basic information about the AI use case

2)an initial assessment to identify the highest residual risk rating of the use case against specific criteria

3)a more detailed assessment of how the AI use case complies with Australia’s AI Ethics Principles, where one or more of the residual risks identified in the initial assessment is rated medium or higher.

9.190The pilot VPS Framework does not replace existing policies, frameworks or practices for procuring, developing or delivering Victorian Government technology projects. It aims to support delivery of AI use cases by ensuring the use of AI is safe, responsible and in line with community expectations.

9.191The VPS Framework is currently a pilot. Whether it will be amended and formally adopted depends on feedback from the pilot process. Additionally, any AI assurance framework will need to be reconsidered if the Australian Government adopts a risk-based approach to AI regulation (see discussion in Chapter 4).[205]

Using the pilot VPS AI Assurance Framework to develop a bespoke approach for Victoria’s courts and VCAT

9.192There is an opportunity to use the pilot VPS Framework as a foundation to build a tool that is specific to the context of Victoria’s courts and VCAT.

9.193As noted above, CSV and some court jurisdictions are already considering or are actively piloting the VPS Framework. Representatives from VCAT told us there could be value in applying the VPS Framework to assess AI uses. It was noted that:

The VPS tool seems to have the right questions … Having something off the shelf is better than starting from scratch. No significant issues have been raised about the VPS tool so far, but it hasn’t been tested in any detail yet.[206]

9.194A benefit of the pilot VPS Framework is that it preserves the discretion of decision makers. It allows users to assess the level of risk in the context within which they operate. It does not prohibit any AI system or risk category but leaves it up to each agency to set its own risk appetite. This means that use of the VPS Framework would be less likely to infringe the independence of judicial decision-making compared to a framework that prohibits certain AI uses.

9.195Providing a list of high risk or prohibited AI uses is not recommended by the Commission because such a list would quickly become outdated, given the pace of technological advancement. The Commission is of the view that the principles-based risk assessment matrix in the pilot VPS Framework strikes the right balance between consistency and flexibility.

9.196However, the current pilot VPS Framework does not directly consider risks and opportunities of AI use cases that are unique to courts and tribunals. It is also based on Australia’s AI Ethics Principles rather than the Commission’s principles (see Chapter 6). Although there are substantial similarities between these principles, there are also gaps. The Commission’s principles incorporate core principles of justice that are not directly captured by the Australian principles, being:

judicial independence

open justice

procedural fairness

access to justice.

9.197There is an opportunity to adapt the pilot VPS Framework so that it can be appropriately applied in Victoria’s courts and VCAT by:

incorporating the Commission’s principles, which are widely supported by stakeholders and will help provide a foundation for maintaining public trust in the courts.

expanding how the pilot VPS Framework considers and assesses the impact of AI uses by Victoria’s courts and VCAT on human rights, information security and privacy.

9.198As discussed in Chapter 5, legislative change is not currently recommended in relation to the protection of privacy and human rights. However, we discussed non-legislative mechanisms to help increase public trust in the use of AI in courts and tribunals such as:

human rights impact assessments

privacy impact assessments.

9.199Human rights and privacy impact assessments should be referred to in, and should complement, a courts and VCAT-specific AI assurance framework.

9.200Table 24 provides an example of questions that could be included in a courts and VCAT-specific AI assurance framework risk rating questionnaire. If a user answered yes to any of the proposed questions, they would need to consider the consequence and likelihood of the risk and assign each risk a rating that would be prescribed by the courts and VCAT-specific AI assurance framework. If one or more medium or high residual risk ratings are identified, then a more detailed assessment of how the AI use case complies with the Commission’s principles should be completed.

Table 24: Example questions for a courts and VCAT-specific AI assurance framework

The Commission’s principle

The Commission’s proposed questions

Impartiality and fairness

Is there a risk of the AI use case producing biased or unfairly discriminatory outcomes against individuals, communities or groups? (Consider who designed the system, and why, what data the system is trained on, and the quality and relevance of that data).

Is there a risk the AI use case is not compliant with applicable human rights laws, including the Charter of Human Rights and Responsibilities Act 2006 (Vic)? For guidance refer to The Charter of Human Rights and Responsibilities: A Guide for Victorian Public Sector Workers which provides practical steps to consider human rights before making decisions such as:

i)identify which human rights are relevant to the AI use case across the AI lifecycle

ii)identify any interference or limitation to those rights

iii)identify possible impacts of the AI use case on a person’s rights, particularly in terms of the right to a fair trial, non-discrimination and privacy

iv)consider whether the AI use case can be justified and balances all interests to evaluate whether any limitation on human rights is reasonable, justifiable and proportionate.[207]

Accountability and independence

Is there a risk associated with how the AI use case could influence or be perceived to influence judicial independence (personal or institutional)?

Is there a risk that it will be unclear who within the court/tribunal is responsible for the operation of the AI tool and for any outputs or decisions it makes?

Transparency and open justice

Is there a risk that the court or tribunal will not be able to provide clear, understandable explanations for how the AI tool works (for example because of complexity or proprietary interests)?

Contestability and procedural fairness

Is there a risk the AI use case will undermine people’s existing rights to challenge decisions?

Privacy and data security

Is there a risk that the AI use case will impact the confidentiality, integrity or availability of information held by courts and tribunals? Courts and tribunals should:

i)consider privacy and security by design principles[208]

ii)consider what risks are inconsistent with the Privacy and Data Protection Act 2014 (Vic) Information Privacy Principles for collection, use and disclosure of personal information. When considering privacy implications, it may be useful to refer to the Office of the Victorian Information Commissioner’s Privacy Impact Assessment guide and template.[209]

Access to justice

Is there a risk of the AI use case negatively affecting public accessibility or inclusivity of court and tribunal services?

Is there a risk of the AI use case creating or exacerbating barriers to justice such as the digital divide?

Is there a risk the AI use case will be inconsistent with Indigenous Data Sovereignty rights?

Efficiency and effectiveness

Is there a risk that the use case will decrease efficiency or quality of services, or result in worse outcomes (compared to traditional services) for court users?

Is there a risk that the estimated efficiencies of the AI use case do not account for full direct and indirect costs, including wider societal or environmental costs?

Human oversight and monitoring

Is there a risk implementation of the AI use case will replace the option for court users to access human supports?

Is there a risk that proprietary interests or complexity of the AI use case will restrict continual monitoring and evaluation over time?
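The two-stage process described in paragraph 9.200, in which any medium or high residual risk rating triggers a more detailed assessment against the Commission’s principles, could be expressed as a simple gate. This is a hypothetical illustration only; the principle names and ratings used are assumptions, not part of any adopted framework:

```python
# Hypothetical sketch of the escalation gate described in paragraph 9.200:
# a detailed assessment against the Commission's principles is required
# if any residual risk identified in the questionnaire is rated medium
# or higher. Principle names and ratings are illustrative assumptions.

def requires_detailed_assessment(residual_ratings: dict[str, str]) -> bool:
    """residual_ratings maps each identified risk to its residual rating."""
    return any(rating in ("medium", "high")
               for rating in residual_ratings.values())

ratings = {
    "impartiality and fairness": "low",
    "transparency and open justice": "medium",
}
# One medium residual risk is enough to trigger the detailed assessment.
assert requires_detailed_assessment(ratings)
```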

9.201A proposed process is set out below in Figure 7 to show how a courts and VCAT-specific framework could be used to support decision making about whether CSV, courts and tribunals should develop or procure AI systems.

Figure 7: Proposed process for Victoria’s courts, VCAT and CSV to use a courts and VCAT-specific assurance framework

9.202The framework could support decision making on whether Victoria’s courts and VCAT develop or procure an AI system, whereas everyday uses of AI by court or tribunal staff and judicial officers should be guided by the proposed guidelines.

9.203A courts and VCAT-specific framework should be principles-based to allow for flexibility in its application. This would allow the framework to be applied to judicial and administrative uses.

9.204Victoria’s courts and VCAT also have record management obligations under the Public Records Act 1973 (Vic). Representatives of the Public Record Office Victoria told us that where Victoria’s courts and VCAT create or manage records with the use of AI, they need to clearly document the use of those AI tools, and record-keeping requirements need to be built into governance structures.[210] In line with this feedback, CSV, Victoria’s courts and VCAT should retain AI assurance material to inform existing risk registers and individual project plans for AI uses that progress past early experimentation.

Types of AI should be considered in assessment

9.205In Chapter 3 we outlined the distinction between public and closed AI tools and how they carry different types and scales of risk.

9.206When assessing AI use cases, Victoria’s courts and VCAT should be aware that some closed AI tools can better protect privacy rights compared to public AI tools.[211] This is because contracts for closed tools can often be negotiated so that user prompts are not retained, user data is not used to train the underlying model, and user data remains within Victoria.

9.207Comparatively, closed AI tools developed in-house carry the fewest privacy risks. This is because they are not reliant on third-party suppliers to store and manage court and tribunal data. The OPP stated that Victoria’s courts and VCAT should focus on procuring, developing or deploying closed AI tools.[212]

9.208Developing closed AI tools will give Victoria’s courts and VCAT control over the tool and how information and data is managed. To reduce privacy risks, Victoria’s courts and VCAT should preference closed AI tools above public tools.

9.209However, there can be benefits in outsourcing the development of closed AI tools to companies that have the relevant resources and expertise. Additionally, the cost of designing closed AI tools in-house can be very high.[213]

9.210Yet the UN Special Rapporteur on the independence of judges and lawyers has recommended that judiciaries proceed cautiously when entering agreements for the provision of AI services with for-profit private actors, which may seek ‘to monetize data extracted from judicial systems’.[214] If Victoria’s courts and VCAT engage third parties to provide closed procured or hybrid AI tools they must exercise caution during the procurement process (the different types of AI are explained in Chapter 3).

9.211Peak international standards organisations suggest that, in addition to existing technology procurement processes, when engaging third-party AI suppliers organisations should:

seek information on the transparency of system functions (such as training data, training and inference algorithms, assumptions and limitations). The Australian Government also recommends that as part of the procurement process for third-party AI suppliers, organisations should agree to ‘transparency mechanisms required for the AI system or component’ and reflect this in contracts and project documentation.[215]

test third-party AI systems

set clear and complete instructions for third-party system usage

address the supply chain, full product lifecycle and associated processes, and consider legal, ethical and other issues concerning the procurement and use of third-party software or hardware systems and data.[216]

9.212Importantly, courts and tribunals will also need to consider data governance and ownership across the AI lifecycle when contracting for any AI tools.[217]

Consultation should be included in the assessment

9.213As discussed (from paragraph [9.134]), consultation should form part of the assessment of AI use cases. Under the pilot VPS Framework, people who may be affected by an AI use case must be identified and documented.

9.214This process should be included in a courts and VCAT-specific assurance framework. In applying the framework, if medium or high risks are identified, consultation and user testing should be held with people likely affected by the AI use case, before it is implemented.

Assessments should be ongoing

9.215As discussed (at paragraph [9.7]), we heard that Victoria’s courts and VCAT should implement continuous monitoring and periodic review of AI systems.[218]

9.216The pilot VPS Framework directs that the assessment of AI use cases should be reviewed as they move along the AI lifecycle and where any material changes occur. It also recommends that use cases be reviewed periodically after deployment in line with identified risks.

9.217Similarly, a courts and VCAT-specific assurance framework should be used throughout the AI use case lifecycle to identify and monitor risks and make appropriate improvements.

Recommendation

23.Court Services Victoria should develop an AI assurance framework aligned with the Commission’s principles to manage the ethics and risks associated with AI use cases for Victoria’s courts and VCAT, based on the pilot VPS AI Assurance Framework.


  1. John Alford, Royston Gustavson and Philip Williams, The Governance of Australia’s Courts: A Managerial Perspective (Report, Australian Institute of Judicial Administration Incorporated, 2004) 2.

  2. Australian Government et al, National Framework for the Assurance of Artificial Intelligence in Government: A Joint Approach to Safe and Responsible AI by the Australian, State and Territory Governments (Report, 21 June 2024) 6.

  3. Submissions 5 (Office of the Victorian Information Commissioner), 10 (Castan Centre for Human Rights Law, Monash University), 12 (Victoria Legal Aid), 14 (Centre for Artificial Intelligence and Digital Ethics, The University of Melbourne), 26 (Supreme Court of Victoria).

  4. Submission 15 (Human Rights Law Centre).

  5. Ibid.

  6. Submission 27 (Federation of Community Legal Centres and Justice Connect).

  7. Submission 5 (Office of the Victorian Information Commissioner).

  8. Submission 16 (Law Institute Victoria).

  9. Submission 22 (Centre for the Future of the Legal Profession and UNSW Law and Justice).

  10. Consultation 21 (Public Record Office Victoria). See also Public Record Office Victoria, Recordkeeping Policy: Artificial Intelligence Technologies and Recordkeeping (Policy, 29 February 2024) <https://prov.vic.gov.au/sites/default/files/files/documents/ai_tech_and_recordkeeping_policy_v1_2024.pdf>.

  11. For example, see Supreme Court Act 1986 (Vic) s 28AAA (1).

  12. Victorian Auditor-General’s Office, Administration of Victorian Law Courts (Independent Assurance Report to Parliament No 2021– 22:06, October 2021) 11 <https://www.audit.vic.gov.au/report/administration-victorian-courts/?section=’>.

  13. Consultation 12 (County Court of Victoria).

  14. Consultation 15 (Magistrates’ Court of Victoria).

  15. Consultation 2 (Coroners Court of Victoria).

  16. Victorian Civil and Administrative Tribunal, Annual Report 2023-24 (Report, September 2024) 29.

  17. Operating under the Court Services Victoria Act 2014 (Vic).

  18. Court Services Victoria, Delivering Excellence in Court and Tribunal Administration, Annual Report 2023-24 (Report, October 2024) 24.

  19. CSV is required by the Standing Directions under the Financial Management Act 1994 (Vic) to comply with the Victorian Government Risk Management Framework. See: Minister for Finance (Vic), Standing Directions 2018 Under the Financial Management Act 1994 (Issued 11 October 2018, incorporating revisions to 4 September 2023); Victorian Government, Victorian Government Risk Management Framework (Report, August 2020).

  20. Victorian Auditor-General’s Office, Administration of Victorian Law Courts (Independent Assurance Report to Parliament No 2021– 22:06, October 2021) 41 <https://www.audit.vic.gov.au/report/administration-victorian-courts/?section=’>.

  21. South Australia is the only somewhat analogous jurisdiction in Australia in terms of administrative services for courts. The Courts Administration Authority of South Australia is, like CSV, an independent statutory entity which provides the independent administrative facilities and services required by South Australian courts. See Courts Administration Act 1993 (SA).

  22. Victorian Auditor-General’s Office, Administration of Victorian Law Courts (Independent Assurance Report to Parliament No 2021– 22:06, October 2021) 12 <https://www.audit.vic.gov.au/report/administration-victorian-courts/?section=’>.

  23. Consultation 12 (County Court of Victoria).

  24. The six jurisdictions are Supreme, County, Magistrates’, Children’s and Coroners courts and VCAT: Court Services Victoria, Delivering Excellence in Court and Tribunal Administration, Annual Report 2023-24 (Report, October 2024) 6.

  25. Submission 25 (Court Services Victoria).

  26. Ibid.

  27. Ibid.

  28. Ibid.

  29. Ibid; Consultation 22 (Court Services Victoria).

  30. Consultation 22 (Court Services Victoria). Provided as supplementary information regarding working group terms of reference.

  31. Submission 25 (Court Services Victoria).

  32. Ibid.

  33. Ibid; Consultation 22 (Court Services Victoria).

  34. Submission 25 (Court Services Victoria).

  35. Linklaters, Ethical, Safe, Lawful: A Toolkit for Artificial Intelligence Projects (Toolkit, 2018) A27. A development sandbox is distinct from a regulatory sandbox. A regulatory sandbox can be set up by regulators to give organisations the ability to test products in a controlled environment, and the regulator may provide the organisation with a waiver or an agreement not to take enforcement action while the tool is being developed in the sandbox.

  36. Ibid A27.

  37. Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 19 <https://docs.un.org/en/A/80/169>.

  38. Department of Industry, Science and Resources (Cth), National Artificial Intelligence Centre, and CSIRO, Voluntary AI Safety Standard (Report, August 2024) 25–28 <https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf>.

  39. Ibid 20–21; See also National Institute of Standards and Technology (NIST), AI RMF Playbook (Report, U.S. Department of Commerce, 2024) 93–144 <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook>.

  40. Australian Government et al, National Framework for the Assurance of Artificial Intelligence in Government: A Joint Approach to Safe and Responsible AI by the Australian, State and Territory Governments (Report, 21 June 2024) 20.

  41. Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) 5 <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.

  42. ‘NCSC AI Sandbox’, National Centre for State Courts (NCSC) (Web Page, 2025) <https://aisandbox.ncsc.org/login>.

  43. Submission 24 (County Court of Victoria). Consultation 15 (Magistrates’ Court of Victoria). Victorian Civil and Administrative Tribunal, Annual Report 2023-24 (Report, September 2024) 29.

  44. Office of the Chief Justice of New Zealand, Digital Strategy for Courts and Tribunals (Report, March 2023) <https://www.courtsofnz.govt.nz/assets/7-Publications/2-Reports/20230329-Digital-Strategy-Report.pdf>.

  45. Ibid 18–21.

  46. Ibid 27.

  47. Dame Helen Winkelmann, Chief Justice of New Zealand, Chief Justice Launches Digital Strategy for Courts of New Zealand (Media Statement, 29 March 2023) <https://www.courtsofnz.govt.nz/assets/7-Publications/Announcements/20230329-Media-Release-Chief-Justice-launches-Digital-Strategy-for-Courts-of-New-Zealand.pdf>.

  48. Courts of New Zealand, Judiciary Publishes Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals (Media Release, 7 December 2023) 2.

  49. Ibid.

  50. ‘Action Committee on Modernizing Court Operations: The Action Committee – Who We Are and What We Do’, Office of the Commissioner for Federal Judicial Affairs Canada (Web Page, 21 February 2025) <https://fja-cmf.gc.ca/COVID-19/index-eng.html#Committee>.

  51. Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.

  52. ‘Justice Aidan Xu @ Aedit Abdullah’, Singapore Courts (Web Page, 6 August 2025) <https://www.judiciary.gov.sg/who-we-are/justice-aedit-abdullah>.

  53. ‘AI in the Judiciary: A Singapore Courts Perspective’, Thomson Reuters: Legal Insight (Web Page, 14 January 2025) <https://insight.thomsonreuters.com/sea/legal/posts/ai-in-the-judiciary-a-singapore-courts-perspective>.

  54. Ministry of Justice (UK), AI Action Plan for Justice (Policy Paper, 31 July 2025) <https://www.gov.uk/government/publications/ai-action-plan-for-justice/ai-action-plan-for-justice>.

  55. Ailin O’Flaherty and Andrew Wilkinson, ‘New Artificial Intelligence Advisory Body in England and Wales – Bringing the Modern World to the Judiciary’, Global IP & Technology Law Blog (Squire Patton Boggs, 14 March 2019) <https://www.iptechblog.com/2019/03/new-artificial-intelligence-advisory-body-in-england-and-wales-bringing-the-modern-world-to-the-judiciary/>.

  56. Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.

  57. Ministry of Justice (UK), ‘Justice AI Unit’, Justice AI Unit (Web Page, 2025) <https://ai.justice.gov.uk>.

  58. Victorian Auditor-General’s Office, Administration of Victorian Law Courts (Independent Assurance Report to Parliament No 2021– 22:06, October 2021) 5 <https://www.audit.vic.gov.au/report/administration-victorian-courts/?section=’>.

  59. Submission 16 (Law Institute Victoria).

  60. Submission 26 (Supreme Court of Victoria).

  61. Submission 24 (County Court of Victoria).

  62. Standards Australia Limited, ‘AS ISO/IEC 38507: 2022 Information Technology – Governance of IT – Governance Implications of the Use of Artificial Intelligence by Organizations’ 2–3 <https://store.standards.org.au/product/as-iso-iec-38507-2022>.

  63. Submission 5 (Office of the Victorian Information Commissioner). The Centre for the Future of the Legal Profession and UNSW Law and Justice also commented that ‘Transparency in court internal systems, including where AI is used, and provides reassurance to members of the public, enables human scrutiny and oversight and supports faith in the rule of law.’: Submission 22 (Centre for the Future of the Legal Profession and UNSW Law and Justice).

  64. Consultation 32 (Supreme Court of Victoria).

  65. Australian Government et al, National Framework for the Assurance of Artificial Intelligence in Government: A Joint Approach to Safe and Responsible AI by the Australian, State and Territory Governments (Report, 21 June 2024) 6.

  66. Submission 5 (Office of the Victorian Information Commissioner). Consultation 7 (Judicial College of Victoria).

  67. Consultation 25 (Microsoft).

  68. Submission 5 (Office of the Victorian Information Commissioner).

  69. Consultation 7 (Judicial College of Victoria).

  70. Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 19 <https://docs.un.org/en/A/80/169>.

  71. Ibid.

  72. Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) ch 10.

  73. Ibid 295.

  74. Consultation 12 (County Court of Victoria).

  75. Conference of State Court Administrators (COSCA), Generative AI & the Future of the Courts: Responsibilities and Possibilities (Policy Paper, National Center for State Courts, August 2024) 15 <https://www.ncsc.org/resources-courts/generative-ai-future-courts>.

  76. Consultation 13 (Federal Circuit and Family Court of Australia).

  77. Consultation 22 (Court Services Victoria).

  78. Court Services Victoria, Connecting Courts and Communities, Annual Report 2018-19 (Report, October 2019) 9 <https://courts.vic.gov.au/sites/default/files/publications/csv_annual_report_2018-19.pdf>.

  79. Court Services Victoria, Delivering Excellence in Court and Tribunal Administration, Annual Report 2023-24 (Report, October 2024) 11.

  80. Consultation 22 (Court Services Victoria).

  81. Court Services Victoria, Delivering Excellence in Court and Tribunal Administration, Annual Report 2023-24 (Report, October 2024) 11.

  82. Consultation 22 (Court Services Victoria).

  83. Ibid.

  84. ‘Action Committee on Modernizing Court Operations: The Action Committee – Who We Are and What We Do’, Office of the Commissioner for Federal Judicial Affairs Canada (Web Page, 21 February 2025) <https://fja-cmf.gc.ca/COVID-19/index-eng.html#Committee>.

  85. Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 197.

  86. See for example Ministry of Justice (UK), AI Action Plan for Justice (Policy Paper, 31 July 2025) <https://www.gov.uk/government/publications/ai-action-plan-for-justice/ai-action-plan-for-justice>.

  87. Dame Helen Winkelmann, Chief Justice of New Zealand, Chief Justice Launches Digital Strategy for Courts of New Zealand (Media Statement, 29 March 2023) <https://www.courtsofnz.govt.nz/assets/7-Publications/Announcements/20230329-Media-Release-Chief-Justice-launches-Digital-Strategy-for-Courts-of-New-Zealand.pdf>.

  88. Submissions 25 (Court Services Victoria), 26 (Supreme Court of Victoria). See: Department of Premier and Cabinet (Vic), Administrative Guideline – The Safe and Responsible Use of Generative AI in the Victorian Public Sector (No 2024/07, Issue 1.0, November 2024) <https://www.vic.gov.au/sites/default/files/2024-11/Generative-AI-Guideline-%281%29.pdf>; Department of Premier and Cabinet (Vic), Administrative Guideline Direction on the Use of DeepSeek Products, Applications and Web Services (No 2025/1, Issue:1.0, February 2025) <https://www.vic.gov.au/sites/default/files/2025-02/Administrative-Guideline-DeepSeek.pdf>; Department of Government Services, Guidance for the Safe and Responsible Use of Generative AI in the Victorian Public Sector (Report, Victorian Government, 19 March 2025) <https://www.vic.gov.au/guidance-safe-responsible-use-gen-ai-vps>.

  89. Department of Premier and Cabinet (Vic), Administrative Guideline – The Safe and Responsible Use of Generative AI in the Victorian Public Sector (No 2024/07, Issue 1.0, November 2024) 4 <https://www.vic.gov.au/sites/default/files/2024-11/Generative-AI-Guideline-%281%29.pdf>.

  90. Submission 26 (Supreme Court of Victoria).

  91. Digital Transformation Agency (Cth), Policy for the Responsible Use of AI in Government (Version 1.1, 1 September 2024) 8 <https://www.digital.gov.au/policy/ai/policy>.

  92. Digital Transformation Agency (Cth), Australian Government’s AI Technical Standard (Version 1, July 2025) <https://www.digital.gov.au/policy/ai/AI-technical-standard>.

  93. Consultation 22 (Court Services Victoria).

  94. Ibid.

  95. Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.

  96. Scottish Courts and Tribunals Service, Scottish Courts and Tribunals Service: Our Approach to the Development of Services Using Artificial Intelligence (Policy, April 2025) <https://www.scotcourts.gov.uk/media/xalno3ff/scts-ai-policy.pdf>.

  97. Minister of the Presidency, Justice and Relations with the Courts (Spain), Policy on the Use of Artificial Intelligence in the Administration of Justice (Policy, 2024) <https://www.mjusticia.gob.es/es/JusticiaEspana/ProyectosTransformacionJusticia/Documents/Spains_Policy_on_the_use_of_AI_in_the_Justice_Administration.pdf>.

  98. Ibid 3; European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment (2019, adopted at the 31st plenary meeting of the CEPEJ, Strasbourg, 3-4 December 2018).

  99. Arizona Supreme Court Judicial Branch, Arizona Code of Judicial Administration (Code of Practice, 29 January 2025) ‘Section 1-509: Use of Generative Artificial Intelligence Technology and Large Language Models’ <https://www.azcourts.gov/Portals/0/0/admcode/pdfcurrentcode/1-509%20Use%20of%20AI%20Tech%20and%20LLMs%2001_2025.pdf?ver=acMF-P2SER0dArzTQohBjQ%3d%3d>.

  100. Judicial Council of California, Artificial Intelligence Task Force, Judicial Branch Administration: Rule and Standard for Use of Generative Artificial Intelligence in Court-Related Work (Report to the Judicial Council No 25–109, 16 June 2025) <https://jcc.legistar.com/View.ashx?M=F&ID=14303119&GUID=0C94642A-28D3-47C0-8AE9-1E4DE3A96DFC>.

  101. Ibid 2.

  102. State of Connecticut Judicial Branch, Artificial Intelligence Responsible Use Framework (JBAPPM Policy 1013, 1 February 2024) 13–21.

  103. Supreme Court of South Dakota, South Dakota Unified Judicial System Generative Artificial Intelligence Guidance (Guidance, June 2024).

  104. Supreme Court of Illinois, Illinois Supreme Court Policy on Artificial Intelligence (Policy, 1 January 2025).

  105. Ibid 2.

  106. Consultation 21 (Public Record Office Victoria).

  107. Standards Australia, ‘AS ISO/IEC 42001:2023 Information Technology – Artificial Intelligence – Management System’ 21–22 <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-42001-2023>; National Institute of Standards and Technology (NIST), AI RMF Playbook (Report, U.S. Department of Commerce, 2024) 5–6 <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook>.

  108. National Institute of Standards and Technology (NIST), AI RMF Playbook (Report, U.S. Department of Commerce, 2024) 10 <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook>; Standards Australia, ‘AS ISO/IEC 42001:2023 Information Technology – Artificial Intelligence – Management System’ 24–25, 31 <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-42001-2023>; Department of Industry, Science and Resources (Cth), National Artificial Intelligence Centre, and CSIRO, Voluntary AI Safety Standard (Report, August 2024) 31–32 <https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf>.

  109. Lyria Bennett Moses, ‘Stochastic Judges: The Limits of Large Language Models’ (2024) 98(9) Australian Law Journal 640, 645.

  110. Standards Australia, ‘AS ISO/IEC 42001:2023 Information Technology – Artificial Intelligence – Management System’ 22 <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-42001-2023>.

  111. Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) 3 <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.

  112. Office of the Victorian Information Commissioner (OVIC), Use of Enterprise Generative AI Tools in the Victorian Public Sector (Report, March 2025) <https://ovic.vic.gov.au/privacy/resources-for-organisations/use-of-enterprise-generative-ai-tools-in-the-victorian-public-sector/>; Office of the Victorian Information Commissioner (OVIC), Use of Personal Information with Publicly Available Generative AI Tools in the Victorian Public Sector (Report, March 2025) <https://ovic.vic.gov.au/privacy/resources-for-organisations/use-of-personal-information-with-publicly-available-generative-ai-tools-in-the-victorian-public-sector/>.

  113. Office of the Victorian Information Commissioner (OVIC), Privacy by Design (Guidance No D21/24515, January 2022) <https://ovic.vic.gov.au/privacy/resources-for-organisations/privacy-by-design/>.

  114. Group of Seven (G7), Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI System (Guidance, 30 October 2023) 2 <https://www.soumu.go.jp/hiroshimaaiprocess/en/documents.html>.

  115. Department of Industry, Science and Resources (Cth), National Artificial Intelligence Centre, and CSIRO, Voluntary AI Safety Standard (Report, August 2024) 23 <https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf>.

  116. Conference of State Court Administrators (COSCA), Generative AI & the Future of the Courts: Responsibilities and Possibilities (Policy Paper, National Center for State Courts, August 2024) 6 <https://www.ncsc.org/resources-courts/generative-ai-future-courts>.

  117. Submission 26 (Supreme Court of Victoria).

  118. Submission 27 (Federation of Community Legal Centres and Justice Connect).

  119. Supreme Court of South Dakota, South Dakota Unified Judicial System Generative Artificial Intelligence Guidance (Guidance, June 2024) 2.

  120. Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 7 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.

  121. Delaware Courts, Judicial Branch, Interim Policy on the Use of GenAI by Judicial Officers and Court Personnel (Interim Policy, 22 October 2024) 1 <https://www.courts.delaware.gov/forms/download.aspx?id=266838>.

  122. Toledo Municipal Court, Toledo Municipal Court AI Policy (Policy, 18 December 2024) 1.

  123. Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 6 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.

  124. State of Connecticut Judicial Branch, Artificial Intelligence Responsible Use Framework (JBAPPM Policy 1013, 1 February 2024) 6.

  125. Ibid.

  126. Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 8 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.

  127. Toledo Municipal Court, Toledo Municipal Court AI Policy (Policy, 18 December 2024) 1.

  128. Delaware Courts, Judicial Branch, Interim Policy on the Use of GenAI by Judicial Officers and Court Personnel (Interim Policy, 22 October 2024) 2 <https://www.courts.delaware.gov/forms/download.aspx?id=266838>.

  129. State of Connecticut Judicial Branch, Artificial Intelligence Responsible Use Framework (JBAPPM Policy 1013, 1 February 2024) 9.

  130. Supreme Court of South Dakota, South Dakota Unified Judicial System Generative Artificial Intelligence Guidance (Guidance, June 2024) 2.

  131. Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) 3 <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.

  132. Ibid.

  133. State of Connecticut Judicial Branch, Artificial Intelligence Responsible Use Framework (JBAPPM Policy 1013, 1 February 2024) 6.

  134. Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) 3 <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.

  135. Conference of State Court Administrators (COSCA), Generative AI & the Future of the Courts: Responsibilities and Possibilities (Policy Paper, National Center for State Courts, August 2024) 6 <https://www.ncsc.org/resources-courts/generative-ai-future-courts>.

  136. Submission 12 (Victoria Legal Aid).

  137. Submission 18 (Northern Community Legal Centre).

  138. Submission 16 (Law Institute Victoria).

  139. Submission 27 (Federation of Community Legal Centres and Justice Connect).

  140. Submission 22 (Centre for the Future of the Legal Profession and UNSW Law and Justice).

  141. Consultation 32 (Supreme Court of Victoria). See also Submission 26 (Supreme Court of Victoria).

  142. Consultation 2 (Coroners Court of Victoria).

  143. Submission 17 (Office of Public Prosecutions).

  144. State of Connecticut Judicial Branch, Artificial Intelligence Responsible Use Framework (JBAPPM Policy 1013, 1 February 2024) 15.

  145. Judicial Council of California, Artificial Intelligence Task Force, Judicial Branch Administration: Rule and Standard for Use of Generative Artificial Intelligence in Court-Related Work (Report to the Judicial Council No 25–109, 16 June 2025) 8 <https://jcc.legistar.com/View.ashx?M=F&ID=14303119&GUID=0C94642A-28D3-47C0-8AE9-1E4DE3A96DFC>.

  146. Ibid 9.

  147. Nicole Gillespie et al, Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025 (Report, The University of Melbourne and KPMG International, 2025) 28 <https://doi.org/10.26188/28822919>.

  148. Ibid 32.

  149. Nicholas Davis et al, Artificial Intelligence: Governance and Leadership (White Paper, Australian Human Rights Commission and World Economic Forum, 2019) 10.

  150. Gabrielle Appleby, ‘Introduction to the Special Issue on the Judiciary’ (2023) 97(9) Australian Law Journal 600, 603.

  151. Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 21 <https://docs.un.org/en/A/80/169>.

  152. Nicole Gillespie et al, Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025 (Report, The University of Melbourne and KPMG International, 2025) 59 <https://doi.org/10.26188/28822919>.

  153. Consultation 21 (Public Record Office Victoria).

  154. Submission 22 (Centre for the Future of the Legal Profession and UNSW Law and Justice).

  155. Ibid.

  156. Consultation 12 (County Court of Victoria).

  157. ‘Technology at the Court’, Coroners Court of Victoria (Web Page) <https://www.coronerscourt.vic.gov.au/technology-court>.

  158. Consultation 21 (Public Record Office Victoria).

  159. State of Connecticut Judicial Branch, Artificial Intelligence Responsible Use Framework (JBAPPM Policy 1013, 1 February 2024) 10, 15.

  160. Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) 4 <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.

  161. Submission 15 (Human Rights Law Centre).

  162. Consultation 8 (Federation of Community Legal Centres Workshop).

  163. Submission 5 (Office of the Victorian Information Commissioner).

  164. Submission 18 (Northern Community Legal Centre).

  165. Submission 24 (County Court of Victoria).

  166. Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 21 <https://docs.un.org/en/A/80/169>.

  167. Submission 24 (County Court of Victoria).

  168. Submission 26 (Supreme Court of Victoria).

  169. Consultation 7 (Judicial College of Victoria).

  170. Ibid.

  171. Consultation 31 (Victorian Equal Opportunity & Human Rights Commission).

  172. Submission 10 (Castan Centre for Human Rights Law, Monash University).

  173. Consultation 2 (Coroners Court of Victoria).

  174. Consultation 35 (Victoria Legal Aid).

  175. For example, the Arizona Supreme Court requires that, ‘All public facing Generative AI tools must be thoroughly tested before being deployed’ Arizona Supreme Court Judicial Branch, Arizona Code of Judicial Administration (Code of Practice, 29 January 2025) ’Section 1-509: Use of Generative Artificial Intelligence Technology and Large Language Models’ 4 [H] <https://www.azcourts.gov/Portals/0/0/admcode/pdfcurrentcode/1-509%20Use%20of%20AI%20Tech%20and%20LLMs%2001_2025.pdf?ver=acMF-P2SER0dArzTQohBjQ%3d%3d>.

  176. Submission 18 (Northern Community Legal Centre).

  177. Consultation 24 (Victorian Advocacy League for Individuals with Disability).

  178. Submission 27 (Federation of Community Legal Centres and Justice Connect).

  179. Australian Government et al, National Framework for the Assurance of Artificial Intelligence in Government: A Joint Approach to Safe and Responsible AI by the Australian, State and Territory Governments (Report, 21 June 2024) 21.

  180. Consultation 28 (Monash University Digital Law Group).

  181. Alistair Reid, Simon O’Callaghan and Yaya Lu, Implementing Australia’s AI Ethics Principles: A Selection of Responsible AI Practices and Resources (Report, Gradient Institute and CSIRO, June 2023) 36.

  182. United Nations Educational, Scientific and Cultural Organization (UNESCO), Draft Guidelines for the Use of AI Systems in Courts and Tribunals (Guidelines, May 2025) 15 <https://unesdoc.unesco.org/ark:/48223/pf0000393682>.

  183. Sophie Farthing et al, Human Rights and Technology (Final Report, Australian Human Rights Commission, 2021) 62, 194 <https://humanrights.gov.au/our-work/technology-and-human-rights/projects/final-report-human-rights-and-technology>.

  184. Submission 22 (Centre for the Future of the Legal Profession and UNSW Law and Justice).

  185. Submission 4 (Coroners Court of Victoria).

  186. Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) 3 <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.

  187. Sophie Farthing et al, Human Rights and Technology (Final Report, Australian Human Rights Commission, 2021) 102 <https://humanrights.gov.au/our-work/technology-and-human-rights/projects/final-report-human-rights-and-technology>.

  188. Digital Transformation Agency (Cth), Pilot AI Assurance Framework Guidance (Web Page, October 2024) <https://www.digital.gov.au/policy/ai/pilot-ai-assurance-framework/guidance>; Digital NSW, NSW Artificial Intelligence Assurance Framework (Updated) (Guidance, 2024) <https://www.digital.nsw.gov.au/policy/artificial-intelligence/nsw-artificial-intelligence-assessment-framework>; Queensland Government, Foundational Artificial Intelligence Risk Assessment Framework (Guidance, September 2024) <https://www.forgov.qld.gov.au/information-technology/queensland-government-enterprise-architecture-qgea/qgea-directions-and-guidance/qgea-policies-standards-and-guidelines/faira-framework>.

  189. ‘Responsible AI Pattern Catalogue’, CSIRO Data 61 Software Systems (Web Page, 2024) <https://research.csiro.au/ss/science/projects/responsible-ai-pattern-catalogue/>.

  190. Consultation 25 (Microsoft); See Microsoft, Microsoft Responsible AI Standard v2 General Requirements (Standard, June 2022) 4.

  191. Provided as supplementary information (August 2025) to Consultation 6 (Office of Public Prosecutions).

  192. Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 19 <https://docs.un.org/en/A/80/169>.

  193. Sophia Adams Bhatti, AI in Our Justice System (Report, JUSTICE, January 2025) 49 <https://justice.org.uk/ai-in-our-justice-system/> Annex 1.

  194. European Commission for the Efficiency of Justice (CEPEJ), Assessment Tool for the Operationalisation of the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment (CEPEJ(2023)16final, Council of Europe, 4 December 2023).

  195. State of Connecticut Judicial Branch, Artificial Intelligence Responsible Use Framework (JBAPPM Policy 1013, 1 February 2024).

  196. Australia New Zealand Policing Advisory Agency (ANZPAA), Australia New Zealand Responsible and Ethical Artificial Intelligence Framework (Report, 22 July 2025) <https://www.anzpaa.org.au/products/products/australia-new-zealand-responsible-and-ethical-artificial-intelligence-framework>.

  197. Submission 4 (Coroners Court of Victoria).

  198. Submission 5 (Office of the Victorian Information Commissioner).

  199. Consultation 7 (Judicial College of Victoria).

  200. Australian Government et al, National Framework for the Assurance of Artificial Intelligence in Government: A Joint Approach to Safe and Responsible AI by the Australian, State and Territory Governments (Report, 21 June 2024).

  201. Consultation 22 (Court Services Victoria).

  202. Consultation 9 (Victorian Civil and Administrative Tribunal).

  203. Consultation 15 (Magistrates’ Court of Victoria).

  204. ‘Australia’s AI Ethics Principles’, Department of Industry, Science and Resources (Web Page, 11 October 2024) <https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles>; as highlighted by Consultation 22 (Court Services Victoria).

  205. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 15–17.

  206. Consultation 9 (Victorian Civil and Administrative Tribunal).

  207. Victorian Equal Opportunity & Human Rights Commission, The Charter of Human Rights and Responsibilities: A Guide for Victorian Public Sector Workers (Report, 3rd ed, January 2024) 14 <https://www.humanrights.vic.gov.au/resources/https-resources-charter-guide-for-vps-2024/>.

  208. For an explanation of privacy by design and implementation examples, see Office of the Victorian Information Commissioner (OVIC), Privacy by Design (Guidance No D21/24515, January 2022) <https://ovic.vic.gov.au/privacy/resources-for-organisations/privacy-by-design/>; for an example of security by design, see ‘Essential Eight Explained’, Australian Signals Directorate (Web Page, 27 November 2023) <https://www.cyber.gov.au/resources-business-and-government/essential-cybersecurity/essential-eight/essential-eight-explained>; for international examples, see also International Organization for Standardization (ISO), ‘ISO/IEC 27001:2022 Information Security, Cybersecurity and Privacy Protection — Information Security Management Systems — Requirements’ <https://www.iso.org/standard/27001>; National Institute of Standards and Technology (NIST), The NIST Cybersecurity Framework (CSF) 2.0 (NIST CSWP 29, U.S. Department of Commerce, 26 February 2024) <https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf>.

  209. Office of the Victorian Information Commissioner (OVIC), Privacy Impact Assessment Guide: Guide for Completing OVIC’s Template (Guide No D20/6442, April 2021) <https://ovic.vic.gov.au/privacy/resources-for-organisations/privacy-impact-assessment/>.

  210. Consultation 21 (Public Record Office Victoria). See also Public Record Office Victoria, Recordkeeping Policy: Artificial Intelligence Technologies and Recordkeeping (Policy, 29 February 2024) <https://prov.vic.gov.au/sites/default/files/files/documents/ai_tech_and_recordkeeping_policy_v1_2024.pdf>.

  211. Consultations 27 (UNSW’s Centre for the Future of the Legal Profession and Professor Lyria Bennett Moses) and 34 (Human Technology Institute); Submission 10 (Castan Centre for Human Rights Law, Monash University).

  212. Consultation 6 (Office of Public Prosecutions).

  213. Kalliopi Terzidou, ‘The Use of Artificial Intelligence in the Judiciary and Its Compliance with the Right to a Fair Trial’ (2022) 31(3) Journal of Judicial Administration 154, 160–61 <https://search.informit.org/doi/10.3316/agispt.20220401064756>.

  214. Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 20 <https://docs.un.org/en/A/80/169>.

  215. Department of Industry, Science and Resources (Cth), National Artificial Intelligence Centre, and CSIRO, Voluntary AI Safety Standard (Report, August 2024) 33 <https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf>.

  216. National Institute of Standards and Technology (NIST), AI RMF Playbook (Report, U.S. Department of Commerce, 2024) 32 <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook>.

  217. Conference of State Court Administrators (COSCA), Generative AI & the Future of the Courts: Responsibilities and Possibilities (Policy Paper, National Center for State Courts, August 2024) 7 <https://www.ncsc.org/resources-courts/generative-ai-future-courts>.

  218. Submission 15 (Human Rights Law Centre).

