8. Judicial officer guidelines to support the safe use of AI
Overview
• Guidelines can be used to support judicial officers to safely use AI in Victoria’s courts and VCAT.
• In Victoria, there is no dedicated court-issued guidance for judicial officers on the use of AI.
• Approaches taken in comparable jurisdictions suggest there is value in developing court-issued AI guidelines for judicial officers.
• Judicial officer guidelines should be developed in Victoria to promote the safe use of AI and help maintain trust and confidence in the courts and VCAT.
• Guidelines for Victorian judicial officers should prohibit the use of AI for judicial decision-making. This prohibition should not encompass supportive uses of AI.
AI guidelines for Victorian judicial officers
8.1 Our terms of reference ask us to consider how to guide the safe use of AI in Victoria’s courts and VCAT.
8.2 As discussed in Chapter 4, the development of guidelines can provide a flexible way to ensure the safe use of AI. In Chapter 7 we discussed how guidelines can support the safe use of AI by court users. This chapter focuses on how AI guidelines can be used to support judicial officers.
8.3 There is limited information and direction for judicial officers in Victoria on the use of AI.
8.4 The Supreme Court released AI guidelines for litigants, which contain a brief statement about the use of AI by judicial officers.[1] As outlined in Chapter 7, the County Court adopted these guidelines.[2] However, they have not been adopted by other Victorian courts at this stage. VCAT is developing its own guidelines for tribunal members; this work is still underway.[3]
8.5 The Supreme Court guidelines state:
AI is not presently used for decision making nor used to develop or prepare reasons for decision because it does not engage in a reasoning process nor a process specific to the circumstance before the court.[4]
8.6 The guidelines also refer to the Australasian Institute of Judicial Administration’s AI Decision Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators, which is discussed below from paragraph [8.23].[5] This guide highlights that AI use should be consistent with core judicial values.
8.7 While Victoria’s courts and VCAT are at an early stage of considering AI, there are examples of AI use across courts and tribunals, including access to publicly available AI tools (see Chapter 3).
8.8 Developing guidelines for judicial officers may assist them to understand the risks and limitations of AI when used by court users. Guidelines would also help judicial officers understand how they can use AI in their own work in a way that aligns with core judicial values. These guidelines could then be reinforced through dedicated judicial training, as discussed in Chapter 10.
Stakeholder views on AI guidelines for judicial officers
8.9 We heard mixed views about the need for AI guidelines for judicial officers. The range of views from our consultations is shown in Table 10.
Table 10: Stakeholder views on the need for judicial guidelines on AI
| Position | Stakeholder views |
|---|---|
| Support for judicial officer guidelines | VCAT: VCAT is developing guidance for tribunal members in the form of a presidential direction.[6] Human Rights Law Centre: ‘Clear, precise and accessible guidelines must be produced for the ethical and responsible use of AI systems in Victorian courts, tailored to the judiciary’s specific needs and aligned with the Charter. These guidelines should be publicly available.’[7] Law Institute of Victoria: Guidelines should be developed for ‘Victorian courts and tribunals, including administrative staff, the judiciary and tribunal members, relating to the use of AI’.[8] Centre for the Future of the Legal Profession and UNSW Law and Justice: ‘where courts are issuing guidelines or rules for litigants and lawyers concerning the use of AI, adapted guidelines should also apply to judicial officers.’[9] |
| Judicial officer guidelines may be unnecessary | Supreme Court: ‘The Court has not issued any guidelines to judicial officers on the use of AI. There are currently no plans to issue any such guidelines … However, it is recognised that a cautious approach is required to the use of AI in relation to judicial functions. The Court anticipates that many of the issues will be dealt with on a case-by-case basis by reference to procedural fairness and other fundamental common law principles.’[10] Magistrates’ Court: ‘agrees with the sentiments raised by the Supreme Court’.[11] |
8.10 Stakeholders supported the development of guidelines for judicial officers on the use of AI to enable transparency and to maintain public trust in the justice system. The UN Special Rapporteur on the independence of judges and lawyers stated that judges are concerned that AI:
could undermine public trust in justice systems by introducing errors, hallucinations and biases, by exposing or monetizing private data, or by subverting the right to a trial by a human judge.[12]
8.11 To respond to these concerns, many jurisdictions have released publicly available judicial guidelines on the use of AI.[13] The Courts and Tribunals Judiciary in England and Wales explained that it chose to publish judicial AI guidelines online to ‘promote transparency, open justice and public confidence’.[14]
8.12 Developing publicly available guidelines for judicial officers on AI will help to build and maintain public confidence. Clearly communicating the circumstances in which AI may be used by judicial officers will help to promote public trust in the use of AI in courts. This is important because there is currently a high level of distrust toward AI systems in Australia (discussed further in Chapter 9).[15]
8.13 Guidelines can also serve an educative role by raising awareness among judicial officers about the risks and limitations of AI use in courts and tribunals. Professor Tania Sourdin argues that it is critical for judges to acquire knowledge and understanding about AI and to consider the implications of its use in the justice system.[16] The Canadian Judicial Council emphasises that:
The adoption of AI cannot be a passive or reactive process. Some forms of AI are already embedded in everyday judicial applications for tasks such as translation, grammar checking, speech recognition and legal research. As generative AI becomes more prevalent, it becomes imperative that judges appreciate the implications, limitations, evolving risks, and mitigation strategies associated with its use.[17]
8.14 Greater awareness and education for judicial officers is important to ensure the safe use of AI in Victoria’s courts and VCAT. We discuss opportunities for judicial training and education in Chapter 10.
8.15 In NSW, the Chief Justice of the Supreme Court released judicial guidelines that apply to all judges in NSW (discussed at paragraph [8.20]).[18] Based on these guidelines, the President of NSW’s Civil and Administrative Tribunal released guidelines for tribunal members.[19]
8.16 As discussed in Chapter 7, there is a strong desire among stakeholders for AI guidelines to be consistent across Victoria’s courts and VCAT.
8.17 We recommend that the Chief Justice of the Supreme Court, in consultation with Victorian heads of jurisdiction, develop publicly available judicial AI guidelines that will apply to all judicial officers in Victoria. The President of VCAT should also release aligned guidelines for VCAT members.
8.18 This chapter goes on to discuss some useful elements that could be contained in guidelines for Victorian judicial officers.
Recommendation 12: The Chief Justice of Victoria, in consultation with Victorian Heads of Jurisdiction, should consider developing public guidelines for all judicial officers in Victoria to support public trust in the administration of justice and inform the safe use and understanding of AI. Based on these, the President of VCAT should release guidelines for tribunal members.
Current interjurisdictional AI guidance for judicial officers
8.19 Our terms of reference ask us to consider how AI is regulated in comparable jurisdictions to identify potential learnings for Victoria. In the following section, we discuss features of AI guidance that has been issued to judicial officers in other Australian jurisdictions as well as internationally.
Interstate court-issued AI guidelines for judicial officers
8.20 In November 2024, the Chief Justice of NSW released Guidelines for New South Wales Judges in Respect of Use of Generative AI.[20] The judicial guidelines apply to all courts in NSW. In summary, they:
• prohibit judges from using GenAI in the formulation of reasons for judgment or in the assessment or analysis of evidence preparatory to the delivery of reasons for judgment
• prohibit the use of GenAI for editing or proofing judgments
• list the limitations of GenAI if used for secondary legal research or any other purpose
• direct that any GenAI research should be verified
• direct that judges should require associates, tipstaff or researchers to disclose when they are using GenAI
• indicate judges may require court users to disclose any use of GenAI and require assurances about that use and compliance with the Practice Note SC Gen 23 (see Chapter 7).[21]
8.21 The NSW Civil and Administrative Tribunal also released Guidelines for NCAT Members in respect of use of Generative Artificial Intelligence (Gen AI), based on the guidelines issued by the Chief Justice of NSW.[22]
8.22 Guidelines to judicial officers on the use of GenAI have also been released by Queensland Courts.[23]
Other guidance for Australian judicial officers
8.23 The Australasian Institute of Judicial Administration released AI Decision Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators in 2022, and updated it in December 2023.[24] The guide was developed to support judges, tribunal members and court administrators in the Asia-Pacific region when considering the use of AI in the courtroom.
8.24 It is the first guide to consider in detail the challenges and opportunities created by AI in Australian courts and tribunals. It surveys legislation, case law and policies from around the world as at the time of publication.
8.25 The guide considers how AI tools can impact core judicial values.[25] It covers AI concepts, uses, benefits and risks. The guide also contains a list of questions designed to prompt courts and tribunals to consider how the use of AI may interact with core judicial values.
8.26 In addition, the Australasian Institute of Judicial Administration produced the Guide to Judicial Conduct, which has been adopted by the Council of Chief Justices of Australia and New Zealand.[26] It has also been adopted by the Judicial Commission of Victoria.[27]
8.27 The Guide to Judicial Conduct is not focused on AI. However, it aims to provide ‘principled and practical guidance to judges’.[28] The guide is not a binding code with prescriptive rules, but a publicly available document which sets out principles and standards of conduct appropriate to the judicial office.[29]
8.28 A review of the Guide to Judicial Conduct is underway, led by former High Court judge the Honourable Virginia Bell AC, who is supported by a committee and working group.[30]
8.29 We understand that the revised Guide to Judicial Conduct will provide advice relating to the use of GenAI by judges and their staff. This review presents a timely opportunity to raise awareness among judges and their staff about the capabilities and limitations of GenAI tools. The revised guide also provides an opportunity to set consistent expectations for Australian judges and their staff who choose to use GenAI. Given the rapidly evolving nature of GenAI, it will be important for the guide to be regularly reviewed and updated to ensure the currency of its advice.
International guidelines on the use of AI by judicial officers
8.30 There are different international approaches to judicial guidelines on the use of AI. Guidelines have been issued by courts as well as by professional bodies. Assessing international judicial guidance can help to identify how best to support the development of guidelines for judicial officers in Victoria.
8.31 Examples of international AI guidelines for judicial officers include those issued in:
• Brazil[31]
• Canada[32]
• Colombia[33]
• England and Wales[34]
• Hong Kong[35]
• India (Kerala)[36]
• New Zealand[37]
• South Korea[38]
• the Vatican[39]
• the United States (US) (state-based guidelines, including for Arizona, California, Delaware, Illinois, Nevada, Maryland, South Carolina, Utah, Virginia and Washington).[40]
8.32 For further information on the guidelines see Appendix C.
8.33 Generally, judicial guidelines include information about what AI is and the limitations and risks associated with its use. Most guidelines permit judicial uses of AI subject to various limitations. While permitted uses vary across jurisdictions, common themes include:
• prohibiting the delegation of judicial decision-making to AI
• direction as to whether judicial officers are required to disclose their use of AI
• restrictions on inputting sensitive or personal information into public GenAI tools.
No delegation of judicial decision-making to AI
8.34 International judicial guidelines commonly include a prohibition against the delegation of judicial decision-making to AI. The way this prohibition is phrased varies but generally focuses on ensuring any use of AI does not interfere with judicial independence.
8.35 Several guidelines make it clear that judicial officers remain accountable and responsible for their decisions irrespective of the technology used. Table 11 contains extracts from international judicial guidelines that prohibit the use of AI for judicial decision-making.
Table 11: International AI judicial guidelines that prevent delegation of judicial decision-making to AI
| Jurisdiction | Example of delegation prohibition |
|---|---|
| Canada | ‘It must be unequivocally understood that no judge is permitted to delegate decision-making authority, whether to a law clerk, administrative assistant, or computer program, regardless of their capabilities.’[41] |
| Colombia | AI must not replace human rationality or decision-making in judicial processes. Judicial decision-making remains the responsibility of the judicial officials.[42] |
| Brazil | ‘Requirement for human supervision over all judicial decisions using artificial intelligence. Judges will be able to use the systems as support, but the final decision will remain their responsibility.’[43] |
| Hong Kong | Judicial officers ‘should ensure that all judicial decisions continue to be independently and personally made by themselves, and should not under any circumstances allow generative AI to take over performance of their judicial functions. In other words, the Court must ensure that any use of generative AI does not usurp or encroach upon its judicial functions but merely supports and facilitates their performance.’[44] |
| India (Kerala) | ‘The policy aims to ensure that under no circumstances AI tools are used as a substitute for decision making or legal reasoning … AI tools shall not be used to arrive at any findings, reliefs, order or judgement under any circumstances, as the responsibility for the content and integrity of the judicial order, judgement or any part thereof lies fully with the judges.’[45] |
| United States | Delaware: Judicial officers ‘may not delegate their decision-making function to approved GenAI’.[46] Illinois: ‘Judges remain ultimately responsible for their decisions, irrespective of technological advancements.’[47] South Carolina: Judicial officers ‘may not use Generative AI to draft memoranda, orders, opinions, or other documents without direct human oversight and approval. Generative AI tools are intended to provide assistance and are not a substitute for judicial, legal, or other professional expertise’.[48] |
| Vatican | ‘The decision on the interpretation of the law, the assessment of facts and evidence is reserved exclusively to the magistrate and on the adoption of every measure.’[49] |
Disclosure of judicial uses of AI
8.36 Disclosure is a common element of international judicial guidelines.
8.37 Some jurisdictions require judicial officers to disclose their use of AI. Others note that disclosure is not required where the use of AI by judicial officers is limited to a supportive capacity.
8.38 Table 12 compares international judicial guidelines based on whether or not they require disclosure of AI use by judicial officers.
Table 12: Disclosure obligations in international AI judicial guidelines
| Position on disclosure | Jurisdiction |
|---|---|
| Disclosure is required | Canada: ‘The Court will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultation. For greater certainty, this includes the Court’s determination of the issues raised by the parties, as reflected in its Reasons for Judgment.’[50] |
| | Colombia: ‘court staff and judicial operators must clearly disclose if, how and which AI tools were used in judicial decisions to uphold transparency and integrity.’[51] |
| | The Vatican: If content is AI-generated, then this must be disclosed.[52] |
| Disclosure should be considered | California (US): ‘A judicial officer using generative AI for any task within their adjudicative role … should consider whether to disclose the use of generative AI if it is used to create content provided to the public.’[53] |
| Disclosure is not required | England and Wales: ‘Judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment. Provided these guidelines are appropriately followed, there is no reason why generative AI could not be a potentially useful secondary tool.’[54] |
| | New Zealand: Judicial officers ‘do not need to disclose use of a GenAI chatbot.’[55] |
| | United States (Illinois): ‘Disclosure of AI use should not be required in a pleading.’[56] |
Confidentiality and privacy obligations
8.39 Confidentiality and privacy obligations are a key feature of international judicial guidelines on the use of AI. Most guidelines warn that judicial officers need to be aware of, and alert to, the security and privacy risks associated with using GenAI.[57]
8.40 Guidelines also caution that confidentiality and privacy risks are heightened when public AI tools are used. The Canadian Judicial Council warns that ‘uploading a draft judgment, or any sensitive or personal information to a free AI editing or translating website brings with it serious privacy implications’.[58] For discussion of the different risks associated with public and closed AI tools, see Chapter 3.
8.41 To ensure the privacy and confidentiality of information is maintained, several guidelines prohibit judicial officers from entering information that is not already public into a public AI tool. The definition of public AI varies across guidelines. Table 13 provides examples of the different limitations placed on the use of public AI tools.
Table 13: Confidentiality and privacy obligations in international AI judicial guidelines
| Jurisdiction | Prohibition on inserting non-public information into public AI tools |
|---|---|
| England and Wales | ‘Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential.’[59] |
| Hong Kong | Judicial officers ‘should not enter any information which is private, confidential or sensitive into open or public generative AI chatbots. Make sure that your input is adequately generalised and anonymised’.[60] |
| New Zealand | ‘Generally, you should not enter any information into an AI chatbot that is not already in the public domain. Do not enter any information that is private, confidential, suppressed or legally privileged information.’[61] |
| United States (California) | ‘A judicial officer using generative AI for any task within their adjudicative role … should not enter confidential, personal identifying, or other non-public information into a public generative AI system.’[62] |
International guidance in development
8.42 International bodies are also developing guidelines for the use of AI in courts and tribunals by judicial officers.
8.43 In 2024, UNESCO released the draft Guidelines for the Use of AI Systems in Courts and Tribunals for public consultation. These guidelines were revised in May 2025.[63] They include principles and directions for courts and tribunals and outline specific guidance for individual members of the judiciary. They include a focus on transparency, directing judicial members to:
Disclose the use of generative AI systems for drafting text – rulings, opinions, and other documents that may have legal consequences – or when it is explicitly used in court hearings. For that purpose, distinguish the text produced by the AI chatbot used in a decision by employing quotation marks and a citation system.[64]
8.44 Other directions to the judiciary in the draft UNESCO guidelines include:
• protecting personal and confidential data by not inputting such information into external GenAI tools
• verifying outputs before using them
• preventing potential infringements of copyright and intellectual property rights associated with the use of content produced by GenAI
• taking responsibility for any outputs produced by GenAI
• providing parties with an opportunity to challenge and contest decisions taken with or supported by AI systems.[65]
8.45 UNESCO’s near-universal membership gives its work on judicial guidelines global weight and is likely to shape state practice around the world. Australia is a member of UNESCO and, once finalised, the guidelines may inform how Australian jurisdictions adopt AI guidelines for judicial members.
8.46 In July 2025, the UN Special Rapporteur on the independence of judges and lawyers recommended that ‘Judiciaries should develop and adopt guidelines on the use of AI, having regard to international guidance such as that developed by UNESCO, and States should make resources available to the judiciary for that purpose’.[66]
What should guidelines for judicial officers in Victoria contain?
8.47 Based on stakeholder feedback and international examples, we have identified some useful elements that could be contained in guidelines for Victorian judicial officers.
8.48 Useful elements of AI guidelines for judicial officers include:
a) a definition of AI and its subcategories
b) principles and educative information
c) a prohibition on the use of AI by judicial officers for judicial decision-making.
Guidelines for judicial officers to contain definitions
8.49 Guidelines for judicial officers should contain a clear definition of AI. For consistency, this should align with the OECD definition recommended for court user guidelines in Chapter 7.
8.50 Similarly, the definitions of GenAI, public AI and closed AI as described in Chapter 7 should also be incorporated. Guidelines for judicial officers should also set out the risks associated with the different types of AI. We heard that there is concern from judges in Victoria ‘about privacy and reputational risks of judicial officers when the AI searches and prompts they use become known’.[67] To address privacy concerns, guidelines should state that judicial officers should not enter any information that is not already public information into public AI tools. Examples of how this could be framed are listed in Table 14.
Guidelines for judicial officers to contain principles and educative information
8.51 The Commission’s principles should be incorporated into guidelines for judicial officers (the principles are explained in Chapter 6).
8.52 As discussed in Chapter 7, the principles could be given effect by being incorporated into guidelines. This could take the form of educative statements in guidelines to increase judicial officers’ understanding of the risks and limitations of AI.
8.53 Examples of educative statements from international guidelines that can be used to help judicial officers implement the principles are provided in Table 14.
Table 14: Examples of principle-based guidelines for judicial officers
| Principle | Guidance for judicial officers |
|---|---|
| Impartiality and fairness | • ‘Have regard to ethical issues – particularly biases and the need to address them. GenAI chatbots generate responses based on the dataset they are trained on (which is generally information from the internet). Information generated by a Gen AI chatbot will reflect any biases or misinformation in its training data.’[68] • ‘be vigilant against AI technologies that jeopardize due process, equal protection, or access to justice. Unsubstantiated or deliberately misleading AI generated content that perpetuates bias, prejudices litigants, or obscures truth-finding and decision-making will not be tolerated.’[69] |
| Accountability and independence | • ‘Before using any AI tools, ensure you have a basic understanding of their capabilities and potential limitations.’[70] • ‘Information provided by AI tools may be inaccurate, incomplete, misleading or out of date.’[71] • ‘The accuracy of any information you have been provided by an AI tool must be checked before it is used or relied upon.’[72] • ‘Judicial office holders are personally responsible for material which is produced in their name.’[73] • ‘Placing too much reliance on any proprietary AI (whether commercial or publicly funded) could compromise judicial independence.’[74] • See also discussion on the prohibition on delegating judicial decision-making from paragraph [8.54]. |
| Transparency and open justice | • See discussion on disclosure from paragraph [8.114]. |
| Contestability and procedural fairness | • ‘Be aware that court/tribunal users may have used AI tools.’[75] • ‘If it appears an AI chatbot may have been used to prepare submissions or other documents, it is appropriate to inquire about this, ask what checks for accuracy have been undertaken (if any), and inform the litigant [lawyers and self-represented litigants] that they are responsible for what they put to the court/tribunal.’[76] • ‘Fabricated evidence could be submitted as authentic evidence or authentic evidence could be challenged as fabricated evidence.’[77] ‘Judicial officers must remain cautious and aware of emerging technologies, as fake evidence is fairly easy to create.’[78] • Judicial officers should be aware of, and where appropriate consider using, available powers in relation to expert evidence. In Chapter 5 we discuss a range of express powers judicial officers can use to assess expert evidence in relation to AI. Experts should disclose use of AI in accordance with the Practice Note Expert Evidence in Criminal Trials[79] and the updated Expert Witness Code of Conduct.[80] |
| Privacy and data security | • ‘Some generative AI chatbots retain the information you input and use it to respond to queries from other users. Unless you are using closed-end generative AI, it should be assumed anything you input can become publicly known.’ Judicial officers ‘should not enter any information which is private, confidential or sensitive into open or public generative AI chatbots. Make sure that your input is adequately generalised and anonymised. Disable the chat history function in the chatbots if this option is available.’[81] • ‘Any information that you input into a public AI chatbot should be seen as being published to all the world. The current publicly available AI chatbots remember every question that you ask them, as well as any other information you put into them. That information is then available to be used to respond to queries from other users. As a result, anything you type into it could become publicly known.’[82] • ‘Individual judges must also recognize and endeavour to prevent the security and privacy risks associated with using generative AI.’[83] • Judicial officers ‘should avoid using generative AI in any way which may infringe copyright and contravene intellectual property law. For instance, uploading any published materials covered by intellectual property to a generative AI chatbot to obtain a summary or analysis could breach the author’s copyright. Copyright issues may also arise from outputs that are extracted from an original work. It is the user’s responsibility to ensure compliance with copyright and other intellectual property laws when using generative AI.’[84] • ‘In the event of any suspected breach of information security or privacy following the use of generative AI for judicial or administrative duties, the JJO [judicial officer] concerned should report the incident to his/her Court Leader as soon as possible.’[85] |
| Access to justice | • Be aware that AI use can ‘enhance access to justice and court users’ participation in court processes in a variety of ways’.[86] • Be aware that ‘Some persons may lack the technology to access AI or the knowledge to use it effectively.’[87] |
| Efficiency and effectiveness | • ‘the Court recognizes that AI can improve the efficiency and fairness of the legal system. For instance, it can assist with tasks such as analyzing large amounts of raw data, aiding in legal research, and performing administrative tasks. This can save time and reduce workload for judges and Court staff, just as it can for lawyers.’[88] • ‘AI will not be the appropriate solution to every problem and should not be used simply because it is new, exciting, or available. Possible use of AI should be founded on identifying the problem and assessing possible solutions – including other technologies or non-technological approaches, rather than simply integrating AI into ineffective processes.’[89] |
| Human oversight and monitoring | • ‘“Human in the loop”: The Court will ensure that members of the Court and their law clerks are aware of the need to verify the results of any AI-generated outputs that they may be inclined to use in their work.’[90] |
Guidelines to prohibit the use of AI for judicial decision-making
8.54 There was universal support among stakeholders for a prohibition on the use of AI by judicial officers for judicial decision-making.
8.55 Several jurisdictions prohibit the use of AI by judicial officers for decision-making, including NSW,[91] Canada,[92] Delaware,[93] and Hong Kong.[94] A United Nations Special Rapporteur noted that in some areas, such as judicial decision-making, the use of AI solutions ‘should not be countenanced for final decisions, but only as part of decision-support in certain areas’.[95]
8.56 Constraints or prohibitions on the use of AI for judicial decision-making have been emphasised in several international frameworks. The EU AI Act classifies AI tools used in the administration of justice as ‘high risk’, stating:
AI tools can support the decision-making power of judges or judicial independence, but should not replace it: the final decision-making must remain a human-driven activity.[96]
8.57 In explaining its current AI guidelines, the Supreme Court told us that:
AI is not currently used for decision-making. Nor is it used to develop or prepare reasons for decision, aside from incidental uses outlined above in terms of legal research databases and Microsoft tools.[97]
8.58 We heard broad support from court users for a prohibition on AI use for judicial decision-making. Some of these comments are contained in Table 15.
Table 15: Stakeholder feedback on a prohibition on delegation of judicial authority
|
Stakeholder |
Stakeholder views |
|
Victoria Legal Aid |
‘AI cannot replace the role of human decision making which requires careful ethical, legal and forensic judgement.’[98] |
|
Office of the Victorian Information Commissioner |
‘The use of AI tools should be prohibited in the courts and tribunals for decision-making, or for providing material to be used in arriving at a decision.’[99] |
|
Victorian Bar Association |
‘AI tools, platforms and systems should not currently or for the foreseeable future, be used to [m]ake a judicial or administrative decision in any Victorian court or tribunal. Critically, the Bar contends that Judges should not use GenAI in the analysis of evidence or in writing, editing or proofing judgments. This is because the function of a judge is not (at least for the time being) able to be delegated to AI, because of the limitations of AI and the need for public confidence – and corresponding openness – in the judicial process.’[100] |
|
Office of Public Prosecutions |
Holds ‘concerns for the use of AI tools in judicial decision-making, regardless of whether these tools are legally focused or not. Using AI tools in this process, even if they were legally focused, would present a risk to the transparency of the decision-making process’.[101] |
|
Federation of Community Legal Centres and Justice Connect |
‘Courts and tribunals retain their judicial independence and ultimate decision-making authority. AI can assist, but the responsibility for final rulings remains with judges, magistrates and tribunal members, who are accountable for their decisions.’[102] |
|
Deakin Law Clinic |
‘Human experience and discretion is a core judicial value in the decision-making process. When arriving at a conclusion on a matter, judges engage their discretion and problem-solving skills to evaluate the full range of factors involved in the case. This cannot, at this point in time, be meaningfully exercised by any (known) AI algorithms. Further, it is not morally desirable to allow a machine system to make judgements regarding peoples’ freedoms or even their lives.’[103] |
|
Professor Ian Freckelton AO KC |
‘It is crucial not to abrogate or delegate judicial or professional functions to AI … it is unrealistic to think that judges won’t use AI. We need to focus on the ethical and legal requirement for the judicial mind to engage with all relevant aspects of a judgment. If they fail to do so they are not discharging the judicial function. There is a spectrum of output when this function is exercised, but the crucial thing is that the judicial function has been exercised in a discerning and considered way.’[104] |
Why is a prohibition on AI use for judicial decision-making needed?
8.59Stakeholders unanimously supported a prohibition on the use of AI for judicial decision-making. Some stakeholders said that this should not occur currently or for the foreseeable future. But most stakeholders thought that even if the technology develops, ethical concerns will persist. Others said judicial decision-making would be a high-risk use of AI as discussed in Chapter 3.
8.60The view that AI should not undertake judicial decision-making was supported by the Supreme Court,[105] Coroners Court,[106] County Court,[107] Magistrates’ Court[108] and by VCAT.[109] Representatives of the Supreme Court stated:
The Court will not use AI to make a decision. The judging task is a fundamentally human endeavour and AI is not an appropriate substitute for decision making by a judge.[110]
8.61Internationally, judges have also voiced strong views that: ‘AI must never replace a human in making final decisions.’[111] The UN Special Rapporteur on the independence of judges and lawyers has stated that ‘the right to an independent and impartial tribunal requires access to a human judge’.[112]
8.62The prohibition on delegating judicial decision-making to AI has been described in a variety of ways by other jurisdictions (see Table 11 above). The NSW guidelines prescribe that judges ‘should not use Gen AI in the formulation of reasons for judgment or the assessment or analysis of evidence preparatory to the delivery of reasons for judgment.’[113] Guidance issued by the Vatican adopts different wording: ‘The decision on the interpretation of the law, the assessment of facts and evidence is reserved exclusively to the magistrate’.[114]
8.63There are several reasons why AI tools should not be used for judicial decision-making. These include common risks associated with AI use discussed in Chapter 3, such as:
•errors, inconsistencies and hallucinations in GenAI outputs
•automation bias, that is, the human tendency to defer to and rely on algorithmic outputs, which could erode judicial discretion
•deskilling of justice professionals including judicial officers, as the overreliance on AI could lead to a loss of legal research, opinion drafting and even judicial reasoning skills
•replication and exacerbation of bias in AI systems, which can result in discriminatory judicial decisions.[115]
8.64There are also specific risks associated with the use of AI for judicial decision-making. The use of AI may negatively impact the evolution of the common law. As discussed in Chapter 3, GenAI systems produce the most statistically likely output based on training data and user inputs. Hallucinations occur because they produce outputs that are likely rather than outputs that are necessarily correct. GenAI systems identify and replicate patterns in data. If GenAI systems are used for judicial decision-making, this may result in the replication of past decisions without consideration of the facts and individuals involved in the particular case being decided, or of how the law should develop. AI systems can also repeat historical biases. This may prevent legal innovation and the development of new legal precedent that is responsive to changing societal values and understandings.
8.65Additionally, AI:
•can undermine judicial independence and confidence in the administration of justice
•cannot exercise the process of judicial reasoning
•cannot understand or apply morality, human emotions or experience.
Judicial independence
8.66While the wording and scope of the prohibition vary across guidelines, a common objective is to protect judicial independence. As explained in the Guide to Judicial Conduct, judicial independence requires ‘that a judge be, and be seen to be, independent of all sources of power or influence in society.’[116]
8.67Several international guidelines highlight that the use of AI may put judicial independence at risk. The Federal Court of Canada interim guidelines state:
The Court acknowledges the potential for AI to impact adversely on judicial independence. The Court also recognizes the risk that public confidence in the administration of justice might be undermined by some uses of AI. The Court will exercise the utmost vigilance to ensure that any use of AI by the Court does not encroach upon its decision-making function.[117]
8.68It is important that AI does not influence, and is not perceived to influence, the institutional independence and personal impartiality of judicial officers in discharging their judicial functions and exercising decision-making powers.[118] Professor Lyria Bennett Moses has argued that a ‘critical outcome for any use of AI is retention of public confidence in the judiciary and the legal system as a whole’.[119]
8.69Justice Perry of the Federal Court has stated that there are real risks where judges use AI:
In my view AI has no place in the expression of judicial reasoning and were that to occur, it would have a very real capacity to undermine public confidence in the judiciary, no matter how limited the use of AI in the particular judgment may have been.[120]
8.70AI may introduce risks to judicial independence, both institutional and personal.[121] One of the risks to judicial independence is that an overreliance on AI by judicial officers may impact procedural fairness:
Procedural fairness requires judges to maintain discretion and autonomy in their application of the law. Technologies such as AI-driven evidence analysis tools may inadvertently constrain this discretion by presenting conclusions as definitive or overly authoritative.[122]
8.71If judicial officers rely on third-party AI tools for decision-making, this may impact judicial independence.[123] If third-party tools designed by private sector actors are used to make decisions, technology companies could exercise undue influence on the judicial process, particularly where information about how an AI tool works is protected by proprietary interests (as discussed in Chapter 3).[124] Engaging third-party providers to design or deliver AI tools may interfere with judicial independence, since companies are profit-driven and will not necessarily embed values like fairness.[125] Developers of AI tools also have the power to make choices about the technical parameters of AI applications. Those choices could see biases embedded in AI tools and could influence judicial decision-making.[126]
8.72Justice Perry has also noted that there is a ‘risk of potential control, interference or surveillance from foreign states via privately developed AI tools’.[127] There is also concern that AI could be deployed by states in a way that increases political oversight of courts and reduces judicial independence.[128]
8.73Another concern raised is that there may be a lack of social licence for AI to be used in judicial decision-making. There is a risk that people may not recognise AI decisions as legitimate and authoritative decision-making,[129] which could undermine trust in the administration of justice.
Judicial reasoning
8.74Large language models can ‘mimic the outputs of judges’.[130] Putting aside current limitations (that AI tools can be inaccurate, contain hallucinations and biases, and do not understand the impact of their outputs), it is possible they can be used to produce written judgments that look similar to judgments containing legal reasoning.[131]
8.75Chief Justice Gageler recently stated that if a large language model was fed data from Commonwealth law reports it is likely that it:
could produce something that looks like a well-reasoned High Court judgment … and I have little doubt that either now, or within two years, the predictive power of that large language model will be pretty close, if not surpassing that above an individual judge.[132]
8.76However, even if the outputs of AI look similar to what is produced by a judge, it has been argued that judicial decision-making is not just about the output and that the process by which that output is reached is critical.[133] Bennett Moses argues that:
What judges do, even in higher courts, goes beyond producing text containing valid doctrinal arguments. What is most important is that they are exercising judgment. This is different from both prediction (working out the expected outcome of litigation using probability) and simulation (which is what ChatGPT does when asked to produce the text of a judgment). The manner of the decision is as critical as its content.[134]
8.77Professors Tania Sourdin and Richard Cornes emphasise the unconscious reasoning process of a human judge, noting that an AI judge is unlikely to replicate this process.[135] Losing these elements may ‘fundamentally change what justice looks like’.[136]
8.78If judicial officers rely on AI tools for judicial decision-making, they may be less able to provide reasons for decisions if they cannot explain the part played by AI tools, either for proprietary reasons, or because they do not understand the technology. The UN Special Rapporteur on the independence of judges and lawyers has stated:
If AI is used to automate judicial decisions, the “black box” nature of AI tools may render the decision-making process so opaque and incontestable that the right to a fair trial is violated.[137]
8.79This is because the judge will not necessarily be able to go into the black box and determine whether the output was informed by flawed or discriminatory data or how the output was produced.[138]
8.80In Victoria, there are existing duties on judicial officers in relation to exercising decision-making powers. A core duty is for judicial officers to provide reasons for their decisions.[139] Five key purposes for providing reasons are that they:
a)allow the parties to see the extent to which arguments have been understood and accepted
b)allow the parties to understand the basis for the judge’s decision
c)foster judicial accountability
d)facilitate certainty in the law by assisting lawyers, the legislature and the public to see how similar cases may be decided
e)assist appellate courts to determine whether the trial judge’s decision was affected by appealable error.[140]
8.81It has also been argued that the requirement to give reasons does not just benefit parties but also the wider public.[141] The duty to give reasons has been tied to the principle of open justice, as it requires a judge to describe their reasoning process, which allows ‘the public to scrutinise how and why they came to their decision’.[142]
8.82The requirement to give reasons has been interpreted by courts to mean that a judicial officer needs to direct their mind to the issues. The ‘reasons must demonstrate a process of reasoning and explain the basis for the decision’.[143] In DPP v Harika, Justice Gillard held:
The object of the requirement is to ensure that judicial officers turn their minds to the issues and determine the matter in accordance with the law. The obligation to state reasons focuses the mind on the issues. In order to determine whether the judicial officer has done so, one turns to the reasons.[144]
8.83While there are several descriptions of judicial reasoning, they commonly refer to a cognitive exercise involving ‘a human judge thinking about the arguments made and coming to their own conclusion’.[145] This concept of ‘turning their mind to the issues’ is essential and should not be delegated to an AI system.
8.84Another duty requires judicial officers to pay attention to the evidence and submissions put before them. Chief Justice French in R v Cesan held:
The appearance of a court not attending to the evidence and arguments of the parties and control of the conduct of the proceedings is an appearance which would ordinarily suggest to a fair and reasonable observer that the judicial process is not being followed.[146]
8.85These concepts were also echoed by the courts we engaged with. Representatives of the County Court noted:
The role of the judge is the decision maker. The role is to make decisions and provide reasons for how the decision was made. It is to state that these are the facts, these are the submissions, and these are the reasons I have arrived at this decision … This is the essential personal account of a decision maker’s process, what is their evaluation of facts. It fundamentally remains the decision maker’s role to show they were informed by these things in coming to their decision.[147]
8.86Representatives of the Coroners Court made similar comments that while decisions may be informed by preparatory work done using AI, it is still up to the judicial officers ‘to review that and satisfy themselves as to content and then make a decision about circumstances. So, AI is an input but not the final arbiter’.[148]
Human emotion and experience
8.87An essential part of judicial decision-making is for judicial officers to apply human emotions to the case before them. Large language models ‘do not and thus cannot exercise moral judgment’.[149]
8.88There are inherently human factors in judicial decision-making, such as human experience, emotion, morality and creativity. Chief Justice Quinlan of Western Australia recently stated that: ‘The law is, after all, and above all, a human institution.’[150]
8.89A recent study in England and Wales suggested that judges perceived judicial decision-making to be a fundamentally human task.[151] Judges in this study identified that their work requires evaluative judgements, practical reasoning and an awareness of human values that goes beyond ‘pure logic’, and that AI would not have the capability to apply those essential processes. The study identified that in some cases ‘a human judge is vital to providing emotional and psychological closure, and a sense of “dignity” which AI cannot provide’.[152]
8.90Bennett Moses has commented that human emotions can help judges in a range of ways. This includes by enabling them to empathise with parties, interpret facts and reflect on the impact of a decision on the community more broadly.[153]
A prohibition on AI for judicial decision-making
8.91A prohibition on the use of AI for judicial decision-making is critical for maintaining judicial independence. Having this prohibition in publicly available judicial guidelines supports transparency and the overarching obligation to maintain public trust.
8.92To provide reassurance to members of the public the prohibition could be described as: ‘Judicial officers in Victoria must not delegate their decision-making authority to AI. Judicial officers will be able to use AI to support their functions but must continue to turn their minds to the evidence and submissions before them to formulate reasons and to decide the matter in accordance with the law.’
8.93This approach is consistent with several publicly available international judicial guidelines discussed above in Table 11 that prohibit the use of AI for judicial decision-making. This approach is also consistent with the Commission’s principles of accountability and independence, as it is necessary for every judicial decision to be ultimately attributable to a person.
8.94In practice, this prohibition would require judicial officers to verify all outputs of AI tools. Judicial officers would remain clearly accountable and responsible for their decisions regardless of any technology they have used.[154]
|
Recommendation 13.Guidelines for judicial officers should prohibit the use of AI for judicial decision-making. |
How can AI support judicial officers?
8.95We heard that there may be opportunities for AI to support judicial officers in a way that does not impact their core judicial decision-making function.
8.96The Canadian Judicial Council’s guidelines on AI use in courts emphasise that ‘judges are encouraged to leverage available support systems to assist in their judicial responsibilities’.[155]
8.97The guidelines include the following examples, noting that these activities should not be misconstrued as judicial decision-making:
•consulting with a colleague or associate on legal queries
•requesting an administrative assistant proofread and format draft decisions
•using grammar and spell-check features
•using speech recognition tools for example for dictation.[156]
8.98Internationally, AI is being used in various ways to support judicial officers. Some examples include:
•In England and Wales, Lord Justice Birss highlighted the potential use of AI in providing case summaries, which he saw as useful in assisting a judge to get across a case more quickly.[157]
•In the United States, a judge used a large language model to interpret the ordinary meaning of a word.[158]
•In Germany, an AI assistant supports judges to ‘sift through documents faster and use specific search criteria to find relevant information from various documents’.[159]
•In the Netherlands, a judge used ChatGPT to collect information to inform his reasoning on the average price of electricity and life span of solar panels.[160]
8.99A qualitative study collected the views of 12 judges working in the United Kingdom legal system on opportunities for AI to support their everyday tasks and workflows. Opportunities identified in the study included:[161]
•summarising or creating publicly accessible or child-appropriate versions of decisions (this could involve using alternative formats such as podcasts or videos to make content more accessible)
•proofreading decisions
•initial judgment drafting or summarising of background information (for high volume courts)
•consideration of ‘small claims’
•analysis of bulk sentencing data and court administrative data
•legal research and summarisation of cases and documents (there was hope that AI could assist with these, but also a current lack of trust that it would do so reliably).
8.100Additionally, the UN Special Rapporteur on the independence of judges and lawyers reported judges internationally had identified AI may hold opportunities for:
•summarising parties’ positions for inclusion in written opinions
•summarising and searching within large quantities of evidence
•improving spelling, grammar and syntax in written judgments.[162]
8.101As discussed in Chapter 2, there are opportunities for AI to provide improved and new support systems to judicial officers. However, judicial officers need to be aware that some uses of AI have higher risks than others.
8.102Table 16 indicates how international approaches have tried to draw a distinction between acceptable uses of AI and uses that require caution and/or are restricted because of the associated risks. As shown in Table 16, there are inconsistencies in these approaches. Some jurisdictions are comfortable with judicial officers using AI for legal research; others are not. This demonstrates that defining a static list of acceptable AI uses is complicated.
8.103Much of the commentary in this area is related to the use of public rather than closed AI that may have been developed by legal publishers or a court itself (as discussed in Chapter 3).
Table 16: International approaches to judicial use of AI
|
Jurisdiction |
Categories of AI uses relevant to judicial officers in interjurisdictional guidelines |
|---|---|
|
European Union[163] |
Not high risk: •purely ancillary administrative activities such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks. High risk uses: •researching and interpreting facts and the law and in applying the law to a concrete set of facts. |
|
European Commission for the Efficiency of Justice[164] |
The following is a sample and is not an exhaustive list of uses considered. Uses to be encouraged: •caselaw enhancement (finding search options, linking sources, creating data visualisations to illustrate search results). Possible uses, requiring considerable methodological precautions: •help in the drawing up of scales in certain civil disputes. Uses to be considered following additional scientific studies: •judge profiling (to offer judges a more detailed quantitative and qualitative assessment of their activities, with the informative aim of assisting in decision-making, and for their exclusive use). Uses to be considered with the most extreme reservations: •use of algorithms in criminal matters to profile individuals (such as COMPAS, discussed in Chapter 2) •quantity-based norms (providing each judge with the content of the decisions produced by other judges and locking their choice into the mass of precedents). |
|
Hong Kong[165] |
Potential uses: •summarising information •speech/presentation writing •legal translation •administrative tasks (drafting emails, memoranda, letters). Requiring extra caution: •legal research. Not recommended: •legal analysis. |
|
New South Wales (Australia)[166] |
Be aware of limitations of using GenAI for: •secondary legal research. GenAI should not be used for: •editing or proofing draft judgments, and no part of a draft judgment should be submitted to a GenAI program. |
|
New Zealand[167] |
Potential tasks: •summarising information •speech writing •administrative tasks (drafting emails, scheduling meetings). Tasks requiring extra care: •legal research •legal analysis. |
|
United Kingdom[168] |
Potentially useful tasks: •summarising text •writing presentations •administrative tasks •Microsoft Co-Pilot Chat. Tasks not recommended: •legal research •legal analysis. |
|
Utah (United States)[169] |
AI may be used for: •preparing educational materials •legal research •preparing draft documents •testing reading comprehension of public documents (to ensure a document is accessible to a self-represented litigant) •instructions on how to use a new piece of software. |
8.104We heard from representatives of Victoria’s courts and VCAT that it is important to clarify that the prohibition on AI use in judicial decision-making should not extend to all functions that support judges in making their decisions. However, drawing the distinction between what is a supportive as opposed to a replacement function of AI can be difficult. It has been noted: ‘In practice, it is extremely difficult to draw a line between AI tools that only assist judges and those that can interfere in decisions.’[170]
8.105The New York City Bar Association’s Working Group on Judicial Administration and Artificial Intelligence highlighted that ‘the use of AI by a court in connection with decision-making can take many different forms, and the point at which it is being used to render a decision may not always be clear’.[171]
8.106Representatives of the Supreme Court also noted:
The conceptual distinction between supported and substituted decision making can be a useful tool in delineating an area of appropriate use for AI tools. Like many technological changes, there is likely to be room for AI to support judicial decision making in an appropriate and efficient way. The Court would not support either restriction or disclosure of AI use when it has been used to support the process of decision making. The Court’s reasons for decision stand on their own. If they contain error they may be corrected on appeal. The current statement in the SCV guidelines that AI is not presently used for decision making nor used to develop or prepare reasons for decision may need to be revisited as the technology and our understanding of it evolves.[172]
8.107Sourdin has proposed three categories to describe how technology is reshaping the justice system:
a)Supportive technology: at the most basic level, technology which helps to inform, support and advise people involved in the justice system.
b)Replacement technologies: technology which can replace functions and activities that were previously carried out by humans.
c)Disruptive technology: technology which can change the way judges work and provide for very different forms of justice, particularly where processes change significantly such as predictive analytics that may reshape the adjudicative role.[173]
8.108The prohibition discussed above in Recommendation 13 would still enable judges to use supportive AI. For example, AI could be used for:
•searching cases and legislative research (preferably via specialised closed AI such as tools developed by legal publishers or specialised legal large language models)[174]
•correcting grammar, word use and improving readability
•administrative tasks (drafting emails and producing presentation material)
•organising, paginating, assembling documents (provided sources are referenced and checked)
•creating chronologies, summarising, auto-inserting precedent material and even assisting with complex single-issue research.
8.109Caution may be required where judicial officers use AI to reframe material, create document structure, conduct complex research, or translate or interpret material, in part because of the potential for error.[175] Judicial officers should be aware that ‘an AI tool that rewrites a judgment for clarity may alter its legal meaning’.[176]
8.110The prohibition would prevent uses which impact more significantly on the process of human judgment. Such uses may include using AI for evaluation and predictive analytics,[177] or using AI ‘assistants’ to check and comment on drafts.
8.111Chief Justice Gageler has flagged that the High Court is planning to run a pilot in 2025 to test closed AI programs for editing judgments. But he qualified that the AI tool ‘is not to generate the original script; it is to improve’.[178] It has been argued that large language models ‘should not author entire judgments, not only because of quality issues associated with outputs, but also because it will lead to bad outcomes and does not involve an appropriate process’.[179] Drafting judgments in their entirety raises significant risks for interfering with judicial functions. The formulation of words in paragraph [8.92] is intended to prevent this from occurring.
8.112For clarity, guidelines to judicial officers should state that uses of AI that are supportive should not be discouraged. However, the Commission recommends that the guidelines should not list specific uses of AI that judicial officers can or cannot use. If guidelines were to include a specific list of supportive AI uses it is likely this list would quickly become outdated given the pace of AI developments.
8.113Instead of setting a static list of approved uses, in Chapter 9 we discuss an AI assurance framework which can be used to help Victoria’s courts and VCAT make decisions about what AI tools should be developed, procured or made available to judicial officers and court staff. An AI assurance framework can help structure the consideration of risks and support decision making to ensure adequate consideration is given to security, privacy and explainability.
|
Recommendation 14.Guidelines for judicial officers should clarify that the prohibition on AI use for judicial decision-making is not intended to encompass supportive uses of AI. |
Should judicial officers disclose their use of AI?
8.114There has been a mixture of approaches internationally as to whether judicial officers are required to disclose their use of AI.
8.115The New York City Bar Association’s Working Group on Judicial Administration and Artificial Intelligence investigated the potential impact of AI on the New York State judiciary. In their report, they identified benefits of disclosing judicial use of AI:
•ensuring judges are accountable for their use of AI
•helping parties and the public better understand how the judiciary is using AI and to raise concerns if a judge appears to use an AI tool inappropriately
•assisting with appellate review of a lower court’s decision.[180]
8.116In the report, they recommended that ‘serious consideration be given to whether judicial ethics rules should be amended to require judges to disclose their use of AI in decision-making’.[181]
8.117Some international jurisdictions have encouraged disclosure and consultation where AI is used in court by judicial officers. The Federal Court of Canada has committed not to use AI, and specifically automated decision-making tools, in making judgments without first engaging in public consultation.[182]
8.118However, some guidelines are silent as to the need to disclose AI use.[183] In other jurisdictions, judicial officers are expressly told they do not need to disclose AI use. In New Zealand it is clearly stated that: ‘Judges/judicial officers/tribunal members: You do not need to disclose use of a GenAI chatbot.’[184]
8.119In England and Wales, there is no direction for judicial officers to disclose AI use where the guidelines are appropriately followed.[185]
8.120In Victoria, representatives of the Supreme Court suggested judges would not need to disclose use of tools which support decision making.[186] Similar comments were made by other Victorian courts.[187] However, representatives of the Coroners Court said disclosure of the use of AI by judicial officers should be encouraged.[188]
8.121Several court user groups saw value in requiring disclosure of AI use by judicial officers. Some of the comments supporting disclosure are listed in Table 17.
Table 17: Stakeholder views on disclosure of judicial use of AI
| Stakeholder | Stakeholder views |
|---|---|
| Victoria Legal Aid | ‘For courts, all decisions where AI is involved, must be contestable. If courts are using AI, in a process where they are not prepared to show their working out, the data or underlying code, they should not be using AI.’[189] |
| Office of the Victorian Information Commissioner | ‘Courts and tribunals should be aware of, and transparent about, any AI projects and use cases to their stakeholders and the public.’[190] |
| Law Institute of Victoria | ‘Courts and tribunals should also disclose AI use to all court users.’[191] |
| Federation of Community Legal Centres and Justice Connect | ‘Courts and tribunals should ensure processes and decisions supported by AI systems are transparent to court users. It should be clear how AI systems generate their outputs, including the data and models they use, and how AI systems informed decisions, with appropriate human oversight and judgment.’[192] |
8.122There was support amongst court users for disclosure of AI use by judicial officers. However, some stakeholders recognised that the value of an obligation to disclose AI use may depend on the type of tool and how it is used.[193]
8.123Many people told us that the risk of AI use and the need for transparency increases the closer the use of AI is to the exercise of a judicial officer’s decision-making authority.[194]
8.124If AI is used only as a supportive tool that does not affect outcomes, as recommended above, there is less reason for judicial officers to disclose its use in every individual matter. However, to uphold public confidence in the administration of justice, it is desirable for Victoria’s courts and VCAT to make information publicly available about the sorts of AI tools which judicial officers may choose to use in a supportive capacity.
8.125The Commission does not recommend that individual judicial officers disclose their use of supportive AI tools. However, in Chapter 9 the Commission recommends that AI tools developed, procured or made available to judicial officers by Victoria’s courts and VCAT be disclosed at an organisational level via a public AI inventory.
What obligations should apply to judicial support staff?
8.126Judicial officers are assisted by a variety of support staff. This includes associates and research officers. In providing guidance to judicial officers, several jurisdictions have included directions to judicial support staff on the use of AI.
8.127A prescriptive approach has been taken in NSW, where the guidance states:
Judges should require that their associates, tipstaves or researchers disclose to the judge if and when they are using Gen AI for research purposes or any other related purpose, and associates, tipstaves or researchers should be separately required to verify any such output for accuracy, completeness, currency and suitability.[195]
8.128In New Zealand, judicial support staff are required to:
discuss with your supervising judge/judicial officer/tribunal member how you are using GenAI chatbots (or any other GenAI tools) and the steps you are taking to mitigate any risks.[196]
8.129Some Victorian court representatives viewed a disclosure obligation on judicial support staff as unnecessary. Representatives of the Supreme Court stated:
The Court is confident there is a frank and open relationship between Judges and associates in how their work is undertaken. Associates are made aware of their obligations in relation to dealing with material to which they have access at the Court. These are reinforced by judicial officers. There are important legal rules which protect the confidentiality of the work undertaken between judges and associates (judicial privilege) that reflect the fact that it is a matter for individual judicial officers how that work is undertaken.[197]
8.130The primary purpose of judicial officer guidelines on AI use is to serve an educative function. There is value in setting a clear and transparent expectation that judicial support staff should discuss their use of AI with their supervising judicial officer. This would serve to improve communication and awareness of the risks and limitations of AI.
Recommendation 15. Guidelines for judicial officers should encourage judicial support staff to discuss their use of AI tools with their supervising judicial officer.
AI-assisted online dispute resolution
8.131AI-assisted online dispute resolution tools are not currently being used by Victoria’s courts and VCAT. However, in Chapter 2, we identified that international courts and tribunals are using AI-assisted online dispute resolution. We discussed that AI tools can be used at different stages and for different purposes, from supporting people to participate in dispute resolution to producing suggested or final decisions. In Singapore, GenAI is being piloted in the Small Claims Tribunal to support people to participate in the resolution of small claims, for example, by translating documents.[198] In future, this system may be able to point parties to settlement options, but this feature is not yet available.
8.132AI may present opportunities to support Victorians to resolve disputes. However, the use of AI-assisted online dispute resolution tools to produce decisions raises significant complexities. At this stage, the Commission has recommended AI should not be used for judicial decision-making for the reasons discussed above.
8.133However, given the rapid development of AI technology, if existing risks such as inaccuracy, bias and opacity are sufficiently mitigated, Victoria’s courts and VCAT may in future wish to consider incorporating AI-assisted online dispute resolution for some matters. We heard that there could be opportunities for AI-assisted online dispute resolution to be considered in the context of small civil claims, including consumer disputes.[199] We also heard that AI-assisted online dispute resolution may be best suited to high-volume, low-complexity disputes where there is low discretion.[200]
8.134Victoria’s courts and VCAT would need to exercise extreme caution when considering the incorporation of such systems to suggest or make decisions that affect people’s rights or interests. Careful consideration of safeguards would be necessary to protect people’s rights if such systems were developed.
-
Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, 6 May 2024) <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>.
-
County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, 3 July 2024).
-
Consultation 9 (Victorian Civil and Administrative Tribunal).
-
Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, 6 May 2024) para 12 <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, 3 July 2024) para 12.
-
Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 43.
-
Consultation 9 (Victorian Civil and Administrative Tribunal).
-
Submission 15 (Human Rights Law Centre).
-
Submission 16 (Law Institute Victoria).
-
Submission 22 (Centre for the Future of the Legal Profession and UNSW Law and Justice).
-
Submission 26 (Supreme Court of Victoria).
-
Consultation 15 (Magistrates’ Court of Victoria).
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 19 <https://docs.un.org/en/A/80/169>.
-
Ibid 6.
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) 3 <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Nicole Gillespie et al, Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025 (Report, The University of Melbourne and KPMG International, 2025) 28 <https://doi.org/10.26188/28822919>.
-
Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 295.
-
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 5 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.
-
Supreme Court of New South Wales, Guidelines for New South Wales Judges in Respect of Use of Generative AI (Guidelines, 21 November 2024) <https://supremecourt.nsw.gov.au/documents/About-the-Court/policies/Guidelines_Gen_AI.pdf>.
-
NSW Civil and Administrative Tribunal (NCAT), Guidelines for NCAT Members in Respect of Use of Generative Artificial Intelligence (Gen AI) (Guidelines, 7 March 2025) <https://ncat.nsw.gov.au/documents/policies/member-guidelines-generative-ai.pdf>.
-
Supreme Court of New South Wales, Guidelines for New South Wales Judges in Respect of Use of Generative AI (Guidelines, 21 November 2024) <https://supremecourt.nsw.gov.au/documents/About-the-Court/policies/Guidelines_Gen_AI.pdf>.
-
Supreme Court of New South Wales, Supreme Court Practice Note SC Gen 23 Use of Generative Artificial Intelligence (Gen AI) (Practice Note, 28 January 2025) <https://supremecourt.nsw.gov.au/documents/Practice-and-Procedure/Practice-Notes/general/current/PN_SC_Gen_23.pdf>.
-
NSW Civil and Administrative Tribunal (NCAT), Guidelines for NCAT Members in Respect of Use of Generative Artificial Intelligence (Gen AI) (Guidelines, 7 March 2025) <https://ncat.nsw.gov.au/documents/policies/member-guidelines-generative-ai.pdf>.
-
At the time of finalising this report, Queensland Courts published the following guidelines: Queensland Courts, The Use of Generative AI Guidelines for Judicial Officers (Guidelines, 15 September 2025) <https://www.courts.qld.gov.au/__data/assets/pdf_file/0009/879714/the-use-of-generative-ai-guidelines-for-judicial-officers.pdf>.
-
Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023).
-
Ibid 43.
-
Australian Institute of Judicial Administration (AIJA), Guide to Judicial Conduct, Third Edition (Revised) (Guide, December 2023) <https://aija.org.au/wp-content/uploads/2024/04/Judicial-Conduct-guide_revised-Dec-2023-formatting-edits-applied.pdf>.
-
‘Guidelines’, Judicial Commission of Victoria (Web Page, 2024) <https://www.judicialcommission.vic.gov.au/professional-guidelines/#1b7b6041-c7ff-40cf-8e88-be4ef341c912>; Note s 134 of the Judicial Commission of Victoria Act 2016 (Vic) provides that the Judicial Commission may make guidelines about the standards of ethical and professional conduct expected of judicial officers and non-judicial members of VCAT.
-
Australian Institute of Judicial Administration (AIJA), Guide to Judicial Conduct, Third Edition (Revised) (Guide, December 2023) ix <https://aija.org.au/wp-content/uploads/2024/04/Judicial-Conduct-guide_revised-Dec-2023-formatting-edits-applied.pdf>.
-
Ibid 2.
-
Michael Pelly, ‘An Interview with Chief Justice Gageler’, Westlaw Updates & Alerts (Web Page, 8 July 2025) 4 <https://support.thomsonreuters.com.au/product/westlaw-precision-australia/updates-alerts/interview-chief-justice-gageler>.
-
‘Brazilian National Council of Justice Approves New Regulation for the Use of Artificial Intelligence by the Judiciary’, Instituto Dannemann Siemsen (Web Page, 6 March 2025) <https://ids.org.br/en/news-post/brazilian-national-council-of-justice-approves-new-regulation-for-the-use-of-artificial-intelligence-by-the-judiciary/>; National Council of Justice (Brazil) (CNJ), RESOLUÇÃO No 615, DE 11 DE MARÇO DE 2025. Estabelece Diretrizes Para o Desenvolvimento, Utilização e Governança de Soluções Desenvolvidas Com Recursos de Inteligência Artificial No Poder Judiciário [Resolution No. 615, of March 11, 2025. Establishes Guidelines for the Development, Use, and Governance of Solutions Developed with Artificial Intelligence Resources in the Judiciary (English Translation)] (No 615/2025, 11 March 2025) <https://rm.coe.int/resolucao-cnj-615-ia/1680b51b65>.
-
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>; ‘Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence’, Federal Court of Canada (Guidelines, 20 December 2023) <https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence>; Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.
-
‘Justice Meets Innovation: Colombia’s Groundbreaking AI Guidelines for Courts’, UNESCO (Web Page, 1 April 2025) <https://www.unesco.org/en/articles/justice-meets-innovation-colombias-groundbreaking-ai-guidelines-courts>; Superior Council of the Judiciary, Republic of Colombia, Acuerdo PCSJA24-12243 DE 2024: Por El Cual Se Adoptan Lineamientos Para El Uso y Aprovechamiento Respetuoso, Responsable, Seguro y Ético de La Inteligencia Artificial En La Rama Judicial [Agreement PCSJA24-12243 of 2024: By Which Guidelines Are Adopted for the Respectful, Responsible, Safe and Ethical Use and Exploitation of Artificial Intelligence in the Judicial Branch (English Translation)] (16 December 2024) <https://actosadministrativos.ramajudicial.gov.co/web/Acto%20Administrativo/Default.aspx?ID=19280>.
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Hong Kong Judiciary Administration, Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary (Guidelines, July 2024).
-
‘AI Tools Not for Decision Making: Kerala HC Guidelines to District Judiciary on AI Usage’, The Economic Times (online, 20 July 2025) <https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-tools-not-for-decision-making-kerala-hc-guidelines-to-district-judiciary-on-ai-usage/articleshow/122794562.cms?from=mdr>.
-
Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Guidelines, 7 December 2023) <https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Judicial.pdf>.
-
Bae Kim & Lee LLC et al, ‘Announcement of Guidelines on Use of Artificial Intelligence in Judiciary’, Lexology (Web Page, 19 March 2025) <https://www.lexology.com/library/detail.aspx?g=3de965db-8c9c-4938-af8c-5e4bc09c02dc>.
-
Eleonora Rosati, ‘New Vatican AI Guidelines for the Development and Use of AI Models: From AI Training to Vatican’s Authorship and Ownership of AI-Generated Outputs (at Least within the Vatican City State)’, The IPKat (Web Page, 25 January 2025) <https://ipkitten.blogspot.com/2025/01/new-vatican-ai-guidelines-for.html>; The Pontifical Commission for the State of the Vatican City, Decreto Della Pontificia Commissione per Lo Stato Della Città Del Vaticano Recante “Linee Guida in Materia Di Intelligenza Artificiale” [Decree of the Pontifical Commission for the Vatican City State Containing ‘Guidelines on Artificial Intelligence’ (English Translation)] (Decree No N. DCCII, 16 December 2024) <https://www.vaticanstate.va/images/N.%20DCCII.pdf>.
-
Arizona Supreme Court Judicial Branch, Arizona Code of Judicial Administration (Code of Practice, 29 January 2025) ’Section 1-509: Use of Generative Artificial Intelligence Technology and Large Language Models’ <https://www.azcourts.gov/Portals/0/0/admcode/pdfcurrentcode/1-509%20Use%20of%20AI%20Tech%20and%20LLMs%2001_2025.pdf?ver=acMF-P2SER0dArzTQohBjQ%3d%3d>; Board for Judicial Administration and Washington Courts, BJA AI Statement of Principles (Report, 21 January 2025); Delaware Courts, Judicial Branch, Interim Policy on the Use of GenAI by Judicial Officers and Court Personnel (Interim Policy, 22 October 2024) <https://www.courts.delaware.gov/forms/download.aspx?id=266838>; Judicial Council of California, Artificial Intelligence Task Force, Judicial Branch Administration: Rule and Standard for Use of Generative Artificial Intelligence in Court-Related Work (Report to the Judicial Council No 25–109, 16 June 2025) <https://jcc.legistar.com/View.ashx?M=F&ID=14303119&GUID=0C94642A-28D3-47C0-8AE9-1E4DE3A96DFC>; Judicial Council, Utah, Interim Rules on the Use of Generative AI (Interim Rules, 25 October 2023) <https://nationalcenterforstatecourts.app.box.com/s/px0vzpzzg6n42ng10i4lya4al0mwjhqq>; Maryland Judiciary, Guidelines for the Acceptable Use of Artificial Intelligence (AI) Tools and Platforms (Guidelines, 15 April 2024); Nevada Courts, Artificial Intelligence: A Guide for Judicial Officers (Guide, February 2025) <https://nvcourts.gov/__data/assets/pdf_file/0028/46693/AI_Guide_for_Judicial_Officers.pdf>; Supreme Court of Illinois, Supreme Court Policy on Artificial Intelligence: Judicial Reference Sheet (Reference Sheet, 1 January 2025) <https://ilcourtsaudio.blob.core.windows.net/antilles-resources/resources/cb3d6da3-66c7-469d-97f3-41568bdeee8c/ISC%20AI%20Policy%20Bench%20Card.pdf>; The Supreme Court of South Carolina, Interim Policy on the Use of Generative Artificial Intelligence (Policy, 25 March 2025) <https://www.sccourts.org/media/courtOrders/PDFs/2025-03-25-01.pdf>.
-
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 3 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.
-
‘Justice Meets Innovation: Colombia’s Groundbreaking AI Guidelines for Courts’, UNESCO (Web Page, 1 April 2025) <https://www.unesco.org/en/articles/justice-meets-innovation-colombias-groundbreaking-ai-guidelines-courts>; Superior Council of the Judiciary, Republic of Colombia, Acuerdo PCSJA24-12243 DE 2024: Por El Cual Se Adoptan Lineamientos Para El Uso y Aprovechamiento Respetuoso, Responsable, Seguro y Ético de La Inteligencia Artificial En La Rama Judicial [Agreement PCSJA24-12243 of 2024: By Which Guidelines Are Adopted for the Respectful, Responsible, Safe and Ethical Use and Exploitation of Artificial Intelligence in the Judicial Branch (English Translation)] (16 December 2024) <https://actosadministrativos.ramajudicial.gov.co/web/Acto%20Administrativo/Default.aspx?ID=19280>.
-
‘Brazilian National Council of Justice Approves New Regulation for the Use of Artificial Intelligence by the Judiciary’, Instituto Dannemann Siemsen (Web Page, 6 March 2025) <https://ids.org.br/en/news-post/brazilian-national-council-of-justice-approves-new-regulation-for-the-use-of-artificial-intelligence-by-the-judiciary/>.
-
Hong Kong Judiciary Administration, Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary (Guidelines, July 2024) 2.
-
‘AI Tools Not for Decision Making: Kerala HC Guidelines to District Judiciary on AI Usage’, The Economic Times (online, 20 July 2025) <https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-tools-not-for-decision-making-kerala-hc-guidelines-to-district-judiciary-on-ai-usage/articleshow/122794562.cms?from=mdr>.
-
Delaware Courts, Judicial Branch, Interim Policy on the Use of GenAI by Judicial Officers and Court Personnel (Interim Policy, 22 October 2024) 2 <https://www.courts.delaware.gov/forms/download.aspx?id=266838>.
-
Supreme Court of Illinois, Illinois Supreme Court Policy on Artificial Intelligence (Policy, 1 January 2025) 2.
-
The Supreme Court of South Carolina, Interim Policy on the Use of Generative Artificial Intelligence (Policy, 25 March 2025) 2 <https://www.sccourts.org/media/courtOrders/PDFs/2025-03-25-01.pdf>.
-
The Pontifical Commission for the State of the Vatican City, Decreto Della Pontificia Commissione per Lo Stato Della Città Del Vaticano Recante “Linee Guida in Materia Di Intelligenza Artificiale” [Decree of the Pontifical Commission for the Vatican City State Containing ‘Guidelines on Artificial Intelligence’ (English Translation)] (Decree No N. DCCII, 16 December 2024) art 12 <https://www.vaticanstate.va/images/N.%20DCCII.pdf>.
-
‘Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence’, Federal Court of Canada (Guidelines, 20 December 2023) 2 <https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence>.
-
‘Justice Meets Innovation: Colombia’s Groundbreaking AI Guidelines for Courts’, UNESCO (Web Page, 1 April 2025) <https://www.unesco.org/en/articles/justice-meets-innovation-colombias-groundbreaking-ai-guidelines-courts>.
-
Eleonora Rosati, ‘New Vatican AI Guidelines for the Development and Use of AI Models: From AI Training to Vatican’s Authorship and Ownership of AI-Generated Outputs (at Least within the Vatican City State)’, The IPKat (Web Page, 25 January 2025) <https://ipkitten.blogspot.com/2025/01/new-vatican-ai-guidelines-for.html>.
-
Judicial Council of California, Artificial Intelligence Task Force, Judicial Branch Administration: Rule and Standard for Use of Generative Artificial Intelligence in Court-Related Work (Report to the Judicial Council No 25–109, 16 June 2025) 18–19 <https://jcc.legistar.com/View.ashx?M=F&ID=14303119&GUID=0C94642A-28D3-47C0-8AE9-1E4DE3A96DFC>.
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) 5 <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Guidelines, 7 December 2023) 3 <https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Judicial.pdf>.
-
Supreme Court of Illinois, Illinois Supreme Court Policy on Artificial Intelligence (Policy, 1 January 2025) 2.
-
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 8 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.
-
Ibid.
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) 3 <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Hong Kong Judiciary Administration, Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary (Guidelines, July 2024) 3.
-
Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Guidelines, 7 December 2023) 2 <https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Judicial.pdf>.
-
Judicial Council of California, Artificial Intelligence Task Force, Judicial Branch Administration: Rule and Standard for Use of Generative Artificial Intelligence in Court-Related Work (Report to the Judicial Council No 25–109, 16 June 2025) 18 <https://jcc.legistar.com/View.ashx?M=F&ID=14303119&GUID=0C94642A-28D3-47C0-8AE9-1E4DE3A96DFC>.
-
United Nations Educational, Scientific and Cultural Organization (UNESCO), Draft Guidelines for the Use of AI Systems in Courts and Tribunals (Guidelines, May 2025) <https://unesdoc.unesco.org/ark:/48223/pf0000393682>.
-
Ibid 29.
-
Ibid 26–30.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 19 <https://docs.un.org/en/A/80/169>.
-
Consultation 7 (Judicial College of Victoria).
-
Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Guidelines, 7 December 2023) 3 <https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Judicial.pdf>.
-
Supreme Court of Illinois, Illinois Supreme Court Policy on Artificial Intelligence (Policy, 1 January 2025).
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) 3 <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Ibid.
-
Ibid 4.
-
Ibid 5; This view is reflected in the values of judicial officers in relation to AI. See Erin Solovey, Brian Flanagan and Daniel Chen, ‘Interacting with AI at Work: Perceptions and Opportunities from the UK Judiciary’ in Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work (Conference Paper, 22 June 2025) 1, 3 <https://dl.acm.org/doi/10.1145/3729176.3729192>.
-
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 6 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.
-
Ibid.
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI): Guidance for Judicial Office Holders (Guidance, 14 April 2025) 7 <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Ibid.
-
AI Rapid Response Team, Artificial Intelligence: Guidance for Use of AI and Generative AI in Courts (Guidance, National Centre for State Courts, 7 August 2024) 10 <https://www.ncsc.org/sites/default/files/media/document/AI-Courts-NCSC-AI-guidelines-for-courts.pdf>.
-
Nevada Courts, Artificial Intelligence: A Guide for Judicial Officers (Guide, February 2025) 2 <https://nvcourts.gov/__data/assets/pdf_file/0028/46693/AI_Guide_for_Judicial_Officers.pdf>.
-
Supreme Court of Victoria, SC CR 3 – Expert Evidence in Criminal Trials (Practice Note, 1 June 2025) para 7.4 <https://www.supremecourt.vic.gov.au/areas/legal-resources/practice-notes/sc-cr-3-expert-evidence-in-criminal-trials>.
-
See Recommendation 10. Supreme Court (General Civil Procedure) Rules 2025 (Vic) Form 44A.
-
Hong Kong Judiciary Administration, Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary (Guidelines, July 2024) 3.
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) 3 <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 8 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.
-
Hong Kong Judiciary Administration, Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary (Guidelines, July 2024) 4.
-
Ibid.
-
Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) 1 <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.
-
Ibid.
-
‘Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence’, Federal Court of Canada (Guidelines, 20 December 2023) 1 <https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence>.
-
Office of the Commissioner for Federal Judicial Affairs Canada, Action Committee on Modernizing Court Operations, Use of Artificial Intelligence by Courts to Enhance Court Operations (Statement, 20 November 2024) 3 <https://fja-cmf.gc.ca/COVID-19/pdf/Use-of-AI-by-Courts-Utilisation-de-lIA-par-les-tribunaux-eng.pdf>.
-
‘Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence’, Federal Court of Canada (Guidelines, 20 December 2023) 2 <https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence>.
-
Supreme Court of New South Wales, Guidelines for New South Wales Judges in Respect of Use of Generative AI (Guidelines, 21 November 2024) 1 <https://supremecourt.nsw.gov.au/documents/About-the-Court/policies/Guidelines_Gen_AI.pdf>.
-
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 3 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.
-
Delaware Courts, Judicial Branch, Interim Policy on the Use of GenAI by Judicial Officers and Court Personnel (Interim Policy, 22 October 2024) 2 <https://www.courts.delaware.gov/forms/download.aspx?id=266838>.
-
Hong Kong Judiciary Administration, Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary (Guidelines, July 2024) 2.
-
Joseph A. Cannataci, Special Rapporteur, Artificial Intelligence and Privacy, and Children’s Privacy: Report of the Special Rapporteur on the Right to Privacy, Joseph A. Cannataci, UN Doc A/HRC/46/37 (25 January 2021) 3 <https://documents.un.org/doc/undoc/gen/g21/015/65/pdf/g2101565.pdf>.
-
Regulation (EU) 2024/1689 (Artificial Intelligence Act) [2024] OJ L 2024/1689, recital 61.
-
Submission 26 (Supreme Court of Victoria).
-
Submission 12 (Victoria Legal Aid).
-
Submission 5 (Office of the Victorian Information Commissioner).
-
Submission 23 (Victorian Bar Association).
-
Submission 17 (Office of Public Prosecutions).
-
Submission 27 (Federation of Community Legal Centres and Justice Connect).
-
Submission 20 (Deakin Law Clinic).
-
Consultation 19 (Professor Ian Freckelton AO KC).
-
Consultation 32 (Supreme Court of Victoria).
-
Submission 4 (Coroners Court of Victoria).
-
Submission 24 (County Court of Victoria).
-
Consultation 15 (Magistrates’ Court of Victoria).
-
Consultation 9 (Victorian Civil and Administrative Tribunal).
-
Consultation 32 (Supreme Court of Victoria).
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 16 <https://docs.un.org/en/A/80/169>.
-
Ibid.
-
Supreme Court of New South Wales, Guidelines for New South Wales Judges in Respect of Use of Generative AI (Guidelines, 21 November 2024) 1 <https://supremecourt.nsw.gov.au/documents/About-the-Court/policies/Guidelines_Gen_AI.pdf>.
-
The Pontifical Commission for the State of the Vatican City, Decreto Della Pontificia Commissione per Lo Stato Della Città Del Vaticano Recante “Linee Guida in Materia Di Intelligenza Artificiale” [Decree of the Pontifical Commission for the Vatican City State Containing ‘Guidelines on Artificial Intelligence’ (English Translation)] (Decree No N. DCCII, 16 December 2024) art 12 <https://www.vaticanstate.va/images/N.%20DCCII.pdf>.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 16–18 <https://docs.un.org/en/A/80/169>; See also a discussion on the risks of bias in Monika Zalnieriute and Felicity Bell, ‘Technology and the Judicial Role’ in Gabrielle Appleby and Andrew Lynch (eds), The Judge, the Judiciary and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia (Cambridge University Press, 2021) 116, 133–136.
-
Australasian Institute of Judicial Administration (AIJA), Guide to Judicial Conduct, Third Edition (Revised) (Guide, December 2023) 7 <https://aija.org.au/wp-content/uploads/2024/04/Judicial-Conduct-guide_revised-Dec-2023-formatting-edits-applied.pdf>.
-
‘Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence’, Federal Court of Canada (Guidelines, 20 December 2023) 2 <https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence>.
-
Monika Zalnieriute and Felicity Bell, ‘Technology and the Judicial Role’ in Gabrielle Appleby and Andrew Lynch (eds), The Judge, the Judiciary and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia (Cambridge University Press, 2021) 116, 130–132.
-
Lyria Bennett Moses, ‘Stochastic Judges: The Limits of Large Language Models’ (2024) 98(9) Australian Law Journal 640, 644.
-
Justice Perry, ‘Emerging Technologies and International Frameworks’ (Speech, Australian Law Librarians’ Association Conference, Adelaide, 9 August 2024) 8 <https://www.fedcourt.gov.au/digital-law-library/judges-speeches/justice-perry/perry-j-20240809>.
-
Daniel Escott, FIJIT: Integrating Judicial Independence and Technology (Manuscript, Osgoode Hall Law School, York University, 2025) 8.
-
Ibid 9.
-
Justice Perry, ‘Emerging Technologies and International Frameworks’ (Speech, Australian Law Librarians’ Association Conference, Adelaide, 9 August 2024) 8 <https://www.fedcourt.gov.au/digital-law-library/judges-speeches/justice-perry/perry-j-20240809>; Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 17–18 <https://docs.un.org/en/A/80/169>.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 18 <https://docs.un.org/en/A/80/169>.
-
Kalliopi Terzidou, ‘The Use of Artificial Intelligence in the Judiciary and Its Compliance with the Right to a Fair Trial’ (2022) 31(3) Journal of Judicial Administration 154, 160–161 <https://search.informit.org/doi/10.3316/agispt.20220401064756>.
-
Ibid 165.
-
Justice Perry, ‘Emerging Technologies and International Frameworks’ (Speech, Australian Law Librarians’ Association Conference, Adelaide, 9 August 2024) 8 <https://www.fedcourt.gov.au/digital-law-library/judges-speeches/justice-perry/perry-j-20240809>.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 18 <https://docs.un.org/en/A/80/169>.
-
Lyria Bennett Moses, ‘Artificial Intelligence: Affordances and Limits in the Context of Judging’ (2024) 157(1) Journal & Proceedings of the Royal Society of New South Wales 123, 128.
-
Lyria Bennett Moses, ‘Stochastic Judges: The Limits of Large Language Models’ (2024) 98(9) Australian Law Journal 640, 649.
-
Ibid.
-
Michael Pelly, ‘An Interview with Chief Justice Gageler’, Westlaw Updates & Alerts (Web Page, 8 July 2025) 2 <https://support.thomsonreuters.com.au/product/westlaw-precision-australia/updates-alerts/interview-chief-justice-gageler>.
-
Lyria Bennett Moses, ‘Stochastic Judges: The Limits of Large Language Models’ (2024) 98(9) Australian Law Journal 640, 646–47.
-
Lyria Bennett Moses, ‘Artificial Intelligence: Affordances and Limits in the Context of Judging’ (2024) 157(1) Journal & Proceedings of the Royal Society of New South Wales 123, 127.
-
Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 215; Tania Sourdin and Richard Cornes, ‘Do Judges Need to Be Human? The Implications of Technology for Responsive Judging’ in Tania Sourdin and Archie Zariski (eds), The Responsive Judge: International Perspectives on Law and Justice (Springer, 2018) 87, 104–5; See also Monika Zalnieriute and Felicity Bell, ‘Technology and the Judicial Role’ in Gabrielle Appleby and Andrew Lynch (eds), The Judge, the Judiciary and the Court: Individual, Collegial and Institutional Judicial Dynamics in Australia (Cambridge University Press, 2021) 116, 121.
-
Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 56.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 5 <https://docs.un.org/en/A/80/169>.
-
Ibid 17.
-
Judicial College Victoria, Criminal Proceedings Manual (Online Manual) ‘6.2 Reasons for decision’ [1] n 1599 <https://resources.judicialcollege.vic.edu.au/article/1053061/section/843511> (7 May 2025) citing Soulemezis v Dudley (1987) 10 NSWLR 247; R v Arnold [1999] 1 VR 179. Reasons for a decision may also be required under specific statutory provisions, for example, Criminal Procedure Act 2009 (Vic) s 351.
-
Judicial College Victoria, Criminal Proceedings Manual (Online Manual) ‘6.2 Reasons for decision’ [3] <https://resources.judicialcollege.vic.edu.au/article/1053061/section/843511> (7 May 2025).
-
Brian Barry, ‘AI for Assisting Judicial Decision-Making: Implications for the Future of Open Justice’ (2024) 98(9) Australian Law Journal 656, 659–61.
-
Ibid 659.
-
Judicial College Victoria, Criminal Proceedings Manual (Online Manual) ‘6.2 Reasons for decision’ [7] <https://resources.judicialcollege.vic.edu.au/article/1053061/section/843511> (7 May 2025) citing DPP v Harika [2001] VSC 237.
-
DPP v Harika [2001] VSC 237, [30].
-
Brian Barry, ‘AI for Assisting Judicial Decision-Making: Implications for the Future of Open Justice’ (2024) 98(9) Australian Law Journal 656, 661.
-
Cesan v The Queen [2008] HCA 52; (2008) 236 CLR 358, [72].
-
Consultation 12 (County Court of Victoria).
-
Consultation 2 (Coroners Court of Victoria).
-
Lyria Bennett Moses, ‘Stochastic Judges: The Limits of Large Language Models’ (2024) 98(9) Australian Law Journal 640, 653; See also Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 249–50.
-
Justice Peter Quinlan, ‘The Impact of Social Media and AI on Public Trust in the Judiciary’ (Speech, Global Summit of Hellenic Lawyers, Athens, Hellas, 9 July 2025) 7 <https://www.supremecourt.wa.gov.au/_files/Speeches/2025/The%20Impact%20of%20Social%20Media%20and%20AI%20on%20Public%20Trust%20in%20the%20Judiciary.pdf>.
-
Erin Solovey, Brian Flanagan and Daniel Chen, ‘Interacting with AI at Work: Perceptions and Opportunities from the UK Judiciary’ in Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work (Conference Paper, 22 June 2025) 1, 3–4 <https://dl.acm.org/doi/10.1145/3729176.3729192>.
-
Ibid 3.
-
Lyria Bennett Moses, ‘Stochastic Judges: The Limits of Large Language Models’ (2024) 98(9) Australian Law Journal 640, 645.
-
As recommended in the Supreme Court of Illinois, Illinois Supreme Court Policy on Artificial Intelligence (Policy, 1 January 2025) 2.
-
Canadian Judicial Council, Guidelines for the Use of Artificial Intelligence in Canadian Courts (Guidelines, September 2024) 3 <https://cjc-ccm.ca/sites/default/files/documents/2024/AI%20Guidelines%20-%20FINAL%20-%202024-09%20-%20EN.pdf>.
-
Ibid.
-
Lord Justice Birss, ‘Speech by the Deputy Head of Civil Justice: Future Visions of Justice’ (Speech, King’s College London Law School, 18 March 2024) <https://www.judiciary.uk/speech-by-the-deputy-head-of-civil-justice-future-visions-of-justice/>.
-
Snell v United Specialty Insurance Company, 102 F.4th 1208 (2024), 1234.
-
Eckhard Schindler, ‘Judicial Systems Are Turning to AI to Help Manage Vast Quantities of Data and Expedite Case Resolution’, IBM (Web Page, 4 February 2025) <https://www.ibm.com/case-studies/blog/judicial-systems-are-turning-to-ai-to-help-manage-its-vast-quantities-of-data-and-expedite-case-resolution>.
-
‘Dutch Judge Uses ChatGPT to Help Reach a Verdict’, Dutch News (online, 5 August 2024) <https://www.dutchnews.nl/2024/08/dutch-judge-uses-chatgpt-to-help-reach-a-verdict/>.
-
Erin Solovey, Brian Flanagan and Daniel Chen, ‘Interacting with AI at Work: Perceptions and Opportunities from the UK Judiciary’ in Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work (Conference Paper, 22 June 2025) 1, 4 <https://dl.acm.org/doi/10.1145/3729176.3729192>.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 15 <https://docs.un.org/en/A/80/169>.
-
Regulation (EU) 2024/1689 (Artificial Intelligence Act) [2024] OJ L 2024/1689, recital 61.
-
European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment (2019, adopted at the 31st plenary meeting of the CEPEJ, Strasbourg, 3–4 December 2018) 64–7.
-
Hong Kong Judiciary Administration, Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary (Guidelines, July 2024) 6.
-
Supreme Court of New South Wales, Guidelines for New South Wales Judges in Respect of Use of Generative AI (Guidelines, 21 November 2024) 1 <https://supremecourt.nsw.gov.au/documents/About-the-Court/policies/Guidelines_Gen_AI.pdf>.
-
Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Guidelines, 7 December 2023) 5–6 <https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Judicial.pdf>.
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) 6 <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Judicial Council, Utah, Interim Rules on the Use of Generative AI (Interim Rules, 25 October 2023) 2 <https://nationalcenterforstatecourts.app.box.com/s/px0vzpzzg6n42ng10i4lya4al0mwjhqq>.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 17 <https://docs.un.org/en/A/80/169>; See also Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 135–137.
-
Working Group on Judicial Administration and Artificial Intelligence (JAAI), New York City Bar, Artificial Intelligence and the New York State Judiciary: A Preliminary Path (Report, 3 June 2024) 7 <https://nationalcenterforstatecourts.app.box.com/s/dwtpkyv4vadgvwryz872382xg1gj9svh>.
-
Consultation 32 (Supreme Court of Victoria).
-
Tania Sourdin, ‘Judge v Robot? Artificial Intelligence and Judicial Decision-Making’ (2018) 41(4) University of New South Wales Law Journal 1114, 1117 <https://www.unswlawjournal.unsw.edu.au/article/judge-v-robot-artificial-intelligence-and-judicial-decision-making/>.
-
Ibid.
-
However, even legal research tools continue to contain hallucinations and inaccuracies. See, for example, Varun Magesh et al, ‘Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools’ (2025) 22(2) Journal of Empirical Legal Studies 216 <https://onlinelibrary.wiley.com/doi/10.1111/jels.12413>.
-
Also, Aniket Deroy, Kripabandhu Ghosh and Saptarshi Ghosh, ‘How Ready Are Pre-Trained Abstractive Models and LLMs for Legal Case Judgement Summarization?’ (2023) arXiv:2306.01248v2 [cs.CL] <https://doi.org/10.48550/arXiv.2306.01248>; Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 16–17 <https://docs.un.org/en/A/80/169>.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) 17 <https://docs.un.org/en/A/80/169>.
-
There has been significant commentary on the risks of predictive analytics tools, particularly when used by judicial officers to inform decision making in the criminal justice system. See for example, Jeff Larson et al, ‘How We Analyzed the COMPAS Recidivism Algorithm’, ProPublica (online, 23 May 2016) <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>.
-
Michael Pelly, ‘An Interview with Chief Justice Gageler’, Westlaw Updates & Alerts (Web Page, 8 July 2025) 3 <https://support.thomsonreuters.com.au/product/westlaw-precision-australia/updates-alerts/interview-chief-justice-gageler>.
-
Lyria Bennett Moses, ‘Stochastic Judges: The Limits of Large Language Models’ (2024) 98(9) Australian Law Journal 640, 654.
-
Working Group on Judicial Administration and Artificial Intelligence (JAAI), New York City Bar, Artificial Intelligence and the New York State Judiciary: A Preliminary Path (Report, 3 June 2024) 7 <https://nationalcenterforstatecourts.app.box.com/s/dwtpkyv4vadgvwryz872382xg1gj9svh>.
-
Ibid.
-
‘Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence’, Federal Court of Canada (Guidelines, 20 December 2023) 2 <https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence>.
-
Hong Kong Judiciary Administration, Guidelines on the Use of Generative Artificial Intelligence for Judges and Judicial Officers and Support Staff of the Hong Kong Judiciary (Guidelines, July 2024).
-
Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Guidelines, 7 December 2023) 3 <https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Judicial.pdf>.
-
Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI) Guidance for Judicial Office Holders (Guidance, 14 April 2025) 6 <https://www.judiciary.uk/wp-content/uploads/2025/04/Refreshed-AI-Guidance-published-version.pdf>.
-
Consultation 32 (Supreme Court of Victoria).
-
Consultation 12 (County Court of Victoria).
-
Consultation 2 (Coroners Court of Victoria).
-
Consultation 35 (Victoria Legal Aid).
-
Submission 5 (Office of the Victorian Information Commissioner).
-
Submission 16 (Law Institute Victoria).
-
Submission 27 (Federation of Community Legal Centres and Justice Connect).
-
Submission 15 (Human Rights Law Centre); Consultation 7 (Judicial College of Victoria).
-
Consultation 7 (Judicial College of Victoria).
-
Supreme Court of New South Wales, Guidelines for New South Wales Judges in Respect of Use of Generative AI (Guidelines, 21 November 2024) 2 <https://supremecourt.nsw.gov.au/documents/About-the-Court/policies/Guidelines_Gen_AI.pdf>.
-
Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Guidelines, 7 December 2023) 3 <https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Judicial.pdf>.
-
Consultation 32 (Supreme Court of Victoria).
-
European Commission for the Efficiency of Justice (CEPEJ), Artificial Intelligence Advisory Board (AIAB), 1st AIAB Report on the Use of Artificial Intelligence (AI) in the Judiciary Based on the Information Contained in the Resource Centre on Cyberjustice and AI (CEPEJ-AIAB (2024) 4 Rev 5, 28 February 2025) 8.
-
Submissions 14 (Centre for Artificial Intelligence and Digital Ethics, The University of Melbourne), 22 (Centre for the Future of the Legal Profession and UNSW Law and Justice). See also discussion in Vivi Tan, Jeannie Paterson and Julian Webb, ‘Generative AI in Small Value Consumer Disputes: Reviving Not Resolving Challenges of Design and Governance in Online Dispute Resolution’ (2025) 48(4) University of New South Wales Law Journal (forthcoming) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5313052>.
-
Consultation 6 (Office of Public Prosecutions).