Glossary
| Term | Definition |
|---|---|
| Access to justice | Access to justice is about ensuring ‘people have an understanding of their rights under the law and an ability to pursue their case and receive the support they need when engaging with the law and the justice system.’[1] |
| Algorithm | A set of instructions that guide a computer in performing specific tasks or solving problems.[2] |
| Artificial intelligence (AI) | A ‘machine-based system that, for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment’.[3] |
| AI agent | An AI agent ‘is an intelligent software system designed to perceive its environment, reason about it, make decisions, and take actions to achieve specific objectives autonomously’.[4] |
| AI evidence | ‘Evidence is material presented to a court to prove or disprove a fact. It includes what witnesses say as well as documents and other objects.’[5] For this report, ‘AI evidence’ is evidence that is generated, processed or analysed by AI. This can include text, video or audio evidence. |
| AI lifecycle | ‘The AI system lifecycle is a structured process that occurs in stages, ensuring the holistic coverage of the AI system from discovery to retirement. The AI lifecycle stages include: Discover: Design, data, train and evaluate. Operate: Integrate, deploy and monitor. Retire: Decommission.’[6] |
| AI model | ‘The output of an algorithm that has been applied to a dataset. In simple terms, an AI model is used to make predictions or decisions and an algorithm is the logic by which that AI model operates.’[7] |
| AI system | An AI ‘system is a broader concept than a model … An AI system comprises various components, including, in addition to the model or models, elements such as interfaces, sensors, conventional software, etc.’[8] |
| AI tool | For this report, an ‘AI tool’ refers to any software program, product or service that uses AI as its central technology, and which a user can directly interact with or have access to. |
| AI use case | For this report, an ‘AI use case’ refers to the use of an AI tool or system that is designed, developed, deployed or procured to support the official work of Victoria’s courts or VCAT. It may be standalone or part of a wider solution. |
| Automated decision-making | The ‘application of automated systems in any part of the decision-making process … Automated systems range from traditional non-technological rules-based systems to specialised technological systems which use automated tools to predict and deliberate … Although ADM [automated decision-making] may in some instances use AI technologies, in other cases it will not’.[9] |
| Black box | ‘Where the data inputted is known, and the decisions made from that data are known, but the way in which the data was used to make the decisions is not understood by humans.’[10] |
| Chatbot | A ‘computer program that interacts with humans through natural language conversations. Some chatbots use LLMs [large language models] to generate content according to user inputs.’[11] |
| Closed AI | For this report, ‘closed AI’ is defined in contrast to public AI. Closed AI tools are generally not openly accessible to the public, and information used in closed AI tools remains within a controlled environment. When an AI tool is ‘closed’, there are controls that reduce privacy risks, or confidentiality settings that protect information from being made publicly available or used to train the AI tool. |
| Computer vision | The capability to ‘acquire, process and interpret data representing images or video’.[12] |
| Deepfake | A ‘digital photo, video or sound file of a real person that has been edited [or generated using AI] to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say’.[13] It can also involve false depictions of an object or environment. |
| Digital divide | ‘The social and economic gap between those who have access to, and ability to use, computer technology and those who do not.’[14] |
| Embedded AI | For this report, we use the phrase ‘embedded AI’ to refer to AI applications that are integrated into existing software or technology products. |
| Expert system | ‘An AI system that encapsulates knowledge provided by a human expert in a specific domain to infer solutions to problems … An expert system consists of a knowledge base, an inference engine and a user interface. The knowledge base stores knowledge of a specific domain.’[15] |
| Explainable AI (also known as XAI) | A ‘set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms’.[16] |
| Fine-tuning | In the context of AI, fine-tuning refers to ‘refining a pre-trained model to enhance its accuracy and efficiency, particularly for a specific task or dataset’.[17] |
| General Purpose AI | An AI system ‘that addresses a broad range of tasks and uses, both intended and unintended by developers’.[18] |
| Generative AI (GenAI) | AI ‘systems that create content as text, images, music, audio and videos based on a user’s prompts’.[19] |
| Hallucination | ‘AI models making up facts to fit a prompt’s intent. When a LLM [large language model] processes a prompt, it searches for statistically appropriate words, not necessarily the most accurate answer. An AI system does not “understand” anything, it only recognises the most statistically likely answer. That means an answer might sound convincing but have no basis in fact.’[20] |
| Large language model | Data transformation systems that are ‘trained with large numbers of parameters, which are numerical values that developers adjust to shape the inputs and outputs of an AI model. When a user inputs a prompt, the model generates text content in response.’[21] They can ‘generate text based on the patterns and relationships (probabilities) it has learned from massive datasets … by predicting the next term or word, in a sentence, given the words that came before it’.[22] |
| Machine learning | ‘A set of techniques for creating algorithms so that computational systems can learn from data.’[23] |
| Natural language processing | A ‘type of AI used to analyse, understand and generate human language. For legal documents, NLP [natural language processing] analyses legal documents, contracts, and other legal texts to identify key provisions, clauses and risks’.[24] |
| Neural network | A ‘machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions’.[25] |
| Online dispute resolution | ‘Involves the use of information and communications technology to help parties resolve disputes. Within a court and tribunal system, ODR [online dispute resolution] is a digital platform that allows people to progress through dispute resolution for low-value disputes, from the commencement of a claim to final determination, entirely online.’[26] |
| Open domain (vs closed domain) | ‘A closed domain system, also known as domain-specific, focuses on a particular set of topics and has limited responses based on the business problem … On the other hand, an open domain system is expected to understand any topic and return relevant responses.’[27] |
| Open source (vs closed source) | Open source models have ‘publicly accessible source code and underlying architecture, allowing developers, deployers, researchers and enterprises to use, modify and distribute them freely or subject to limited restrictions’. Closed source models have ‘proprietary underlying source code and architecture. They are accessible only under specific terms defined by their developers’.[28] |
| Prompt | ‘A prompt is an instruction, query, or command that a user enters into a GenAI interface to request a response from the system.’[29] |
| Public AI | For this report, the term ‘public AI’ refers to AI tools that are openly accessible to the public, typically via the internet. |
| Retrieval Augmented Generation | Enhances large language models by ‘retrieving relevant document chunks from external knowledge base through semantic similarity calculation. By referencing external knowledge, RAG [retrieval augmented generation] effectively reduces the problem of generating factually incorrect content.’[30] |
| Scheming | Scheming is where an AI tool or agent ‘covertly pursues misaligned goals, hiding its true capabilities and objectives’.[31] |
| Self-represented litigant | ‘Anyone who is attempting to resolve any component of a legal problem for which they do not have legal counsel, whether or not the matter actually goes before a court or tribunal.’[32] |
| Specialised AI | For this report, we use ‘specialised AI’ to refer to AI tools developed for a specific, fixed and identifiable purpose, such as legal research or translation. It is defined in contrast to General Purpose AI tools, which users can adapt to serve a range of purposes. |
| Technology assisted review | A ‘process for prioritizing or coding a collection of documents using a computerized system that harnesses human judgements of one or more subject matter expert(s) on a smaller set of documents and then extrapolates those judgements to the remaining document collection’.[33] |
1. County Court of Victoria, Access to Justice and Dispute Resolution (Fact Sheet, 2018) 1.
2. ‘Voluntary AI Safety Standard: Terms and Definitions’, Department of Industry, Science and Resources (Web Page, 5 September 2024) <https://www.industry.gov.au/publications/voluntary-ai-safety-standard/terms-and-definitions>.
3. Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, 3 May 2024, 7 <https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449>.
4. OWASP Gen AI Security Project, Agentic AI – Threats and Mitigations: OWASP Top 10 for LLM Apps & Gen AI Agentic Security Initiative (White Paper, Version 1.0, February 2025) 4 <https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/>.
5. ‘Glossary of Legal Terms’, Supreme Court of Victoria (Web Page, March 2020) <http://www.supremecourt.vic.gov.au/about-the-court/how-the-court-works/glossary>.
6. Digital Transformation Agency (Cth), Australian Government’s AI Technical Standard (Version 1, July 2025) 13 <https://www.digital.gov.au/policy/ai/AI-technical-standard>.
7. IBM, What Is an AI Model? (Web Page, 13 September 2023) <https://www.ibm.com/think/topics/ai-model>.
8. David Fernández‑Llorca et al, ‘An Interdisciplinary Account of the Terminological Choices by EU Policymakers Ahead of the Final Agreement on the AI Act: AI System, General Purpose AI System, Foundation Model, and Generative AI’ [2024] Artificial Intelligence and Law, 7–8 <https://doi.org/10.1007/s10506-024-09412-y>.
9. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Discussion Paper (Discussion Paper, June 2023) 5–6. The definition of ‘Automated Decision Making (ADM)’ is based on the definition in Commonwealth Ombudsman, Automated Decision-Making: Better Practice Guide (Report, 4 March 2020).
10. Toby Walsh et al, Closer to the Machine: Technical, Social, and Legal Aspects of AI (Report, Office of the Victorian Information Commissioner (OVIC), August 2019) 3.
11. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 6 <https://doi.org/10.60836/PSMC-RV23>.
12. Standards Australia Limited, ‘AS ISO/IEC 22989:2023 Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology’ 16 [3.7.1] <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-22989-202>.
13. ‘Deepfake Trends and Challenges – Position Statement’, eSafety Commissioner (Web Page, 1 September 2024) <https://www.esafety.gov.au/industry/tech-trends-and-challenges/deepfakes>.
14. Macquarie Dictionary Online (Web Page, online at 25 September 2025) <https://www.macquariedictionary.com.au/> ‘digital divide’.
15. Standards Australia Limited, ‘AS ISO/IEC 22989:2023 Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology’ 46 [8.5.2] <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-22989-202>.
16. ‘What Is Explainable AI (XAI)?’, IBM Think (Web Page, 29 March 2023) <https://www.ibm.com/think/topics/explainable-ai>.
17. Neural Ninja, ‘The Art of Fine-Tuning AI Models: A Beginner’s Guide’, Let’s Data Science (Web Page, 29 January 2024) <https://letsdatascience.com/the-art-of-fine-tuning-ai-models/>.
18. ‘Voluntary AI Safety Standard: Terms and Definitions’, Department of Industry, Science and Resources (Web Page, 5 September 2024) <https://www.industry.gov.au/publications/voluntary-ai-safety-standard/terms-and-definitions>.
19. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 2 <https://doi.org/10.60836/PSMC-RV23>.
20. Ibid 28.
21. Ibid 5.
22. LexisNexis Australia, Practical Guidance AU – Cybersecurity, Data Protection & Privacy (at 15 July 2025) ‘AI terms and phrases for legal professionals’; see also IBM, What Are Large Language Models (LLMs)? (Web Page, 2 November 2023) <https://www.ibm.com/think/topics/large-language-models>.
23. See also Standards Australia Limited, ‘AS ISO/IEC 23053 – Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)’ [6.3] <https://www.iso.org/standard/74438.html>; Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 4 <https://doi.org/10.60836/PSMC-RV23>.
24. LexisNexis Australia, Practical Guidance AU – Cybersecurity, Data Protection & Privacy (at 15 July 2025) ‘AI terms and phrases for legal professionals’; see also IBM, What Is NLP (Natural Language Processing)? (Web Page, 11 August 2024) <https://www.ibm.com/think/topics/natural-language-processing>.
25. Fangfang Lee, ‘What Is a Neural Network?’, IBM Think (Web Page, 6 October 2021) <https://www.ibm.com/topics/neural-networks>.
26. Peter Cashman and Eliza Ginnivan, ‘Digital Justice: Online Resolution of Minor Civil Disputes and the Use of Digital Technology in Complex Litigation and Class Actions’ (2019) 19 Macquarie Law Journal 39, 41 <https://www.mq.edu.au/__data/assets/pdf_file/0012/866289/Digital-Justice.pdf>.
27. Team Symbl, ‘Conversation Understanding: Open Domain vs. Closed Domain’, Symbl.ai Blog (Web Page, 10 December 2020) <https://symbl.ai/developers/blog/conversation-understanding-open-domain-vs-closed-domain/>.
28. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 9 <https://doi.org/10.60836/PSMC-RV23>.
29. Ibid 2.
30. Yunfan Gao et al, ‘Retrieval-Augmented Generation for Large Language Models: A Survey’ (2024) arXiv:2312.10997v5 [cs.CL] 1 <https://arxiv.org/abs/2312.10997>.
31. Alexander Meinke et al, ‘Frontier Models Are Capable of In-Context Scheming’ (2025) arXiv:2412.04984v2 [cs.AI] 1 <http://arxiv.org/abs/2412.04984>.
32. Elizabeth Richardson, Tania Sourdin and Nerida Wallace, Self-Represented Litigants: Gathering Useful Information, Final Report – June 2012 (Report, Australian Centre for Justice Innovation, Monash University, October 2012) 4 [1.15] <https://research.monash.edu/en/publications/self-represented-litigants-gathering-useful-information-final-rep>.
33. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 19 n 32, citing Maura R Grossman and Gordon V Cormack, ‘The Grossman-Cormack Glossary of Technology-Assisted Review’ (2013) 7(1) Federal Courts Law Review 1, 32.