4. Australian and global AI regulation
Overview
•The regulatory environment for Victoria’s courts and VCAT is shaped by national and international developments which are continuing to evolve.
•This chapter provides an update on AI regulatory frameworks in Australia and overseas, building on developments outlined in our consultation paper.
•Recent events indicate that international efforts to consolidate AI regulatory standards are facing increasing difficulty. The international environment is fragmented, with some jurisdictions focused on risk-based regulation and others encouraging ‘innovation’.
•Australia does not have AI-specific legislation. However, the Australian Government is considering regulatory options.
•The Australian Government’s progress on the regulation of AI is important for Victoria’s courts and VCAT. A national regulatory approach will address risks consistently across the country and promote safe and reliable use of AI.
International cooperation on AI regulation
4.1Globally, AI regulation is evolving and will influence the development of approaches in Australia. Australian laws and regulatory responses can be shaped by international declarations or recommendations. This may affect the quality, range and use of AI tools in courts and tribunals.
4.2The Castan Centre discussed the significance of international law in the context of the human rights treaties which Australia has signed.[1] The Human Rights Law Centre drew attention to United Nations (UN) documents on AI policies, along with references to Victoria’s human rights obligations.[2] It noted the relevance of the UN’s Roadmap for Digital Cooperation and UNESCO’s draft Guidelines for the Use of AI Systems in Courts and Tribunals.[3]
International dialogue and agreements
4.3International agreements and initiatives can influence Australia’s approaches to regulating AI. Australia is a signatory to the Bletchley Declaration and the Seoul Declaration for Safe, Innovative and Inclusive AI.[4]
4.4Australia is a member of the Hiroshima AI Process Friends Group, which is led by the G7 and promotes the safe, secure, and trustworthy use of AI.[5] Through the Global Partnership on AI, Australia signed the 2024 Global Partnership on AI New Delhi Declaration,[6] and attended the Global Partnership on AI Summit in Belgrade in December 2024.[7]
4.5The first intergovernmental standard on AI was the OECD’s Recommendation of the Council on Artificial Intelligence in 2019.[8] It called on governments to build international dialogue to advance knowledge about AI and to promote consensus-driven global technical standards. The OECD’s Recommendation of the Council for Agile Regulatory Governance to Harness Innovation in 2021 also calls for the ‘stepping up’ of bilateral, regional and multilateral regulatory cooperation to address the ‘transboundary reach of innovation’.[9]
4.6In March 2024, the UN General Assembly adopted a resolution to promote safe, secure and trustworthy AI.[10] This was followed by global efforts throughout 2024 to build cooperation in AI regulation.
4.7The Council of Europe formally adopted the Framework Convention on Artificial Intelligence in May 2024.[11] Canada, Japan and other non-member states have signed.[12] As of September 2025, Australia has not signed this convention.[13]
4.8In September 2024, UN member states adopted a Pact for the Future that includes a Global Digital Compact. These seek to shape digital technologies to support the UN’s Sustainable Development Goals.[14] The Global Digital Compact calls for enhanced international governance ‘to promote coordination and compatibility of emerging artificial intelligence governance frameworks’ as a key objective.[15]
4.9An AI Action Summit was co-hosted by France and India in February 2025, where over 90 countries including Australia signed the Paris AI Action Statement.[16] The statement affirms that AI systems should be ethical and trustworthy, and promotes AI accessibility to reduce the digital divide between advanced and developing nations.[17]
4.10Some international initiatives can help to guide approaches to AI use in courts. For instance, the UNESCO Recommendation on AI Ethics describes expectations for judicial uses:
Member States should enhance the capacity of the judiciary to make decisions related to AI systems as per the rule of law and in line with international law and standards, including in the use of AI systems in their deliberations, while ensuring that the principle of human oversight is upheld. In case AI systems are used by the judiciary, sufficient safeguards are needed to guarantee inter alia the protection of fundamental human rights, the rule of law, judicial independence as well as the principle of human oversight, and to ensure a trustworthy, public interest-oriented and human-centric development and use of AI systems in the judiciary.[18]
4.11In July 2025, the UN Special Rapporteur on the independence of judges and lawyers recommended that jurisdictions exercise caution when considering the adoption of AI in judicial systems.[19]
4.12While the need for global cooperation was a theme in the period between 2019 and 2024, there appears to be a shift in 2025. National AI regulatory approaches have taken different directions, and at this stage the Council of Europe Framework Convention stands as the only binding international agreement.
International AI standards
4.13During our consultations, we heard that standards bodies such as the United States National Institute of Standards and Technology and the International Organization for Standardization can help shape AI governance.[20] Courts and tribunals could look to international standards as a source of guidance.
4.14International standards provide specific, practical measures for organisations, companies and governments to commit to responsible internal processes and conduct. They align organisations with regulatory standards accepted in other parts of the world.
4.15In recognition of this, the Australian Government sought to align the Voluntary AI Safety Standard with two leading international standards on AI management systems: the International Organization for Standardization’s AS ISO/IEC 42001:2023 and the National Institute of Standards and Technology’s AI RMF 1.0.[21]
4.16Organisations like the Public Record Office Victoria highlight that international standards have an important role in benchmarking acceptable AI practices.[22] This may be useful for Victoria’s courts and VCAT when they procure third-party AI products and services. If a provider complies with international standards, this indicates that the provider meets globally recognised AI governance practices.
How AI is regulated in other countries
4.17According to one assessment, over 200 AI-related laws had been enacted around the world by the end of 2024, ranging from laws that strengthen measures against online piracy to laws that regulate AI in political advertising.[23]
4.18Internationally, jurisdictions have taken different approaches to regulating AI. These include:
•comprehensive statutory approaches—involving risk-based legislation or horizontal legislation aimed across sectors and industries
•other statutory approaches—establishing framework legislation or relying on amending existing regulation to cover AI-related issues
•non-statutory approaches—relying on existing guidelines and often anchored in high-level principles. For our purposes this includes countries that have not settled on how to regulate AI.
4.19Regulatory approaches can also be either ex-ante or ex-post:
•Ex-ante regulation—requires companies to meet certain standards before an AI system is deployed, with a focus on preventing harm before it occurs.
•Ex-post regulation—focuses on enforcement after deployment to target and reduce known risks by issuing fines or penalties in response to proven harm.[24]
4.20We discuss how these distinctions might inform regulatory responses for safe use of AI in Victoria’s courts and VCAT. In this chapter, we consider recent global developments based on the distinction between statutory and non-statutory approaches.
Comprehensive statutory approaches
4.21The following jurisdictions have enacted comprehensive legislation with strong enforceable measures aimed at AI use across sectors and industries:
•European Union: Signed the Artificial Intelligence Act 2024 (EU AI Act) into law in June 2024.[25] The EU AI Act prohibits the use of AI for certain practices, such as biometric categorisations based on sensitive traits, predicting the risk of a person’s criminal behaviour based on their personality traits and characteristics[26] and real-time biometric identification in public spaces for law enforcement (with some exceptions).[27] The Act classifies the administration of justice as high risk and requires AI systems used for these purposes to have risk management, data governance, record keeping, human oversight and other measures to ensure transparency in how those AI systems operate.[28] The Act also imposes distinct transparency and regulatory requirements on developers of General Purpose AI models, with additional obligations for models that pose systemic risk.[29]
•South Korea: Established a comprehensive regulatory framework for AI in January 2025.[30] The Act is seen as broadly aligned with the EU AI Act. It takes a risk-based approach and introduces binding obligations for domestic and foreign companies involved in implementing high-impact AI. These obligations include a requirement to evaluate whether an AI system is high-impact,[31] and compliance requirements for companies using high-impact AI and GenAI.[32] AI business operators providing high-impact AI are required to implement a range of measures, such as a risk management plan, user protection measures, human supervision and documentation of these processes.[33]
4.22Other nations are currently considering comprehensive or risk-based laws:
•Brazil: Passed a Senate Bill in December 2024 to establish a regulatory framework covering the development, use and governance of AI systems. The Bill is set to be voted on by the Chamber of Deputies before approval by the President.[34] It builds on Brazil’s national AI strategy and three prior non-legislated Bills focused on reducing harm and promoting innovation. The Bill prohibits excessive-risk systems that classify or rank individuals based on their social behaviour or personality traits, and bans live biometric identification systems unless there is express legal or judicial authorisation.[35] It also classifies the use of AI in the administration of justice as high risk.[36]
•Mexico: A Bill was introduced in February 2025 to adopt a General Law on the Use of AI.[37] The Bill seeks to amend the Mexican Constitution to grant its Congress authority to legislate about this matter. The proposed law sets limits on the development and deployment of AI in Mexico and outlines specific prohibitions to prevent its misuse. It also proposes the creation of a national registry on AI systems.[38]
•Chile: Proposed an AI Bill which sits between self-regulation and risk-based regulation.[39] The draft law sets out internationally accepted ethical principles, aligned with the UNESCO Recommendation on the Ethics of AI. The Bill classifies AI systems into those presenting an unacceptable risk, high risk, limited risk and no evident risk.[40]
•Canada: Progress on the Artificial Intelligence and Data Bill, discussed in our consultation paper, slowed following its second reading.[41] The Bill lapsed with the Canadian federal election in April 2025. This omnibus Bill would have established three separate Acts in areas of law related to consumer privacy and personal data protection, as well as AI.
4.23Other countries have introduced Bills that contain enforceable measures targeted at specific risks or uses, such as the Philippines,[42] or have members of legislatures that are proposing comprehensive risk-based approaches, such as deputies to the National People’s Congress in China.[43]
Other statutory approaches
4.24Other governments have proposed standalone legislation but avoided new powers of enforcement. Instead, they have relied on:
•existing laws
•existing regulatory bodies
•non-binding measures to encourage compliance.
4.25Some jurisdictions have looked to establish ‘framework legislation’, which the Australian Government has described as legislation focused on adapting existing regulatory frameworks.[44] Framework legislation would ‘provide a consistent set of definitions and measures that would then be implemented through amendments to existing regulatory frameworks’ and define ‘the guardrails to apply and the threshold for when they would apply’.[45] This approach can include establishing government agencies to implement national AI strategies, sharing information across government or monitoring risk.
4.26Peru became the first country in South America to enact legislation concerning AI, in July 2023.[46] The law is narrowly focused and establishes the Presidency of the Council of Ministers as the national authority responsible for directing, evaluating and supervising the development of AI in the country.[47]
4.27El Salvador passed a law in February 2025 that is focused on the promotion of AI and technologies.[48] The law establishes a National Artificial Intelligence Agency to coordinate and supervise obligations established under the law. It also exempts developers who use open domain data and list themselves on the national register from liability for the unintended consequences of AI.[49]
4.28In May 2025, the Japanese parliament enacted a law to promote the development of AI.[50] Rather than imposing regulations or penalties on AI developers, the Act sets out general principles and lacks specific binding obligations.[51] In early 2024, the Japanese Government had been progressing comprehensive and enforceable legislation, which was abandoned in favour of the approach enacted in May 2025.[52]
4.29The National Assembly of Vietnam also officially adopted the Law on Digital Technology Industry in June 2025, which contains dedicated sections on AI.[53] The overall legal framework promotes adopting digital technologies and includes measures to boost domestic innovation, strengthen the digital workforce and attract international talent and capital.
4.30In Taiwan, the National Science and Technology Council published a draft Act on AI in July 2024.[54] The Bill was approved by Taiwan’s Executive Yuan for deliberation in August 2025.[55] The Bill is focused on encouraging technological innovation and lacks regulatory measures, such as those reflected in the EU AI Act.
4.31Some of these statutory approaches are aligned with the OECD’s Recommendation of the Council for Agile Regulatory Governance to Harness Innovation, which focuses on:
•adjusting regulatory management tools
•laying institutional foundations that enable cooperation and ‘joined up’ approaches
•developing governance frameworks for ‘agile’ regulation
•adapting enforcement measures to a country’s evolving needs.[56]
4.32These kinds of statutory approaches may evolve over time to include enforceable legislative measures.
Non-statutory approaches to AI regulation
4.33Non-statutory approaches cover a wide range of regulatory responses by different countries. In addition, some governments are in a state of flux, and it is difficult to establish where their regulatory approach will land.
4.34Since October 2024, the United States has shifted to a fragmented regulatory framework with the change of government and the rollback of Executive Orders announced under the previous administration.[57] In July 2025, the White House announced an AI Action Plan focused on fostering innovation, building infrastructure to support AI (such as data centres) and international diplomacy efforts that drive adoption of US technology around the world.[58]
4.35In contrast, Californian legislators have enacted AI-related laws focused on risk assessment, privacy protections and transparency in the use of training data at the state level.[59] These were introduced after the California Governor vetoed a bill that would have created AI safety standards protecting people from ‘critical harms’.[60] Illinois legislators have also amended existing human rights laws to prohibit employers from using AI in employment practices in ways that discriminate, and to require employers to notify employees where AI is used in employment decisions.[61]
4.36Other countries have focused on non-binding regulatory strategies such as principles or guidelines, with regulators enforcing existing laws against AI developers and companies. These approaches leverage existing regulatory avenues rather than creating new ones.
4.37Singapore has not proposed specific legislation to govern AI. The Singaporean regulatory approach is described in the National AI Strategy 2.0.[62] This is supplemented by sector-based regulations and guidelines, such as the Info-Communications Media Development Authority’s Proposed Model AI Governance Framework for GenAI and the AI Verify Testing Framework developed by the AI Verify Foundation, an AI governance framework and software toolkit.[63]
4.38England and Wales have also indicated a non-legislative approach. The AI Opportunities Action Plan was commissioned to explore how AI can be harnessed to support economic growth.[64] The UK Government endorsed all 50 recommendations and committed to implementing them over 12 months.[65]
4.39These non-statutory approaches in part reflect current global uncertainties. But some countries have determined appropriate AI regulation does not require legislative change.
Considering different regulatory approaches
4.40Around the world, countries are taking different approaches to the role of legislative reform in the regulation of AI. At a broad level, statutory and non-statutory regulatory responses each have strengths and weaknesses, although this will vary depending on the context and the issues to be addressed.
4.41Statutory approaches can create legally enforceable obligations, but they may not be as well equipped to:
•keep pace with rapidly changing technologies
•be effective, if legislative compliance is difficult to understand, monitor or assess
•protect those adversely affected, where regulators are not appropriately empowered to administer the legislation.[66]
4.42Introducing statutory approaches can be challenging for fast-moving issues, where risks to be addressed are not yet clear or are changing.[67]
4.43Non-statutory approaches are faster to implement and easier to adapt than legislation. But while they can set regulatory expectations, non-statutory approaches are unenforceable. This means that:
•compliance is likely to be incomplete
•bad actors are most likely to claim they are compliant and are the least likely to comply
•the public is less likely to trust voluntary measures
•tailored principles and guidelines may multiply, creating uncertainty and confusion
•effectiveness is difficult to measure.[68]
4.44The diversity of regulatory approaches from around the world indicates both statutory and non-statutory approaches to AI regulation have important roles to play.
4.45It is useful to consider the breadth of AI regulatory approaches to inform what is most suitable for Victoria’s courts and VCAT. Most stakeholders supported principles (Chapter 6) and guidelines (Chapters 7 and 8), though we were cautioned that principles alone may not be effective. Generally, we heard that non-statutory approaches were considered more flexible than legislation in responding to the rapidly changing AI landscape in the context of courts. We were also cautioned about introducing court-specific legislative change at this early stage (Chapter 5). It was recognised that national regulatory responses could influence Victoria’s courts and VCAT.
Regulation at the national level
4.46Australia does not have AI-specific regulation, but the Australian Government has explored a risk-based regulatory approach that involves ex-ante preventative measures. This proposal includes consideration of ways to strengthen existing laws.[69]
4.47The Productivity Commission has recommended that AI-specific regulation should be a last resort, reserved for situations where existing or technology-neutral regulation cannot adequately mitigate the risk of harm.[70]
4.48Regulatory responses by the Australian Government may impact approaches to the safe use of AI in Victoria’s courts and VCAT. The Victorian Government is engaged with state and national counterparts on the development of a national regulatory approach.
4.49In September 2024, the Australian Government published a paper proposing mandatory guardrails for AI in high-risk settings.[71] The Australian Government’s approach builds on its interim response to the Safe and Responsible AI in Australia discussion paper.[72]
4.50Similar guardrails were detailed in the Voluntary AI Safety Standard, also published in September 2024. These aim to provide practical guidance to Australian companies and organisations about the safe use of AI.[73] The standard is supported by eight voluntary principles called Australia’s AI Ethics Principles. The principles seek to ensure the safe, secure and reliable use of AI.[74]
4.51Also relevant to AI regulation is the Privacy and Other Legislation Amendment Act 2024 (Cth),[75] which implemented the first tranche of the 106 recommendations made in the Privacy Act Review Report.[76] The amendments require companies and other organisations to include information in their privacy policies about automated decisions which affect the rights of individuals.[77] However, the Privacy Act 1988 (Cth) does not apply to Victoria’s courts or tribunals. A second tranche of changes is anticipated in 2025.[78]
4.52The report explores a Model Law regulating facial recognition technology that was developed by the Human Technology Institute.[79] The Australian Government, in its response to the report, agreed further consideration is needed to determine how facial recognition technology and other uses of biometric information should be accommodated in privacy and other relevant frameworks.[80]
4.53A high-level discussion paper on copyright-related issues and transparency was circulated to the Copyright and Artificial Intelligence Reference Group in September 2024. The discussion paper raised two areas that could impact the proposal for mandatory guardrails:
•whether data used to train, fine-tune or test an AI model should be legally obtained
•whether data sources need to be disclosed.[81]
4.54It explored whether organisations and companies developing GenAI tools should ensure the content they create can be detected as artificially generated or manipulated.
4.55A summary of responses highlighted general support for transparency requirements in relation to AI and that transparency measures should be supported through whole-of-economy regulation rather than changes to the Copyright Act 1968 (Cth).[82]
Victorian Government alignment with national framework
4.56In June 2024, data and digital ministers representing the Australian, state and territory governments endorsed a National Framework for the Assurance of Artificial Intelligence in Government (the National Framework).[83] The Policy for the responsible use of AI in government was later released and is intended to unify approaches to the governance, assurance and transparency of AI across the Australian Public Service.[84]
4.57The Victorian Government has developed an Administrative Guideline based on the National Framework.[85] The guideline applies minimum standards to public service bodies and public entities as defined under the Public Administration Act 2004 (Vic).[86] The guideline is supported by guidance that provides direct and practical advice to Victorian public sector employees, contractors, consultants and volunteers about the safe and appropriate use of GenAI.[87]
4.58Victoria’s courts and VCAT and staff employed by Court Services Victoria (CSV) are not currently required to apply the guideline. However, CSV has stated that it operates within government frameworks and seeks to apply the Victorian guidelines.[88]
4.59Other state agencies such as the Public Record Office Victoria and the Office of the Victorian Information Commissioner have released standards and targeted guidance relevant to the use and implementation of GenAI in the Victorian Government.[89] Some of these developments may be relevant to Victoria’s courts and tribunals.[90]
How risk-based national regulation may impact Victoria’s courts and VCAT
4.60It is difficult to predict which position the Australian Government will ultimately take. In November 2024, the Commonwealth Parliamentary Select Committee on Adopting AI recommended that the Australian Government introduce dedicated, whole-of-economy legislation to regulate high-risk uses of AI. This is in line with one of the three options provided in the mandatory guardrails proposals paper.[91] In contrast, in August 2025 the Productivity Commission in an interim report advised the Australian Government to pause steps to implement mandatory guardrails for high-risk AI until a review of regulatory gaps could be completed.[92]
4.61In its proposal paper, the Australian Government explored how mandatory guardrails could apply to all General Purpose AI.[93] It outlined that General Purpose AI systems ‘pose unforeseeable risks because they can be applied in contexts they were not originally designed for’.[94] This is relevant to courts because national regulation could require, for example, developers of GenAI systems used in legal research to comply with mandatory safety requirements.[95] This would potentially enhance trust and confidence in AI systems used in Victoria’s courts and VCAT, particularly for public AI tools that litigants and lawyers may use in court and tribunal proceedings.
4.62The Australian Government also considered a principles-based definition of ‘foreseeable high-risk uses’. This would designate AI systems as high-risk (and subject to mandatory guardrails) where a combination of factors is present, such as the risk of adverse:
•impacts to an individual’s recognised human rights
•impacts to an individual’s physical or mental health or safety
•legal effects, defamation or similarly significant effects on an individual
•impacts to groups of individuals or collective rights of cultural groups
•impacts to the Australian economy, society, environment and rule of law.[96]
4.63A principles-based approach to the definition of high-risk could have implications for Victoria’s courts and VCAT given that:
•the misapplication of AI by courts and tribunals would risk adverse impacts on Australian society and the rule of law
•the decision of any court or tribunal may produce an adverse legal effect for an individual
•the administration of criminal cases directly risks adverse impacts on an individual’s human rights such as the right to liberty[97]
•the use of AI in other court proceedings and in court administration could jeopardise the right to equality before the law and a fair hearing[98]
•AI tools used to predict recidivism, bail or sentence length in other jurisdictions have been documented to contain bias and could risk adverse impacts for groups and cultural communities[99]
•data breaches relating to personal and sensitive information collected by court administration may impact upon privacy rights of court users.[100]
4.64An alternative proposed by the Australian Government is a list-based approach that could designate any use of AI in the administration of justice as high-risk, alongside a range of specific exemptions.[101]
4.65The Office of the Victorian Information Commissioner had concerns about whether a national approach would cover certain high-risk GenAI uses.[102] This concern remains relevant given a national approach to AI regulation has not been determined.
4.66It is not yet clear what approach will be taken by the Australian Government. But if AI systems, applications and tools used in courts and tribunals are designated as high-risk or parts of court functions are considered high-risk, compliance with mandatory guardrails will need to be integrated into court and tribunal approaches.
What will national regulation of AI mean for Victoria’s courts and VCAT?
4.67We heard that there are constitutional limits on the extent to which the Australian Government can define how state courts operate.[103]
4.68The Supreme Court stated:
The Court would be wary of legislative reforms to regulate the use of AI if those reforms have the effect of curtailing the judicial process. However, it is recognised that a cautious approach is required to the use of AI in relation to judicial functions.[104]
4.69In the future, states may need to introduce legislation to apply elements of a national regulatory framework to courts. Any such reforms would need to be carefully considered to ensure they do not infringe the separation of powers principle, which requires the judiciary to operate independently and without interference from the executive arm of government.[105]
4.70Despite these limitations, representatives of the Human Technology Institute recognised that there are still ‘many areas of activity—especially within the broad domain of legal practice—that can be regulated by the Commonwealth [Australian] Government’.[106]
4.71In practice, this means the Australian Government can regulate the organisations and companies that develop and provide AI tools used by litigants, lawyers, Victoria’s courts and VCAT. A representative of Digital Rights Watch suggested intervention by the Australian Government could provide a form of accreditation for safe and trustworthy tools, stating that it ‘may be helpful to have a government labelling service that endorses algorithms that comply with voluntary standards’.[107]
4.72The County Court endorsed the role of the Australian Government in shaping AI regulation stating a ‘broad approach to regulation at the Federal level will allow governments and courts to adopt a consistent approach with the rest of the nation’.[108]
4.73Additional amendments to privacy laws were also anticipated to impact the procurement of AI products and tools in Victoria’s courts.[109] CSV considered that privacy changes to legislation at state or national levels would have a flow-on effect in the ongoing review of its own AI Framework.[110]
4.74Regulation at the national level has an important role to play in assisting Victoria’s courts and VCAT to make AI use safer by:
•setting the public policy objectives of AI regulation nationally
•defining national general principles and standards that guide planning and regulation in various jurisdictions
•coordinating policy making and information sharing across jurisdictions and sectors
•filling gaps in existing laws and regulatory schemes where specific sectors are not equipped to address harms.
4.75A national regulatory approach would have the benefit of consistently addressing opportunities and risks related to high-risk AI systems, applications and tools. This may have an impact on the type and quality of AI tools available for use in Victoria’s courts and tribunals. While national AI regulation is important for consistency, the Commission emphasises consideration must be given to ensure any national reform maintains judicial independence.
Recommendation 1.The Victorian Government should collaborate with the Australian Government and other states and territories to develop a consistent and agreed national approach to AI regulation. While national AI regulation is important for safety and consistency, care must be given to ensure any national reform maintains judicial independence.
-
Submission 10 (Castan Centre for Human Rights Law, Monash University).
-
Submission 15 (Human Rights Law Centre).
-
United Nations Secretary-General, Roadmap for Digital Cooperation: Report of the Secretary-General (Report, June 2020) <https://digitallibrary.un.org/record/3978036?v=pdf&ln=en>; See United Nations Educational, Scientific and Cultural Organization (UNESCO), Draft Guidelines for the Use of AI Systems in Courts and Tribunals (Guidelines, May 2025) <https://unesdoc.unesco.org/ark:/48223/pf0000393682>; Notably Colombia’s Superior Council of the Judiciary recently adapted the draft guidelines in partnership with UNESCO: ‘Justice Meets Innovation: Colombia’s Groundbreaking AI Guidelines for Courts’, UNESCO (Web Page, 1 April 2025) <https://www.unesco.org/en/articles/justice-meets-innovation-colombias-groundbreaking-ai-guidelines-courts>. See also United Nations Secretary-General, Human Rights in the Administration of Justice: Report of the Secretary-General, UN Doc A/79/296 (7 August 2024).
-
Prime Minister’s Office, 10 Downing Street, Foreign, Commonwealth & Development Office and Department for Science, Innovation and Technology (UK), The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 (Policy Paper, 1 November 2023) <https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023>; ‘The Seoul Declaration by Countries Attending the AI Seoul Summit, 21-22 May 2024’, Department of Industry, Science and Resources (Web Page, 24 May 2024) <https://www.industry.gov.au/publications/seoul-declaration-countries-attending-ai-seoul-summit-21-22-may-2024>.
-
‘Hiroshima AI Process – Friends Group’, Hiroshima AI Process (Web Page, May 2025) <https://www.soumu.go.jp/hiroshimaaiprocess/en/meeting.html>.
-
Ministers of the Global Partnership on Artificial Intelligence (GPAI), 2024 GPAI New Delhi Declaration (Report, 3 July 2024) <https://wp.oecd.ai/app/uploads/2025/01/gpai-new-delhi-declaration-2024.pdf>; ‘Australia Supports Safe AI through New GPAI Declaration’, Department of Industry, Science and Resources (Web Page, 29 August 2024) <https://www.industry.gov.au/news/australia-supports-safe-ai-through-new-gpai-declaration>.
-
‘Australia Supports Collaborative AI Research through GPAI Belgrade Declaration’, Department of Industry, Science and Resources (Web Page, 15 January 2025) <https://www.industry.gov.au/news/australia-supports-collaborative-ai-research-through-gpai-belgrade-declaration>.
-
Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, 22 May 2019, 3 <https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449>. The Recommendation underwent subsequent revisions in November 2023 and May 2024.
-
Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council for Agile Regulatory Governance to Harness Innovation, OECD/LEGAL/0464, 6 October 2021, 12 [5] <https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0464>.
-
United Nations, Seizing the Opportunities of Safe, Secure and Trustworthy Artificial Intelligence Systems for Sustainable Development, UN Doc A/78/L.49 (11 March 2024) <https://docs.un.org/A/78/L.49>.
-
Council of Europe, Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature 5 September 2024, CETS No. 225 <https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence>.
-
Global Affairs Canada, Canada Signs the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (News Release, 11 February 2025) <https://www.canada.ca/en/global-affairs/news/2025/02/canada-signs-the-council-of-europe-framework-convention-on-artificial-intelligence-and-human-rights-democracy-and-the-rule-of-law.html>; Ministry of Foreign Affairs of Japan, Signing of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Press Release, 11 February 2025) <https://www.mofa.go.jp/press/release/pressite_000001_00983.html>.
-
Council of Europe, Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature 5 September 2024, CETS No. 225, ‘Chart of signatures and ratifications of Treaty 225’ <https://www.coe.int/en/web/conventions/full-list?module=signatures-by-treaty&treatynum=225>.
-
United Nations, Global Digital Compact, UN Doc A/79/L.2 (adopted 22 Sept 2024) <https://www.un.org/global-digital-compact/sites/default/files/2024-09/Global%20Digital%20Compact%20-%20English_0.pdf>.
-
Ibid [51].
-
‘Australia Signs Paris AI Action Summit Statement’, Department of Industry, Science and Resources (Web Page, 14 February 2025) <https://www.industry.gov.au/news/australia-signs-paris-ai-action-summit-statement>.
-
Artificial Intelligence Action Summit, Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet (Report, 11 February 2025) <https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet>.
-
United Nations Educational, Scientific and Cultural Organization (UNESCO), Recommendation on the Ethics of Artificial Intelligence (2022, Adopted on 23 Nov 2021) 28 [63] <https://unesdoc.unesco.org/ark:/48223/pf0000381137>.
-
Margaret Satterthwaite, Special Rapporteur, AI in Judicial Systems: Promises and Pitfalls: Report of the Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, UN Doc A/80/169 (16 July 2025) <https://docs.un.org/en/A/80/169>.
-
The International Organization for Standardization is a non-government organisation that brings together global experts to develop international standards relating to technology and manufacturing, including AI. See ‘ISO: Global Standards for Trusted Goods and Services’, ISO (Web Page) <https://www.iso.org/home.html>; National Institute of Standards and Technology is an agency of the United States Department of Commerce whose responsibilities include cultivating trust in the design, development, use and governance of AI technologies and systems. See ‘What We Do’, NIST: National Institute of Standards and Technology (Web Page) <https://www.nist.gov/>.
-
Department of Industry, Science and Resources (Cth), National Artificial Intelligence Centre, and CSIRO, Voluntary AI Safety Standard (Report, August 2024) <https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf>; Standards Australia, ‘AS ISO/IEC 42001:2023 Information Technology – Artificial Intelligence – Management System’ <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-42001-2023>.
-
Submission 13 (Name withheld); Consultation 21 (Public Record Office Victoria).
-
Nestor Maslej et al, The 2025 AI Index Report (Report, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, April 2025) 337 <https://hai.stanford.edu/ai-index/2025-ai-index-report>.
-
Gianclaudio Malgieri and Frank Pasquale, ‘Licensing High-Risk Artificial Intelligence: Toward Ex Ante Justification for a Disruptive Technology’ (2024) 52 Computer Law & Security Review 105899, 2 <https://www.sciencedirect.com/science/article/pii/S0267364923001097>.
-
Regulation (EU) 2024/1689 (Artificial Intelligence Act) [2024] OJ L 2024/1689, noting that implementation of the Act is taking a staged approach extending until 2 August 2027.
-
Ibid ch II, art 5. However, ‘this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity’: at art 5(1)(d).
-
Ibid.
-
Ibid ch III art 6(2) and annex III.
-
Ibid ch V.
-
‘Framework Act on the Development of Artificial Intelligence and Establishment of Trust (English Translation)’, Center for Security and Emerging Technology, Georgetown University (Web Page, 9 July 2025) <https://cset.georgetown.edu/publication/south-korea-ai-law-2025/>; Ingong Baljeongwa Shinroe Guiban Joseong Deunge Gwanhan Gibbonberban (Daean) [Basic Act (Alternative) on the Development of Artificial Intelligence and the Creation of a Trust Foundation (English Translation)] (S.Kor, Law No. 20676).
-
Lee & Ko, ‘A New Era for AI: Republic of Korea Takes a Bold Step with AI Regulation’, asialaw (online, 10 January 2025) <https://www.asialaw.com/NewsAndAnalysis/a-new-era-for-ai-republic-of-korea-takes-a-bold-step-with-ai-regulation/Index/2228>.
-
‘Framework Act on the Development of Artificial Intelligence and Establishment of Trust (English Translation)’, Center for Security and Emerging Technology, Georgetown University (Web Page, 9 July 2025) art II (4) <https://cset.georgetown.edu/publication/south-korea-ai-law-2025/>.
-
Ibid art 34 (1).
-
PL 2338/2023 [Bill No. 2338, of 2023] (Brazil).
-
Ibid arts 14 and 15.
-
Ibid art 17.
-
Proyecto de decreto por el que se expide la Ley Federal que regula la Inteligencia Artificial [Draft decree issuing the Federal Law that regulates Artificial Intelligence (English translation)], 2 April 2024 (Congress of the United Mexican States).
-
Kimberly Breier, Gerónimo Gutiérrez Fernández and Lorena Montes de Oca, ‘New Artificial Intelligence Legislation in Mexico’, Global Policy Watch (Web Page, 14 March 2025) <https://www.globalpolicywatch.com/2025/03/new-artificial-intelligence-legislation-in-mexico/>.
-
‘Chile Launches National AI Policy and Introduces AI Bill Following UNESCO´s Recommendations’, UNESCO (Web Page, 6 May 2024) <https://www.unesco.org/en/articles/chile-launches-national-ai-policy-and-introduces-ai-bill-following-unescos-recommendations>.
-
Ministry of Science, Technology, Knowledge and Innovation (Chile), ‘Con enfoque basado en riesgos, gobierno presenta proyecto de ley para regular usos de la Inteligencia Artificial’ [With a risk-based approach, government presents bill to regulate uses of Artificial Intelligence (English translation)], MinCiencia (Web Page, 10 May 2024) <https://www.minciencia.gob.cl/noticias/con-enfoque-basado-en-riesgos-gobierno-presenta-proyecto-de-ley-para-regular-usos-de-la-inteligencia-artificial/>.
-
Blair Attard-Frost, ‘The Death of Canada’s Artificial Intelligence and Data Act: What Happened, and What’s Next for AI Regulation in Canada?’, Montreal AI Ethics Institute (Web Page, 17 January 2025) <https://montrealethics.ai/the-death-of-canadas-artificial-intelligence-and-data-act-what-happened-and-whats-next-for-ai-regulation-in-canada/>; Artificial Intelligence and Data Act (AIDA) (Canada).
-
Nilo Divinia and Jay-r Ipac, ‘AI and the Law in the Philippines’, Asia Business Law Journal (Web Page, 15 April 2024) <https://law.asia/ai-law-philippines/>.
-
Zhu Ningning, ‘Duōmíng Dàibiǎo Tíchū Guānyú Zhìdìng Réngōngzhìnéng Fǎde Yìàn Gòujiàn Quánmiàn Kēxué de Réngōngzhìnéng Fǎlǜ Zhìdù Tǐxì [Several Deputies Proposed a Motion on Formulating an Artificial Intelligence Law to Establish a Comprehensive and Scientific Legal System for AI (English Translation)]’, Legal Daily – NPC (online, 17 June 2025) <http://epaper.legaldaily.com.cn/fzrb/content/20250617/Articel05005GN.htm>.
-
Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 48–9.
-
Ibid 48.
-
Ley N° 31814, Ley que promueve el uso de la inteligencia artificial en favor del desarrollo económico y social del país [Law No. 31814, Law that promotes the use of Artificial Intelligence in favor of the economic and social development of the country (English translation)], 5 July 2023 (Peru).
-
‘Peru: Implemented Law Promoting the Use of Artificial Intelligence for the Economic and Social Development of Peru (No. 31814)’, Digital Policy Alert (Web Page) <https://digitalpolicyalert.org/event/13763-implemented-law-promoting-the-use-of-artificial-intelligence-for-the-economic-and-social-development-of-peru-no-31814>; Sebastian Smart and Victor M Montori, ‘Peru’s AI Regulatory Boom: Quantity Without Depth?’, Harvard Kennedy School, Carr-Ryan Centre for Human Rights (Web Page, 23 April 2025) <https://www.hks.harvard.edu/centers/carr-ryan/our-work/carr-ryan-commentary/perus-ai-regulatory-boom-quantity-without-depth>.
-
Decreto N.o 234: Ley de Fomento a Inteligencia Artificial y Tecnologias [Decree No. 234: Law for the Promotion of Artificial Intelligence and Technologies (English Translation)], 3 March 2025 (El Salvador).
-
‘El Salvador Unveils New Law to Shape AI Development.’, El Salvador in English (Web Page, 10 February 2025) <https://elsalvadorinenglish.com/2025/02/10/el-salvador-unveils-new-law-to-shape-ai-development/>.
-
Kensuke Inueo and Chika Kamata, ‘Japan’s Emerging Framework for Responsible AI: Legislation, Guidelines and Guidance’, International Bar Association (Web Page, 16 July 2025) <https://www.ibanet.org/japan-emerging-framework-ai-legislation-guidelines>.
-
Jinkou Chinou Kanren Gijutsu No Kenkyuu Kaihatsu Oyobi Katsuyou No Suishin Nikansuru Houritsu [Act on Promotion of Research and Development and Utilization of Artificial Intelligence-Related Technologies (English Translation)] Act No. 53 of 2025 (Japan).
-
Hiroki Habuka, ‘New Government Policy Shows Japan Favors a Light Touch for AI Regulation’, Wadhwani AI Center, Center for Strategic and International Studies (Web Page, 25 February 2025) <https://www.csis.org/analysis/new-government-policy-shows-japan-favors-light-touch-ai-regulation>.
-
Phong Anh Hoang, ‘Vietnam: Landmark Law on Digital Technology Industry – New Frameworks for AI & Digital Assets’, DFDL (Web Page, 24 June 2025) <https://www.dfdl.com/insights/legal-and-tax-updates/vietnam-landmark-law-on-digital-technology-industry-new-frameworks-for-ai-digital-assets/>.
-
National Science and Technology Council, Republic of China, ‘Yùgàozhìdìng “Réngōng zhìhuì jīběnfǎ” cǎoàn [Announcement of the Draft “Artificial Intelligence Basic Law” (English translation)]’, National Development Council (Web Page, 13 September 2025) <https://join.gov.tw/policies/detail/4c714d85-ab9f-4b17-8335-f13b31148dc4>.
-
Department of Information Services, Executive Yuan, ‘Executive Yuan Approves Draft Bill for Basic Law on AI’, Taiwan’s Executive Yuan (Web Page, 28 August 2025) <https://english.ey.gov.tw/Page/61BF20C3E89B856/89da216e-5741-43e4-aac8-af4551a21499>.
-
Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council for Agile Regulatory Governance to Harness Innovation, OECD/LEGAL/0464, 6 October 2021, <https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0464>.
-
Removing Barriers to American Leadership in Artificial Intelligence 2025, Exec. Order No.14179, 90 FR 8741 (2025).
-
Executive Office of the President of the United States, Winning the Race: America’s AI Action Plan (Report, 24 July 2025) <https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf>.
-
For instance, California Consumer Privacy Act of 2018: Personal Information, CA AB1008 (2024) grants consumers various rights with respect to personal information collected by AI developers and companies by amending section 1798.140 of the Civil Code; Contracts against Public Policy: Personal or Professional Services: Digital Replicas, CA AB2602 (2024) adds section 927 to the Labor Code to prohibit an employer from requiring an employee to agree to a term or condition that is known by the employer to be illegal and creates prohibitions on the use of AI-generated avatars; Health Care Services: Artificial Intelligence, CA AB3030 (2024) amends the Health and Safety Code at Section 1339.75 to require health providers to state when a communication was generated by AI.
-
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, CA SB1047 (2024).
-
Pub. Act 103-0804 2024 (Ill).
-
Ministry of Communications and Information (Singapore) and Smart Nation Singapore, National AI Strategy 2.0 (Report, 4 December 2023) <https://www.smartnation.gov.sg/initiatives/national-ai-strategy>.
-
AI Verify Foundation and Infocomm Media Development Authority of Singapore, Model AI Governance Framework for Generative AI: Fostering a Trusted Ecosystem (Report, 30 May 2024) <https://aiverifyfoundation.sg/wp-content/uploads/2024/05/Model-AI-Governance-Framework-for-Generative-AI-May-2024-1-1.pdf>; AI Verify, ‘AI Verify: AI Governance Testing Framework and Toolkit’, Personal Data Protection Commission, Singapore (Web Page, 25 May 2022) <https://www.pdpc.gov.sg/news-and-events/announcements/2022/05/launch-of-ai-verify—an-ai-governance-testing-framework-and-toolkit>; Jason Grant Allen, Jane Loo and Jose Luis Luna Campoverde, ‘Governing Intelligence: Singapore’s Evolving AI Governance Framework’ (2025) 1 Cambridge Forum on AI: Law and Governance e12, 5 <https://www.cambridge.org/core/journals/cambridge-forum-on-ai-law-and-governance/article/governing-intelligence-singapores-evolving-ai-governance-framework/5E54A373E193E2D51354ADC1F509B9B4#>.
-
Matt Clifford, AI Opportunities Action Plan: Ramping up AI Adoption across the UK to Boost Economic Growth, Provide Jobs for the Future and Improve People’s Everyday Lives (Report, Department of Science, Innovation and Technology (UK), 13 January 2025) <https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan>.
-
Department for Science, Innovation & Technology (UK), AI Opportunities Action Plan: Government Response (Policy Paper No CP 1242, 13 January 2025) <https://www.gov.uk/government/publications/ai-opportunities-action-plan-government-response/ai-opportunities-action-plan-government-response>.
-
A concern relating to appropriate enforcement mechanisms was expressed by Consultation 28 (Monash University Digital Law Group); Gary Marchant and Carlos Ignacio Gutierrez, ‘Soft Law 2.0: An Agile and Effective Governance Approach for Artificial Intelligence’ (2023) 24(2) Minnesota Journal of Law, Science & Technology 375, 379–384 <https://scholarship.law.umn.edu/mjlst/vol24/iss2/4>.
-
Gary Marchant and Carlos Ignacio Gutierrez, ‘Soft Law 2.0: An Agile and Effective Governance Approach for Artificial Intelligence’ (2023) 24(2) Minnesota Journal of Law, Science & Technology 375, 377 <https://scholarship.law.umn.edu/mjlst/vol24/iss2/4>.
-
Ibid 384–7.
-
Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response (Report, 2024) 22.
-
Productivity Commission, Interim Report – Harnessing Data and Digital Technology (Report, August 2025) 2, 20–1 <https://www.pc.gov.au/inquiries/current/data-digital/interim>.
-
Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024).
-
Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Discussion Paper (Discussion Paper, June 2023).
-
Department of Industry, Science and Resources (Cth), National Artificial Intelligence Centre, and CSIRO, Voluntary AI Safety Standard (Report, August 2024) <https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf>.
-
‘Australia’s AI Ethics Principles’, Department of Industry, Science and Resources (Web Page, 11 October 2024) <https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles>. The first version of these principles was published in 2019. They were updated to reflect the Voluntary AI Safety Standard in November 2024.
-
Privacy and Other Legislation Amendment Act 2024 (Cth).
-
Attorney-General’s Department (Cth), Privacy Act Review Report 2022 (Report, 2022).
-
Privacy and Other Legislation Amendment Act 2024 (Cth) pt 15.
-
Australian Government, Government Response – Privacy Act Review Report (Report, 2023); ‘Interview with the Hon Michelle Rowland MP’, Sunday Agenda (Sky News, 20 July 2025) <https://ministers.ag.gov.au/media-centre/transcripts/tv-interview-sky-news-sunday-agenda-20-07-2025>.
-
Attorney-General’s Department (Cth), Privacy Act Review Report 2022 (Report, 2022) 126.
-
Australian Government, Government Response – Privacy Act Review Report (Report, 2023) 10, 23, 28.
-
Attorney-General’s Department (Cth), Copyright and AI Reference Group – Transparency (Discussion Paper, September 2024) 3 <https://www.ag.gov.au/rights-and-protections/publications/copyright-and-ai-transparency-discussion-paper>.
-
Attorney-General’s Department (Cth), Copyright and AI Reference Group – Transparency (Discussion Paper, September 2024) <https://www.ag.gov.au/rights-and-protections/publications/copyright-and-ai-transparency-discussion-paper>; This follows on from the ‘Copyright Enforcement Review 2022-23’, Attorney-General’s Department (Web Page) 1, 3 <https://www.ag.gov.au/rights-and-protections/copyright/copyright-enforcement-review-2022-23> undertaken from November 2022 to March 2023.
-
Australian Government et al, National Framework for the Assurance of Artificial Intelligence in Government: A Joint Approach to Safe and Responsible AI by the Australian, State and Territory Governments (Report, 21 June 2024).
-
Digital Transformation Agency (Cth), Policy for the Responsible Use of AI in Government (Version 1.1, 1 September 2024) <https://www.digital.gov.au/policy/ai/policy>; This is supported by the Digital Transformation Agency (Cth), Australian Government’s AI Technical Standard (Version 1, July 2025) <https://www.digital.gov.au/policy/ai/AI-technical-standard>.
-
Department of Premier and Cabinet (Vic), Administrative Guideline Direction on the Use of DeepSeek Products, Applications and Web Services (No 2025/1, Issue:1.0, February 2025) <https://www.vic.gov.au/sites/default/files/2025-02/Administrative-Guideline-DeepSeek.pdf>; Department of Premier and Cabinet (Vic), Administrative Guideline – The Safe and Responsible Use of Generative AI in the Victorian Public Sector (No 2024/07, Issue 1.0, November 2024) <https://www.vic.gov.au/sites/default/files/2024-11/Generative-AI-Guideline-%281%29.pdf>.
-
Public Administration Act 2004 (Vic).
-
Department of Government Services, Guidance for the Safe and Responsible Use of Generative AI in the Victorian Public Sector (Report, Victorian Government, 19 March 2025) <https://www.vic.gov.au/guidance-safe-responsible-use-gen-ai-vps>.
-
Submission 25 (Court Services Victoria).
-
Such as Office of the Victorian Information Commissioner (OVIC), Artificial Intelligence – Understanding Privacy Obligations (Report, April 2021) <https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/>; Public Record Office Victoria, Artificial Intelligence (AI): Capturing and Managing Records Generated by or Using AI Technologies (Web Page, 31 July 2025) <https://prov.vic.gov.au/recordkeeping-government/a-z-topics/AI>.
-
For instance, a report by the Victorian Commissioner for Economic Growth about Victoria’s use of AI is yet to be published. Originally due in October 2024, the report was set to include analysis of ‘whole of government activities for facilitating the adoption of AI including appropriate policy, legislative and regulatory frameworks’: Commissioner for Economic Growth, ‘Review of Artificial Intelligence Use in Victoria – Terms of Reference’, VIC.GOV.AU (Web Page, 5 June 2024) <https://www.vic.gov.au/review-artificial-intelligence-use-victoria-terms-reference>.
-
Senate Select Committee on Adopting Artificial Intelligence (AI), Parliament of Australia, Report of the Select Committee on Adopting Artificial Intelligence (AI) (Final Report, November 2024) 169 <https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Adopting_Artificial_Intelligence_AI/AdoptingAI/Report>.
-
Productivity Commission, Interim Report – Harnessing Data and Digital Technology (Report, August 2025) 21 <https://www.pc.gov.au/inquiries/current/data-digital/interim>. The final inquiry report is to be handed to the Australian Government in December 2025.
-
Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 29.
-
Ibid 28.
-
Ibid 35–42.
-
Ibid 19.
-
Charter of Human Rights and Responsibilities Act 2006 (Vic) s 21.
-
Ibid ss 8, 24.
-
Ibid s 8; See Julia Angwin et al, ‘Machine Bias’, ProPublica (online, 23 May 2016) <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>; Jeff Larson et al, ‘How We Analyzed the COMPAS Recidivism Algorithm’, ProPublica (online, 23 May 2016) <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>.
-
Charter of Human Rights and Responsibilities Act 2006 (Vic) s 13.
-
Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 25–7.
-
Consultation 14 (Office of the Victorian Information Commissioner).
-
Consultation 34 (Human Technology Institute); This was also reflected in concerns about implementation of the EU AI Act as identified by Consultation 28 (Monash University Digital Law Group).
-
Submission 26 (Supreme Court of Victoria).
-
See Kable v DPP [1996] HCA 24; (1996) 189 CLR 51, which establishes the principle that the judicial independence of state Supreme Courts should not be impaired by state legislation, also known as the Kable doctrine.
-
Consultation 34 (Human Technology Institute).
-
Consultation 17 (Digital Rights Watch).
-
Submission 24 (County Court of Victoria).
-
Consultations 8 (Federation of Community Legal Centres Workshop), 17 (Digital Rights Watch).
-
Submission 25 (Court Services Victoria).
