AI Under the Gavel: The Imperative for Legal Regulation

The rise of AI has prompted the legal system to reevaluate its regulations regarding the use of AI tools in litigation and investigation.

Judicial figures within the Australian legal system typically hold lengthy appointments and are not subject to electoral pressures, and doctrines like stare decisis mean that, once established, laws tend to resist change. Yet the rapid rise of AI has prompted the legal system, an institution known for its deliberate pace and designed to evolve slowly, to reevaluate its regulations governing the use of AI tools. This shift reflects a growing recognition of the need to adapt to the challenges and opportunities presented by AI technologies.

We explore recent developments in AI regulation in Australia, set against global trends.

Australia: A Frameless Framework?

Australia currently lacks a federal, AI-specific regulatory framework. We do, however, use existing laws to address certain aspects of AI technology1. In June 2023, the Australian Government released a discussion paper seeking input on managing AI risks2.

Earlier this year, the government issued an interim response, outlining proposed actions in four key areas:

1. Preventing harm through testing, transparency, and accountability 

2. Strengthening laws to protect citizens 

3. Collaborating internationally for safe AI development 

4. Maximising AI’s benefits

Addressing the first point, the government committed to developing an AI Safety Standard and implementing risk-based guardrails for the industry. The Standard includes both voluntary guidelines and proposals for mandatory regulations in high-risk AI settings, though the definition of "high-risk" AI applications remains unclear. The discussion paper suggested defining them as systems with "systematic, irreversible, or perpetual" impacts, such as AI used in medical surgery or self-driving cars, aligning with the EU’s approach3.

In September this year, the government released the voluntary guardrails, which form part of the AI Safety Standard4. These currently apply to all Australian organisations. The mandatory guardrails, outlined in a further discussion paper, remain under consultation and are yet to be finalised.

Global Developments in AI Regulation

AI holds immense potential, and calls for action to govern its safe development and responsible use are growing worldwide. While Australia, the US, China, and the EU adopt different regulatory approaches, all share the same goal: to create a cohesive framework for managing AI risks and harnessing its benefits responsibly.

United States

Like Australia, the US lacks comprehensive AI regulation; AI is currently governed by a mix of federal and state laws. On 30 October 2023, President Biden issued an Executive Order to guide federal AI adoption and to ensure the "safe, secure, and trustworthy" use of AI. Key provisions include requiring developers of foundational AI models to share safety test results with the government, establishing AI safety standards, and creating guidelines for content authentication and watermarking.

This move follows the cautionary case of Mata v. Avianca5, in which two New York attorneys submitted a brief citing fictitious case law generated by ChatGPT that neither opposing counsel nor the judge could verify. The judge noted that using AI tools is not inherently improper, but placed the onus on lawyers to ensure the accuracy of their filings. The attorneys were fined, highlighting the importance of diligence in AI use within legal practice.

At State level, various laws have been enacted to improve transparency, address sector-specific issues, and mandate impact assessments and data disclosures.

China

China has been proactive in AI regulation, implementing specific laws since 20216. A comprehensive AI framework is still pending. Key regulations focus on the use of algorithmic recommendation and deep synthesis technologies (a form of generative AI) for internet services, the development of generative AI, and ethical reviews of AI research. China also regularly releases new standards for public consultation, including recent updates on data security and regulation of generative AI.

EU

The EU's Artificial Intelligence Act (EU AI Act), which took effect on 1 August 2024, is the world’s first comprehensive AI regulation. The Act adopts a risk-based approach to regulating the entire AI lifecycle and imposes obligations on stakeholders across the AI value chain. AI systems are categorised into four risk levels: unacceptable, high, limited, and minimal. Most obligations target developers (“providers”) of high-risk systems, such as medical devices and critical infrastructure.

General-purpose AI models, like large language models, are addressed separately, with additional requirements for those identified as posing "systemic risk."

Similar to the EU's GDPR, the AI Act has extraterritorial reach, meaning it applies to organisations outside the EU, including those in Australia. It also imposes significant penalties for non-compliance, including substantial fines scaled to the nature of the violation.

Where to from here?

As the judicial system continues to assess the various impacts of AI, it’s encouraging to see a focus on regulation that prioritises safety without stifling innovation.

Here are a few actionable steps to help safeguard AI use:

1. Ensure your training data is diverse and representative to mitigate unintentional bias. Regularly test and audit AI models for fairness. Require vendors to disclose bias audits. Establish feedback loops to allow users to report biases and drive necessary adjustments. (A minimal fairness-audit sketch follows this list.)

2. Establish accountability by designating responsible parties to oversee AI systems, manage data security, and address performance issues. Conduct regular audits for compliance with governance standards. Maintain thorough documentation of model versions, policy updates, and key decision rationales, including evidence of human oversight. (A minimal audit-trail sketch also follows this list.)

3. Stay informed as AI governance evolves by joining AI-focused networks or working with specialists. Partnering with experts helps ensure compliance, align AI practices with business goals, and avoid costly mistakes.

4. Establish clear ethical guidelines for AI use in legal practice, emphasising accuracy and accountability in all submissions.

5. Training is essential. Legal professionals should increasingly be expected to learn about AI tools, including their capabilities and limitations, so they can apply them effectively in litigation, alongside AI fundamentals and ethics.

6. Judges are becoming more vigilant in scrutinising evidence and submissions that rely on AI-generated content. It is crucial to ensure all information is verified and reliable.
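
To make step 1 concrete, here is a minimal, illustrative sketch in Python of one common fairness check: comparing positive-outcome rates across demographic groups (a demographic parity gap). The group labels, sample data, and 0.1 tolerance are hypothetical assumptions for illustration only; a real audit would use your own model outputs and metrics chosen with legal and domain experts.

```python
# A minimal fairness-audit sketch (illustrative only). It measures the
# demographic parity gap: the largest difference in favourable-outcome
# rates between groups. Group names and the 0.1 tolerance are
# hypothetical, not drawn from any regulation or standard.

from collections import defaultdict

def demographic_parity_gap(records):
    """Return (gap, rates) across groups.

    `records` is a list of (group, predicted_positive) pairs, where
    predicted_positive is True if the model produced a favourable outcome.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        if positive:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs for two applicant groups.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)

gap, rates = demographic_parity_gap(sample)
print(f"Positive-outcome rates by group: {rates}")
if gap > 0.1:  # hypothetical tolerance; set per your own risk assessment
    print(f"WARNING: parity gap {gap:.2f} exceeds tolerance; investigate.")
```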
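
For step 2, the sketch below shows one way to keep the documentation and human-oversight trail the step describes, assuming a simple append-only JSON Lines log. The file name, field names, and model identifier are all hypothetical; a production system would add access controls and retention policies on top.

```python
# A minimal audit-trail sketch (illustrative only). Each AI-assisted
# decision is appended as a structured JSON record capturing the model
# version, the decision, its rationale, and the human who reviewed it.

import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical location

def log_decision(model_version, decision, rationale, reviewed_by):
    """Append one audit record per AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "decision": decision,
        "rationale": rationale,
        "human_reviewer": reviewed_by,  # evidences human oversight
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: logging a flagged clause from a document-review model.
log_decision(
    model_version="contract-review-v2.3",  # hypothetical model name
    decision="flagged clause 14 for manual review",
    rationale="model confidence below threshold",
    reviewed_by="j.smith",
)
```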

Whilst AI regulation is taking shape globally in various forms, there is widespread agreement on the importance of integrating ethics into its use. Ultimately, we are all on the same side in this shared consideration, and that is a positive and reassuring position to be in.

For more information, contact Siera Data

Article published with Lawyers Weekly on 8 November 2024.
