AI on Trial: The Future of Law in a Digital Age – Part 2

In the previous article, AI on Trial: The Future of Law in a Digital Age – Part 1, we explored the core concept of Artificial Intelligence and its transformative impact on the legal industry. We examined how AI’s persuasive and pervasive nature is reshaping legal practices, and highlighted its numerous benefits for both legal professionals and their clients. In AI on Trial: The Future of Law in a Digital Age – Part 2, we turn to the challenges and adverse effects of AI on the legal landscape, along with the important considerations and risks that must be addressed when integrating AI into litigation and evidence management processes.

More than just a cup of sugar

The persistent knock of Generative AI on our doorstep asked for more than just a cup of sugar. Soon, it was requesting milk, a couple of saucepans, and an innocent crash on the couch for the night. Before we knew it, GenAI had moved right on in. It seems like it happened in an instant and sparked a variety of reactions. Some rejected the idea outright, “I have enough friends, I don’t need any more.” Others sat on the fence, pondering, “this is a bit weird, but maybe we could make it work.” Then there were those who embraced it wholeheartedly, “you’re awesome, and we’re obviously going to be besties!”

The sceptics, resistant to change, immediately raised concerns about bias and the fear of the unknown. But AI didn’t concern itself with that. Before long, it was ingrained in our way of life, forcing us to adjust the way we work and live.

Since then, we've had the chance to explore this phenomenon more deeply and address some of the possibilities and challenges it presents. While I’m grateful to be at the forefront of such incredible change, I approach it with a clear understanding of both its capabilities and limitations, always keeping legal and ethical considerations at the heart of the conversation.

Consideration of ethical, legal, and practical implications

The use of AI in evidence management within the legal field presents several challenges that require careful attention. Addressing these issues means weighing their ethical, legal, and practical implications. While AI offers many benefits to the legal profession, it is just as important to recognise the potential negative impacts that may arise when integrating it into legal processes.

Bias

AI systems are only as effective as the data they are trained on. They learn patterns from the data provided during the training phase. If that data contains inherent biases, the AI will likely learn and perpetuate those biases.

Historical Bias: If the data used to train AI models reflects biased outcomes, the AI may replicate the same biases in its predictions.

Sampling Bias: If the training data is not representative of the entire population, the AI’s predictions or decisions may be skewed, leading to unfair outcomes for underrepresented groups.

When a model is shaped by biased or unrepresentative data, it also becomes more prone to producing hallucinations. A hallucination occurs when an LLM generates a response that seems credible but is false. This happens because LLMs are trained to predict the next word based on context, not on accuracy: their goal is to generate fluent, convincing responses, not to ensure factual correctness.
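To make this concrete, here is a toy sketch of the underlying mechanic, using a deliberately skewed, hypothetical corpus. A model that simply predicts the most frequent continuation will fluently reproduce whatever pattern dominates its training data, with no regard for the facts of any particular matter. Production LLMs are vastly more sophisticated, but the core objective of predicting plausible continuations, rather than verifying facts, is the same.

```python
# A toy next-word predictor. The "corpus" is a hypothetical,
# deliberately skewed sample used for illustration only.
from collections import Counter, defaultdict

corpus = (
    "the applicant was refused bail . "
    "the applicant was refused bail . "
    "the applicant was refused bail . "
    "the applicant was granted bail . "
).split()

# Count which word follows each word in the training text.
transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    return transitions[word].most_common(1)[0][0]

# The model confidently predicts "refused" after "was" (3 of 4 samples):
# fluent and plausible, but driven purely by the skew in the data.
print(predict_next("was"))  # -> refused
```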

Feedback loops

Once an AI system is deployed, it often continues to learn from real-world data. If biased decisions are made and fed back into the system’s learning process, those decisions can reinforce and amplify the existing bias.
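A minimal simulation, with purely illustrative numbers, shows how quickly this compounds once a system’s own outputs are appended to its training data:

```python
# A minimal feedback-loop simulation. The starting skew, decision rule,
# and batch size are illustrative assumptions, not real-world figures.
history = ["deny"] * 60 + ["approve"] * 40  # training data: 60% "deny"

for round_number in range(1, 6):
    deny_rate = history.count("deny") / len(history)
    # The "model" follows the majority pattern it has learned...
    decision = "deny" if deny_rate > 0.5 else "approve"
    # ...and its own decisions are fed straight back in as new data.
    history.extend([decision] * 100)
    print(f"round {round_number}: deny rate = {deny_rate:.2f}")

# The deny rate climbs from 0.60 towards 1.00: a modest initial skew
# is amplified every time the loop runs.
```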

Job displacement

Whilst automation (discussed in Part 1 of this series) can enhance efficiency, it may also lead to job losses. LLMs can significantly improve the efficiency and accuracy of routine legal tasks, freeing legal professionals to focus on higher-value activities. Jobs may dwindle in one area but grow significantly in another; as the saying goes, it is swings and roundabouts. Technological advancements force change, and those who can wield the power of this technology are in demand and on the rise. It is therefore becoming increasingly important to adapt, reskill, and keep pace with changing workplace environments.

Lack of transparency

Transparency matters regardless of whether it’s a machine-learning model or a graduate making the call. Many AI systems function as “black boxes”, meaning their decision-making processes are difficult to understand or explain. In legal contexts, where transparency and accountability are critical, this lack of clarity is problematic and raises concerns about fairness and justice. Working with experts allows for improved transparency and a better overall understanding of the process; the importance of engaging an expert is considered in our article of the same name. Great AI systems don’t just deliver answers or results; they explain the reasoning and logic behind them.
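As a simple illustration of that difference, an interpretable model lets its decision rules be printed and audited. The sketch below uses scikit-learn with entirely hypothetical features and data; it is an analogy for auditability, not a real legal triage tool.

```python
# An interpretable model's rules can be inspected directly.
# The features and labels below are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features per matter: [days_overdue, prior_breaches]
X = [[0, 0], [2, 0], [5, 0], [40, 1], [60, 2], [90, 3]]
y = [0, 0, 0, 1, 1, 1]  # 1 = flag the matter for review

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unlike a black box, the fitted rules can be printed, questioned,
# and challenged, by a lawyer rather than only a data scientist.
print(export_text(model, feature_names=["days_overdue", "prior_breaches"]))
```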

Legal accountability

When AI is involved in decision-making, questions arise about who is responsible if something goes wrong. If an AI system makes an error that leads to an unjust legal outcome, who should be held accountable? This issue complicates the legal framework surrounding liability: the fault could lie with the AI system itself, the developers who created it, or the users who implemented it, and when the AI’s decision-making is not fully traceable or explainable, assigning clear responsibility becomes harder still. As AI becomes integrated into critical legal decisions, establishing clear guidelines for liability is crucial to uphold justice and to give those affected by AI-driven outcomes a clear path for seeking redress. Regardless, it remains the responsibility of legal professionals to review AI-generated decisions to ensure their accuracy and correctness.

Hindering innovation and human touch

While AI can improve the efficiency of certain processes, it may also hinder creativity, empathy, and human interaction in legal practice. Aspects of law like negotiation and client counselling rely on human intuition, empathy, and ethical judgment. These are qualities that AI cannot replicate. Over time, excessive reliance on AI could erode these vital human elements of the profession.

This growing dependence on AI tools also poses another risk. Legal professionals may become overly reliant on automation, potentially weakening their critical thinking and analytical skills. This could diminish the role of human judgment in complex or nuanced cases, where AI may struggle to account for all the subtleties.

Ethical dilemmas

The use of AI in legal decision-making also raises significant ethical concerns in areas such as sentencing, parole decisions, and custody determinations. When AI systems are seen to influence human judgment, it prompts important questions about the ethics of relying on machines to make decisions that directly impact people’s lives.

Access to justice

There is a common assumption that AI tools are expensive to implement and maintain, and are therefore less accessible to the public, widening the economic gap between those who can afford such a luxury and those who cannot.

A deeper understanding of Generative AI reveals that certain smaller, high-quality models can be 30-50 times cheaper to run than the largest models, and are often well suited to tasks such as reporting. By making informed model choices, users can significantly reduce their costs for using LLMs.
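As a back-of-the-envelope illustration, with hypothetical per-token prices (real pricing varies by provider and changes frequently), the arithmetic looks like this:

```python
# Illustrative cost comparison. The prices are placeholder assumptions,
# not quotes from any provider; the point is the order of magnitude.
PRICE_PER_1K_TOKENS = {
    "large_model": 0.0300,   # USD per 1,000 tokens (assumed)
    "small_model": 0.00075,  # USD per 1,000 tokens (assumed)
}

documents = 10_000       # documents to run through the model
tokens_per_doc = 1_500   # rough average of prompt plus response

for model_name, price in PRICE_PER_1K_TOKENS.items():
    cost = documents * tokens_per_doc / 1_000 * price
    print(f"{model_name}: ${cost:,.2f}")

# large_model: $450.00 versus small_model: $11.25, a 40x difference,
# within the 30-50x range quoted above.
```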

Siera Data are currently working on live, active engagements with our clients using the latest in Gen AI technology. Crafting prompts to draw valuable, immediate insights from data has been a complete gamechanger, at a fraction of the cost of traditional eDiscovery!
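By way of illustration only, a first-pass relevance prompt might look something like the sketch below. It assumes an OpenAI-compatible API; the model name, prompt wording, and dispute details are placeholders rather than Siera Data’s actual workflow, and every output still requires human review.

```python
# A hedged sketch of prompt-driven first-pass review. The model name,
# prompt, and dispute details are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = """You are assisting with document review in a commercial dispute.
Classify the document below as RELEVANT, NOT RELEVANT, or UNCERTAIN
to the issue of delayed delivery under the supply contract,
and quote the passage that supports your classification.

Document:
{document}"""

def first_pass(document: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # a smaller, cheaper model; placeholder choice
        messages=[{"role": "user",
                   "content": REVIEW_PROMPT.format(document=document)}],
    )
    return response.choices[0].message.content

print(first_pass("Email: 'Shipment delayed again; now expecting March delivery.'"))
```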

Regulatory and compliance issues

The rapid advancement of AI may outpace current laws and regulations, creating gaps in legal and ethical oversight. Governments and legal institutions may struggle to establish suitable rules, resulting in insufficient regulation and confusion about the use of AI technologies in legal contexts. A recent example comes from New South Wales.

Chief Justice Andrew Bell has amended the NSW SC Gen 23 Practice Note on the use of Generative Artificial Intelligence in law, responding to concerns raised during a December briefing. The original Practice Note imposed a blanket ban on using open or closed-source programs to search material covered by non-publication or suppression orders and was set to take effect on 3 February 2025, coinciding with the start of the 2025 law term.

Expressing gratitude to members of the profession for their interest and input into the development of the Practice Note, the Chief Justice has since eased the ban, subject to specific conditions. Read more about these conditions here.

Security and privacy

AI systems often require access to vast amounts of sensitive data, including personal, financial, and medical information, for training purposes. Improper handling or data breaches can expose individuals to risks such as identity theft, privacy violations, and other security threats. Additionally, AI-driven legal tools may lack sufficient safeguards against cyberattacks.

For legal professionals, this presents a serious concern. Lawyers have ethical responsibilities to protect client confidentiality. They must be vigilant to avoid inadvertently waiving legal professional or work-product privileges.
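One practical safeguard, sketched below with deliberately simple patterns, is to scrub obvious identifiers from text before it leaves the firm’s environment. Real matters call for far more robust de-identification (names, for instance, require entity recognition); this illustrates the idea rather than providing a complete solution.

```python
# A minimal redaction pass run before text is sent to any external LLM.
# The patterns are illustrative only and catch just the obvious cases.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?61|0)[\d \-]{6,11}\d\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Please call the client on 0412 345 678 or email j.smith@example.com."))
# -> "Please call the client on [PHONE] or email [EMAIL]."
```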

It’s important to note that an LLM itself does not learn from the information provided in prompts: the model’s weights are not updated during a conversation, and it does not retain memories of past conversations or the contents of its context window between sessions. The platforms hosting these models, however, may log prompts, and depending on the provider and service tier, submitted data may be retained or used for future training. Questions therefore remain about what information a given system collects. Is personal information being captured? And how can anyone be certain?

Similar concerns have arisen in the past with the advent of new technologies. These include issues related to confidentiality, waiver, and the misuse of copyrighted material. For LLMs, the key takeaway is that their output must be carefully reviewed. It is not safe to assume that the generated text is always original.

Conclusion

It need not be all doom and gloom! While the potential adverse effects of AI in the legal landscape are clear, they underscore the critical need for careful planning, regulation, and oversight in its integration. By addressing these challenges, we can ensure that AI enhances fairness, transparency, and accountability in legal proceedings, while effectively minimising risks and protecting the integrity of the legal system.

In the final instalment, AI on Trial: The Future of Law in a Digital Age – Part 3, we explore efforts to tackle the challenges posed by rapid technological advancements. Featuring insights from some of the most respected tech experts, we dive into innovations on the horizon and what we hope the future holds.
