
AI Accuracy Rates in Justice Tech Solutions: A Major Threat

In a striking reminder of the pitfalls of artificial intelligence, Stanford Professor Jeff Hancock recently submitted a legal declaration in a Minnesota court case regarding deepfake technology; the declaration included citations to non-existent studies generated by AI.

Hancock's experience is far from unique. Lawyers at Levidow, Levidow & Oberman, along with a host of others, have faced fines for citing non-existent cases generated by ChatGPT to support their arguments in court.

These incidents serve as a wake-up call, highlighting the broader issue of AI hallucinations—where models produce incorrect or fabricated information.

If even experts like Hancock are susceptible to these inaccuracies, it underscores a structural problem within AI systems. The persuasive, seductive nature of AI outputs means that members of the general public and entrepreneurs without legal training are at even greater risk of accepting inaccuracies and hallucinations as fact.

AI Accuracy Rate and the Prevalence of Hallucinations in Justice Tech Solutions

The prevalence of hallucinations in legal AI tools is alarming.

A study from Stanford found that leading systems hallucinate at alarming rates: LexisNexis' Lexis+ AI and Thomson Reuters' Ask Practical Law AI produced incorrect information more than 17% of the time, while Thomson Reuters' Westlaw AI-Assisted Research hallucinated more than 34% of the time.

In fact, general-purpose large language models (LLMs) have been shown to hallucinate between 58% and 82% of the time when asked specific legal queries.

These statistics highlight a critical challenge: as legal professionals (and the general public) increasingly rely on AI for legal research and drafting, the risk of encountering erroneous outputs rises significantly.

AI Accuracy Rate Implications in Justice Tech Solutions

The implications of AI-generated inaccuracies in the legal field can be severe.

Erroneous citations and misleading information can lead to flawed legal judgments, potentially undermining trust in both the technology and the legal system itself.

As U.S. District Judge P. Kevin Castel pointed out, “Many harms flow from the submission of fake opinions,” including wasted time, potentially injured clients, reputational harm to courts, cynicism about the judicial system, and even more trouble down the road because “a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

Moreover, the use of biased algorithms can exacerbate existing inequalities. Research shows that AI tools used in sentencing can lead to discriminatory outcomes, disproportionately affecting Black defendants compared to their White counterparts.

In Virginia, for instance, judges using AI recommendations were found to impose harsher sentences on Black offenders despite similar risk scores assigned by the algorithms. This highlights how reliance on flawed AI systems can perpetuate systemic discrimination rather than alleviate it.

Poor Accuracy Rates Threaten the Mission of Justice Tech Solutions

If the primary goal of justice tech is to serve under-resourced actors, then the persistence of AI hallucinations poses a significant risk of exacerbating existing inequalities.

Many of these technologies are designed to democratize access to legal resources; however, if they generate inaccurate or fabricated information, they can mislead users who lack the legal expertise to discern fact from fiction, jeopardizing the cases of the under-resourced clients they are meant to serve.

This is particularly concerning given the persistent systemic biases that already exist within the legal system. When AI tools produce erroneous outputs, they not only undermine trust in these technologies but also deepen the very disparities they aim to alleviate.

For marginalized communities and individuals seeking justice, reliance on flawed AI systems can lead to misguided decisions, further entrenching inequalities rather than providing equitable access to legal support.

Overcoming Challenges in AI Accuracy Rate for Justice Tech Solutions

To gain traction on this challenge, my goal in this series is to map the landscape of current approaches to mitigating hallucinations in legal AI tools.

The goal is not merely to identify problems but to foster an environment of continuous improvement and collaboration among practitioners, developers, and researchers.

In this series, I will review major strategies that can work for non-technical and technical audiences, building a foundation of best practices for the legal industry.
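To make that preview concrete for technical readers, below is a minimal sketch, in Python, of one guardrail in the spirit of the strategies this series will cover: holding back any AI-suggested citation that cannot be matched against a trusted primary source. Everything in the sketch (the `Citation` class, the `screen_citations` helper, and the toy lookup table) is an illustrative assumption, not a description of any existing tool or database.

```python
# Minimal sketch (illustrative only): hold back AI-suggested citations that
# cannot be matched to a trusted primary source before a user relies on them.
# The Citation class, screen_citations function, and the toy lookup table are
# assumptions for this example, not a real product, API, or case database.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Citation:
    case_name: str
    reporter_cite: str  # a reporter citation string (toy values below)


def screen_citations(
    citations: list[Citation],
    verified_lookup: Callable[[str], Optional[dict]],
) -> tuple[list[Citation], list[Citation]]:
    """Split AI-suggested citations into verified and unverified lists.

    Anything the lookup cannot confirm is flagged for human review instead
    of being passed straight through to an end user.
    """
    verified, flagged = [], []
    for cite in citations:
        record = verified_lookup(cite.reporter_cite)
        if record is not None and record.get("case_name"):
            verified.append(cite)
        else:
            flagged.append(cite)
    return verified, flagged


if __name__ == "__main__":
    # Toy stand-in for a real citation database or court records API.
    known = {"123 U.S. 456": {"case_name": "Example v. Example"}}
    suggested = [
        Citation("Example v. Example", "123 U.S. 456"),
        Citation("Fabricated v. Nonexistent", "999 F.4th 123"),
    ]
    ok, needs_review = screen_citations(suggested, known.get)
    print("Verified:", [c.case_name for c in ok])
    print("Flagged for review:", [c.case_name for c in needs_review])
```

The design intuition is simple: a hallucinated citation that never reaches a filing is far less costly than one caught by a judge.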

As we explore effective strategies moving forward, I encourage you to share your thoughts and experiences regarding AI accuracy rates in justice tech solutions.

- What strategies have worked for you? Comment below or reach out to me directly.

- Stay tuned to discover how you can transform your justice tech startup's approach to AI and become a trusted strategic advisor.

- PS -- want to get more involved with LexLab? Fill out this form here.