Use of generative AI in the legal profession accelerating despite accuracy concerns

Legal professionals discuss a case in a courtroom
(Image credit: Getty Images)

The widespread use of generative AI in the legal profession is a trend that can't be reversed despite pushback from many in the field, experts have told ITPro.

Adam Ryan, VP of Product at Litera, said that the integration of generative AI tools within the profession will likely accelerate in the coming years in a similar fashion to their roll-out in other global industries.

The claim follows the publication of a report from US Chief Justice John Roberts examining the potential risks and impact of generative AI in the legal field and its long-term implications for workers.

“The industry will gain confidence in GenAI as it continues to have a positive impact on how we work,” he said. “Not using GenAI and LLM tools will put firms at a serious disadvantage as they will not be able to work nearly as quickly, accurately, or efficiently as firms that are leveraging these game-changing tools.”

While the technical efficiency of AI is obvious to people in Ryan’s position, the general public is less easily convinced. 

In the 13-page report, Roberts sympathized at least in part with the view that “human adjudications, for all their flaws, are fairer than whatever the machine spits out.”

The study warned that lingering concerns over issues such as ‘hallucinations’ - whereby generative AI tools present false information as factually correct - could raise serious questions about their effective use in the profession.

Roberts referred to cases where AI hallucinations caused lawyers to cite non-existent court cases in their briefs.

In December, a woman who used generative AI in lieu of a lawyer ended up submitting nine pieces of fabricated case law to support her defense.

AI is still far from the point where it can be called reliable in this sense, the report noted. However, Ryan noted that lawyers will ultimately maintain agency and control of any tools being used in daily activities.

“GenAI does not remove any of the responsibility of the lawyer to be diligent and follow their ethical obligations,” he told ITPro.

Dan Hauck, CPO at legal software company NetDocuments, echoed Ryan’s comments, adding that responsibility will still be placed on individual legal experts in the event that mistakes are made.

“While hallucinations can appear when asking a question with little context, hallucinations are far less likely to appear when the prompt is grounded in specific information, such as the lawyers' prior work product,” he said.

“Lawyers are and remain responsible for the ultimate work product submitted to court. A lawyer can no more deflect blame to an AI model than to an assistant or paralegal who provides inaccurate information."

Generative AI may become the new paralegal

Hauck’s comparison of AI to a paralegal potentially points toward an emerging trend within the legal space in which tools are used in a limited capacity. 

AI tools are already being used by practitioners in software development, for example, to support coding work.

Critically, responsibility for any mistakes or failures still lies with the individual human worker, which Hauck said will also be the case in the legal profession.

Generative AI shows no shortage of potential use cases in the legal field.

GPT-4, for example, passed the bar examination in March 2023 with a score of 297, outperforming the average human test taker by 7%.

Roberts describes the legal profession's reaction to this news as a mixture of both awe and angst, though he is careful to continue reinforcing the “great potential” of AI.

What’s clear is that there’s an element of law - the factual side, concerned with the collation of dates and case specifics - which is susceptible to AI automation.


The side of law not so susceptible is that which requires nuance, the side which, in Roberts’ words, looks to a “quivering voice” or “a moment’s hesitation” in the courtroom. He argues that most people still trust humans over machines to draw the right inferences in these situations.

“Foundational AI should be thought of as a tool to produce drafts of many types of legal documents, but human input and decision making must remain a meaningful part of the practice of law,” Hauck said.

“To ensure the best results and outcomes, it's important to have an experienced human 'in the loop' to evaluate the output of the models and implement AI throughout their firm thoughtfully and responsibly,” he added.

Roberts predicts the longevity of human judges and lawyers, but also looks to a future in which judicial work is increasingly aided by generative AI, provided robust regulation and safeguards are implemented.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.