8 Oct 2023

Artificial intelligence (AI) and other advanced digital technologies are rapidly changing legal practice. Many solicitors’ firms now have in-house innovation teams to identify and implement digital transformation opportunities; barristers regularly engage with clients and the courts through virtual consultations, hearings, and electronic bundles; and the Legal Services Board recently issued draft guidance for frontline regulators, including the BSB, on promoting technology and innovation where it may improve access to legal services.

As barristers experiment with these tools, it is crucial to capture the significant benefits they offer while acknowledging and protecting against the pitfalls that come hand in hand. Recent headlines indicate how this process might play out in practice. A New York court case drew international attention when two lawyers used ChatGPT to draft court submissions without fully understanding how it operates, with damaging results for themselves and their client[1]. A few months later, though, a Court of Appeal judge here in the UK openly used the same tool in drafting a judgment, calling it “jolly useful”[2]. The contrast between the two cases illustrates how AI and professional responsibility intersect in legal practice. What lessons can they offer barristers in England & Wales on how to navigate this changing terrain safely and effectively for clients?

What went wrong in New York?

Two lawyers, representing a client against an airline, used ChatGPT – a large language model AI – to identify relevant case law. One prompted the tool to draft a court submission, which they filed verbatim on behalf of their client. Unbeknownst to them, however, the AI-generated legal analysis was flawed and its citations fictional.

Compounding the AI error was a deeper governance problem: the lawyer handling the case and experimenting with ChatGPT (lawyer A) was not authorised to practise in the federal courts, so his colleague (lawyer B) conducted the court proceedings on his behalf. Despite bearing ultimate responsibility for the document’s contents, lawyer B reviewed it only for style and flow, not for its legal analysis, and did not enquire into the extent of lawyer A’s research.

When opposing counsel challenged the citations and the court requested the text of the cited opinions, lawyer B first requested a time extension, falsely claiming he was on vacation, to conceal his colleague’s involvement. The pair again turned to ChatGPT, duly submitting its output to the court without review. Once more, the AI output was entirely fabricated: it attributed nonsensical opinions to real judges, embellished with further false citations and docket numbers belonging to real cases irrelevant to the matter at hand.

In the proceedings that followed, the lawyers obfuscated and downplayed the role ChatGPT had played in developing the documents – first claiming it complemented their other research, before acknowledging it was their only source of research and that they had made no attempt to verify its outputs before submitting them. Lawyer B’s submissions also obscured – in bad faith, the court found – lawyer A’s role in the case and the fact that lawyer A had not fulfilled his responsibilities as an attorney to ensure the accuracy of his filings.

A “jolly useful” tool nonetheless?

Contrast this example with the experience of Lord Justice Birss, who recently used ChatGPT to help draft part of a judgment and publicly stated that generative AI tools like it had “real potential” in legal services.

In a speech at a Law Society conference, Lord Justice Birss highlighted the factors that made his use of the tool effective. He prompted ChatGPT to provide a summary of the relevant law – a narrow, well-defined aspect of the draft judgment, well within the scope of the tool’s capabilities. The task was also well within his own area of expertise, so he could evaluate the text it generated and identify potential issues. Indeed, he reviewed the AI output as he incorporated it into his draft judgment, which he “could recognise as being acceptable”. Most importantly, he was clear that he retained “full personal responsibility” for the draft.

What do these cases mean for the Bar?

The New York case attracted international media attention focused on the lawyers’ misuse of ChatGPT. It is tempting to attribute these shortcomings solely to the AI system, but that would be an oversimplification. As the federal judge noted in ordering sanctions:

Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings. [Respondents] abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.[3]

Though the headlines dwelt on the misuse of ChatGPT, the sanctions were in fact driven primarily by the professional conduct failings downstream of the AI error. AI, however promising a tool, is not a replacement for human responsibility and oversight. A lawyer is answerable for their research, arguments, and representations under their core duties to the court and to their client, and those duties apply just as fully when using AI. The case demonstrates that it is more important than ever to understand the capabilities and limitations of any new technology, to ensure that its contributions are genuine aids and not sources of misinformation.

Applying AI in Professional Practice

Beyond these two cases, AI presents real opportunities to improve access to justice and support barristers in day-to-day legal practice. Realising these benefits requires barristers to think critically about how to use these tools safely and effectively. As the technological landscape evolves, innovation and professional ethics go hand in hand. The following considerations are a useful starting point for integrating technology and AI into professional practice:

Training in Legaltech: Continuing education in legaltech and AI could help barristers plan for technology adoption, evaluate new technologies, and incorporate them into legal practice. It may be helpful to reflect strategically, for example as part of a CPD plan, on what legal technology skills may be needed to harness the benefits of these technologies while appropriately mitigating the risks.

Getting to Know New Technologies: Each AI tool performs differently, and its predictive power may vary when applied in new contexts. Taking the time to understand each tool’s strengths, weaknesses, and scope of application will increase the value of its outputs in any particular case.

For example, a large international law firm recently shared how it implemented a bespoke tool built on GPT, facilitated by a dedicated innovation team over a months-long trial period before being extended to all staff[4]. Depending on the tool and its anticipated uses, such an intensive implementation may not be necessary, but close attention to the nuances of the technology, and to how it will be used in practice, will mitigate risks and help ensure its outputs are as useful as possible.

Applying AI Outputs Critically: AI outputs are an aid to legal analysis, not a substitute for it. On each occasion, it is important to verify, review, interpret, and contextualise AI outputs to confirm their accuracy and adapt them to the needs of each client. While AI can expedite processes, barristers retain their core duties to the court and to act in the best interests of their clients.

The BSB’s Role: Legaltech in the Public Interest

At the BSB we are developing our understanding of technology and innovation to help realise the potential of AI and other digital technologies at the Bar through prudent and responsible implementation. In line with our 2022-25 Strategic Plan, and with forthcoming guidance from the Legal Services Board, we are developing resources to support responsible uses of technology at the Bar that further access to justice. We have recently begun a research project to better understand how technology is used at the Bar and the opportunities and risks it presents. And we are engaging with our peer legal services regulators through the LawtechUK Regulatory Response Unit to identify and respond to regulatory uncertainty around emerging legal technologies as it arises, so that technology developers and legal service providers can put these tools to use safely and with confidence.

We look forward to engaging with barristers throughout this process, and we are keen to hear your views at [email protected]. Please note that while we greatly appreciate your engagement and will take all views into account, we may not be able to respond to every message individually, depending on the number of responses.

Mata v Avianca

Jurisdiction: United States District Court (Southern District of New York)

Judge: District Judge Peter Kevin Castel

Usage: Two lawyers used ChatGPT to generate significant portions of their legal submissions, including faulty legal analysis and fabricated citations and opinions. They submitted inaccurate information to the court, which was later identified by opposing counsel.

Outcome: The lawyers faced sanctions for their actions. The court determined that they acted in bad faith and made false and misleading statements, as they concealed both the use of the tool and the involvement of one of the lawyers when initially confronted. The judge emphasised the importance of proper vetting and professional conduct when using AI tools in legal submissions, noting that using a reliable AI tool in legal analysis was not in itself improper.

Impact: This case highlights the potential risks associated with using AI tools in legal practice without a proper understanding of how the technology operates. It underscores the importance of lawyers’ professional conduct, independent of the technology context.

Court of Appeal

Jurisdiction: England & Wales (Court of Appeal)

Judge: Lord Justice Birss, a specialist in intellectual property law

Usage: Lord Justice Birss used ChatGPT to provide a summary of an area of law in which he was expert, a task he would otherwise have carried out manually. He emphasised that the AI tool helped him summarise information quickly, that he reviewed its output, and that he took full responsibility for the content of his judgment.

Impact: This example showcases the potential for generative AI tools to assist legal professionals in summarising legal information when used effectively. It demonstrates good practice in the use of AI in the legal profession: as an aid to legal research and drafting, rather than a replacement for professional expertise.

[1] https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html

[2] https://www.theguardian.com/technology/2023/sep/15/court-of-appeal-judge-praises-jolly-useful-chatgpt-after-asking-it-for-legal-summary

[3] https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/#entry-54

[4] https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey