
AI in the Legal Sector – an Overview for Information Professionals

Published online by Cambridge University Press:  17 November 2023


Abstract

If there are any two letters that represent the zeitgeist of our profession, and indeed the world at large, they are A and I. AI, of course, stands for artificial intelligence, a subject that we have covered quite extensively in LIM and will surely continue to cover. One such article appeared in our Volume 22 Number 2 (Summer 2022) issue and was penned by Jake Hearn, the assistant librarian at the Honourable Society of the Middle Temple and a member of the LIM Editorial Board. In this article Jordan Murphy, the Chair of the City Legal Information Group, reports on a presentation Jake gave to CLIG in April 2023, in which the views expressed in his original piece were updated and expanded.

Type: Main Features

Copyright © The Author(s), 2023. Published by British and Irish Association of Law Librarians

INTRODUCTION

Back in April of this year, Jake Hearn gave a very timely presentation to members of the City Legal Information Group (CLIG) entitled ‘AI in the legal sector’. The talk followed on from an article he had written for LIM on the same topic in 2022 (Summer issue) and brought it up to date in light of recent developments, most prominently the explosive arrival of ChatGPT, its impact, and the multitude of projections it has spawned in the legal and tech spaces.[1]

This article aims to pick up where Jake's previous article left off and relay additional information from his aforementioned CLIG talk. It is designed to provide a current update for legal information professionals, particularly those working in corporate law firms. There will be an overview of generative AI and ChatGPT, a snapshot of the current regulatory landscape and, finally, suggested considerations to apply when evaluating the use of AI in legal services.

AI

For a comprehensive definition of AI, readers should refer to Jake's 2022 LIM article, which traces the term from its coining in the 1950s, when it denoted a largely conceptual, theoretical framework, to its modern definition as outlined in the UK's National AI Strategy document. In the latter, AI is compared to James Watt's 1776 steam engine in that it is a general-purpose technology with many different potential applications; overarchingly, AI refers to machines that perform tasks normally demanding human intelligence. Jake divides AI into three main categories: machine learning, natural language processing and knowledge reasoning. Overall, it is a product of the availability of large datasets (more data is expected to be created in the next few years than was created in the past few decades) and the convergence of cloud computing and machine learning capabilities.

GENERATIVE AI

Jake highlights document analysis, contract intelligence, clinical negligence analysis and case outcome prediction as some of the existing use cases in the legal context.

The technology dominating the conversation at present, due largely to its scalability and wide-ranging potential both within and beyond the legal environment, is generative AI. The term describes algorithms (such as those underlying ChatGPT) that can be used to create new content, including audio, code, images, text, simulations and videos, and it will therefore have a profound impact on our approach to content creation.

Scientists generally accept that there are two broad categories of generative AI technology. The first, also referred to as ANI (artificial narrow intelligence) or ‘weak AI’, covers models that are generally built with an intended purpose in mind. Examples include speech recognition systems; they are trained on select, well-labelled datasets to perform specific tasks and operate within a predefined environment.

Distinctly, the second broad category, AGI (artificial general intelligence), or ‘strong AI’, covers models trained on broad, unlabelled data (i.e. the internet). The EU defines such a “foundation model” as an “AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.” This latter category poses the biggest perceived threat, as it can be used for a wide range of tasks with minimal human intervention and applied to unintended use cases.

CHATGPT

ChatGPT is a prime example of the latter. Released in November 2022 by OpenAI, it is the most prominent example of a natural language processing tool based on large language model (LLM) technology. In layman's terms, the tool generates responses to prompts in the form of text and images.
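To make the idea concrete, the following is a minimal sketch of how a developer might prompt ChatGPT programmatically. It uses the openai Python package as it stood in 2023 (the ChatCompletion endpoint and the gpt-3.5-turbo model name reflect that period and may since have changed); the API key placeholder and the prompt are purely illustrative.

```python
# A minimal sketch of prompting ChatGPT programmatically, using the
# openai Python package as it stood in 2023. The API key and the
# prompt are illustrative placeholders, not a recommended workflow.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # model name as offered in 2023
    messages=[
        {"role": "system", "content": "You are a cautious legal research assistant."},
        {"role": "user", "content": "Summarise the key stages of a civil claim in England and Wales."},
    ],
)

# The reply is plain text; as this article stresses, it must be
# verified against authoritative sources before any professional use.
print(response["choices"][0]["message"]["content"])
```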

In his plenary session at the 2023 BIALL conference,[2] Robin Chesterman, Global Head of Product at vLex, explained that LLMs are based on a neural network: an idealisation of how a brain works. Although the technology is sophisticated, Chesterman warned that ChatGPT is essentially a glorified predictive-text model. Based on probabilistic equations, the application has no sentient understanding and bases its responses on the most probable words and sequences of words that should appear next, according to what it has already encountered and been ‘trained’ on. Its architecture has two parts: the encoder network processes the input sequence while the decoder network produces the output sequence. Essentially it asks, given the words in the prompt, which words should probabilistically follow, or “what sounds like the right answer?” The fact that the end product is so ‘human-like’ is apparently a happy accident, an unanticipated by-product of the model's development. Chesterman warned that, although the technology marks a profound leap forward, ChatGPT can only teach us about the predictability of language, not about its meaning.
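Chesterman's ‘glorified predictive text’ point can be illustrated with a toy model that simply counts which word follows which in a small corpus and always emits the likeliest continuation. This is a deliberately crude sketch with an invented corpus; real LLMs use neural networks trained on vast datasets, not bigram counts, but the principle of choosing the most probable next word is the same.

```python
from collections import defaultdict

# A toy "predictive text" model: count which word follows which in a
# tiny corpus, then repeatedly emit the most probable continuation.
corpus = (
    "the court held that the contract was void "
    "the court held that the claim was statute barred "
    "the contract was signed by both parties"
).split()

# Count how often each word follows each other word.
followers = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in training."""
    candidates = followers.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

# Generate a short sequence from a one-word prompt.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # -> "the court held that the court held"
```

Note that the output sounds plausible while meaning nothing: the model has no understanding, only word-sequence statistics, which is exactly Chesterman's point.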

Known limitations of the tool include ‘hallucinations’: outputs that sound plausible but are factually incorrect. These can arise when the model misinterprets a prompt or lacks contextual understanding. Additionally, the dataset ChatGPT works from has a cutoff point of September 2021, meaning that it has little to no knowledge of anything that happened after that date.

In legal practice, firms are taking a cautious approach, and at this early stage most of them have a strict policy against the tool's use. For the moment, individuals can use it without paying a monetary fee (although this will likely change in the not-too-distant future), but in doing so they immediately forgo any rights to the data they input. These terms of use act as an immediate barrier for law firms, whose priorities lie with client data confidentiality and security. A business model does exist but will realistically be unaffordable for most. Microsoft has made a huge investment in the integration of Teams and ChatGPT; again, taking advantage of this involves a pricey premium.

Jake relayed the case of Judge Juan Manuel Padilla Garcia, the Colombian judge who admitted to using the tool in deciding that an autistic child's medical expenses should be covered by his insurance. The judge's admission sparked contention and an urgent call for digital literacy training in the profession, directly illustrating the important role information professionals have to play in the responsible and appropriate use and adoption of such technology.

The librarian's skillset of advocating properly conducted legal research, with a variety of authoritative sources and a comprehensive understanding of surrounding issues such as copyright, is required to mitigate risk and identify opportunities for law firms, and this is more relevant than ever, says Jake. He encourages us to step up to the plate and develop our skills and understanding of the technologies so as to stay at the forefront of conversations and initiatives within our firms. This is a pertinent reminder that we must position ourselves if we wish to take advantage of the potential opportunities here.

As far back as 2019, Sam Wiggins[3] (then Senior Manager of Library & Research Services, EMEA & Asia at Bryan Cave Leighton Paisner) noted how many AI systems and other legal tech tools were underpinned by the principles that librarians specialise in, such as Boolean logic, hierarchies, information architecture, analysing user interfaces and facilitating user experiences.
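Boolean logic in particular maps directly onto how search tools retrieve documents. The sketch below, using an invented three-document mini-corpus, shows the set operations behind AND, OR and NOT queries; it illustrates the principle rather than any particular platform.

```python
# A toy illustration of the Boolean retrieval logic that underpins many
# legal research platforms: documents are indexed by term, and queries
# combine the matching document sets with AND / OR / NOT.
documents = {
    1: "negligence claim against the hospital trust",
    2: "breach of contract and negligence claim",
    3: "contract dispute over delivery terms",
}

# Build an inverted index: term -> set of document ids containing it.
index = {}
for doc_id, text in documents.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def docs(term):
    """Return the set of documents containing the term."""
    return index.get(term, set())

# "negligence AND contract" -> intersection of the two sets.
print(docs("negligence") & docs("contract"))  # {2}
# "negligence OR contract" -> union.
print(docs("negligence") | docs("contract"))  # {1, 2, 3}
# "contract NOT negligence" -> set difference.
print(docs("contract") - docs("negligence"))  # {3}
```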

Returning to the story of the judge who used the predictive tool, it is interesting to note that his final decision was not considered incorrect. Despite the controversy surrounding his decisions to (a) use the tool and (b) not disclose this fact initially, he argued that the bot did a job similar to the one he would expect of his legal secretary: conducting initial research and outlining the main points from key sources. The judge would then conduct his own due diligence and fact-checking, consolidating the output with his expertise and knowledge of the law. If there was no expectation that he disclose the use of his legal secretary in this process, how was ChatGPT any different, he asked. Experts in the law will always be needed, but this early argument and application could be a taste of things to come.

In a recent webinar hosted by The Lawyer, ‘AI in Action – real-life examples of how law firms and individuals are using AI today’, Peter Lee, CEO of Simmons & Simmons’ Wavelength (a team of legal engineers, technologists and lawyers who solve business problems with data science), gave further examples of how generative AI might be used to empower the buyer of legal services in the future. He proposed that those requiring legal services may use the tool to refine the answer they are looking for before engaging lawyers, then shop around among smaller firms that can confirm their initial findings, rather than being at the mercy of the expensive big firms who would otherwise do it all for them.[4]

In reality, current use of generative AI in law firm operations is rare. Just 3% of respondents to a survey conducted by Thomson Reuters, the findings of which were published in April 2023, said it is currently being used at their firms, and about one-third of respondents are considering its use. Interestingly, six in ten respondents said their firms have no current plans for generative AI use in their operations.[5]

Some firms are nonetheless investing in the technology: Mishcon de Reya has notably been recruiting for a ‘prompt engineer’, a specialist who can leverage the tool and help the firm understand how to get the best results from it. Shoosmiths considered banning ChatGPT but has decided to allow staff to experiment with it while maintaining a strict policy against using it in the business of law.

REGULATION

A proposed regulation was produced by the European Commission in 2021, now referred to as the EU AI Act. It sets out important requirements to minimise harmful risks to individuals by differentiating between prohibited and high-risk AI systems, and defines several obligations covering the development, placing on the market and use of AI systems.

Voting was postponed following the meteoric rise of foundation models such as ChatGPT, so that the EU could ensure the legislation remained appropriate.

On 14 June 2023, the European Parliament approved its version of the draft EU Artificial Intelligence Act.[6] The proposed legislation, which will come into force in 2025 at the earliest, focuses on the promotion of trustworthy AI and defines a tiered risk system, recently expanded to cover systems intended to influence voters and the recommender systems of “very large online platforms”. Additionally, prohibited use cases now include systems involving real-time biometric identification in public spaces for law enforcement, as well as predictive policing systems.
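For readers who think in data structures, the tiered scheme can be pictured as a simple mapping from risk tier to treatment. The sketch below is a hand-drawn, non-exhaustive summary for illustration only: the prohibited and high-risk examples come from the paragraph above, while the ‘limited’ and ‘minimal’ tiers and their examples are drawn from the wider draft Act rather than this article, and none of it reproduces the legal text.

```python
# A simplified, illustrative picture of the EU AI Act's tiered risk
# system. Not the legal text: examples are a non-exhaustive summary.
risk_tiers = {
    "unacceptable": {
        "treatment": "prohibited outright",
        "examples": [
            "real-time biometric identification in public spaces for law enforcement",
            "predictive policing systems",
        ],
    },
    "high": {
        "treatment": "permitted, subject to strict obligations (testing, risk mitigation)",
        "examples": ["recommender systems of 'very large online platforms'"],
    },
    "limited": {
        "treatment": "transparency obligations, e.g. disclosing AI-generated content",
        "examples": ["chatbots"],
    },
    "minimal": {
        "treatment": "largely unregulated",
        "examples": ["spam filters"],
    },
}

for tier, detail in risk_tiers.items():
    print(f"{tier}: {detail['treatment']}")
```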

EU lawmakers want foundation model providers to comply with certain requirements such as testing and mitigating risks to health, safety, fundamental rights, the environment, democracy and the rule of law, with the involvement of independent experts.

The UK, which will of course not be bound by this prospective law, has yet to devise a generally applicable regulatory framework for AI. Its white paper, published in March 2023,[7] advocates a pro-innovation approach to AI regulation underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy:

  1. Safety, security and robustness

  2. Appropriate transparency

  3. Fairness

  4. Accountability and governance

  5. Contestability and redress

This white paper sets out principles to guide existing regulators (including the Information Commissioner's Office, the Competition and Markets Authority, the Health and Safety Executive, the Equality and Human Rights Commission and the Financial Conduct Authority) in developing a targeted approach to AI in their respective sectors and regulatory areas.

CONSIDERATIONS WHEN USING AI

In the 2022 UK Top 200 report produced by Legal IT Insider, which outlines the major IT systems used by the largest UK law firms, 57 firms stated that they had incorporated machine-learning-based technologies into various practice areas, while a further 119 said they were using document automation technology.

In conclusion, the growth in commercial law firms' adoption of these technologies, and the potential involvement of library and information services (LIS) professionals, has sparked questions around ethics and legality. We will finish with the following points, as outlined by Jake:

  1. Can we trust a machine to draw conclusions about a legal point that could have a huge impact on a large-scale deal or an individual?

  2. What is the right balance of human involvement where these technologies are concerned? Human checking will always be critical, no matter how far advanced the technologies become.

  3. Who has access to the sensitive information being uploaded to a system? The technologist? Other third-party staff? What about the jurisdictional implications around privacy and data sharing if, for example, a London-based firm uses a system whose servers are stored in another country or continent?

  4. How do we manage the ethical quandary of machine learning mirroring the human biases of those who develop it? We could even ask about the wider sociocultural and political biases imposed from a historical perspective too.

  5. What happens if mistakes are made, or the machine fails to highlight serious errors? Who is responsible further down the line: the lawyer using it, the machine itself, or the technologist who created it?

  6. Can lawyers continue to justify their costs if a portion of the work is being delegated to a machine?

References

Footnotes

1 Jake Hearn, ‘“A library is a growing organism”: redefining artificial intelligence and the role of the information professional in the corporate legal world’, Legal Information Management (2022) 22(2) 81-85.

2 Robin Chesterman, ‘Let's Chat about Chat GPT’ (2023), Plenary Session 1, BIALL Conference 2023.

3 Samuel Wiggins, ‘Reflections on current trends and predictions for commercial law libraries’, Legal Information Management (2019) 19(2) 94-99, discussed in Dominique Garingan and Alison Jane Pickard, ‘Artificial intelligence in legal practice: exploring theoretical frameworks for algorithmic literacy in the legal information profession’, Legal Information Management (2021) 21(2) 97-117.

4 ‘Webinar On Demand: AI in Action: real-life examples of how law firms and individuals are using AI today’ (The Lawyer, May 2023). Available at <www.thelawyer.com/ai-in-action-real-life-examples-of-how-law-firms-and-individuals-are-using-ai-today/> accessed 15 July 2023.

5 Thomson Reuters Institute, ‘ChatGPT and Generative AI within Law Firms’ (2023), page 4. Available at <https://on24static.akamaized.net/event/42/03/42/4/rt/1/documents/resourceList1686256379808/chatgptgenerativeaiinlawfirms1686256364964.pdf> accessed 15 July 2023.