There’s lots of talk about AI and machine learning and how those tools will or will not impact the practice of law.
One school—perhaps buoyed by all the talk and little perceived impact—says it’s all hoopla. That AI won’t affect how lawyers do their job one iota. The other group—the sky is falling group—focuses on the possibility that robots will soon replace lawyers. They believe that machines will ultimately rule the human race. Neither extreme is entirely accurate.
I recently had a chance to hear Richard Susskind speak on AI in law and, as always, found his comments perceptive and spot on. Susskind spoke as part of a series of lectures entitled Legal Tech Essentials 2022. This year, the series was a joint effort between Bucerius Law School’s Center for Legal Technology and Data Science and Singapore Management University’s Centre for Computational Law at the Yong Pung How School of Law.
Susskind traced the evolution from early AI programs that could do little more than walk through simple decision trees to today’s programs, which can do far more sophisticated work. This evolution, he says, was enabled by the web and by the advent of ever-cheaper brute-force computing. Perhaps most significantly, though, it was enabled by the transition from AI programs that had to be pre-programmed to those that can actually learn.
These developments gave AI an increased ability to make predictions based on large amounts of data. These AI predictions are often just as accurate as, if not more accurate than, those of humans. And machines can generate them in a fraction of the time humans would need, assuming humans could generate them at all. For lawyers, access to these more accurate and certainly more economical predictions is extremely useful (and valuable) when it comes to legal research and litigation analytics, among other things.
But Susskind cautions that these AI predictions are not based on “thinking” like humans. And it’s wrong to think of what they do in this way. AI predictions are instead based merely on vast amounts of data. While that’s an oversimplification, the fact is machines don’t and probably never will copy our way of thinking. Instead, they efficiently and effectively accomplish tasks in different ways. Susskind’s best example of this difference: a self-driving car doesn’t have a robot driver sitting behind the wheel. Instead, the car collects data from various sources and uses this data to make predictions and decisions.
The real question AI poses for the legal profession, says Susskind, is to what extent machines can be used to reduce uncertainty posed by problems. The fundamental question, says Susskind, is thus what problems lawyers are currently trying to solve that machines can solve better and quicker. The lawyer’s job in the future will be to focus on what clients really want: outcomes. Machines can’t provide outcomes, only reduce the uncertainty surrounding the potential outcomes, according to Susskind.
What this could mean for legal is significant, though. By understanding how AI programs conceptually arrive at predictions, lawyers can allocate work between themselves and those programs more effectively. Proper allocation means a lawyer needs to understand what questions AI can answer and how robust those answers are; from that understanding, lawyers can then deliver the best possible outcome for their clients. And, fed by the lawyers’ decisions, the AI will provide better and faster predictions and solutions in the future.
A good example of how this division of labor between humans and machines can work is a recently developed Thomson Reuters legal research tool. (Disclaimer: I haven’t used the product and can’t comment on how well it works, only on the process.)
Thomson Reuters recently introduced Westlaw Precision, an improved version of Westlaw. According to the Thomson Reuters press release, the tool is designed to improve research speed and quality by enabling lawyers to target precisely what they are looking for: legal issues, issue outcomes, fact patterns, motion types, motion outcomes, causes of action, party types, etc. In addition to precise searching, Thomson Reuters says the new tool offers new capabilities like expanded KeyCite functionality and optimized workflow tools.
The challenge for tools like Westlaw Precision and other AI legal research tools has been that different courts, judges, and even lawyers use different terms and words to describe the same things and concepts. Without the ability to understand the context surrounding the words, it’s difficult for the programs to provide comprehensive results (or predictions) with the necessary certainty. Simply put, the AI programs can’t necessarily find everything a human might (if the human had all the time in the world). And so the machines, while perhaps reducing uncertainty, don’t deliver the optimum outcome.
But Precision tackled this problem by layering onto the AI results the kind of human input that the AI program can’t supply. Says Leann Blanchfield, head of Primary Law, Editorial, Thomson Reuters: “To enable more Precision in search, we added more than 250 new attorney editors to mark up and classify case law in more useful ways for our customers. For more than 100 years, we have classified legal issues with the West Key Number System. Now we are also classifying cases by issue outcome, fact pattern, motion type, motion outcome, cause of action, and party type. This enables customers to specify precisely what they want and retrieve it quickly.”
In other words, the Thomson Reuters lawyers worked in tandem with Precision to achieve a better outcome, one better than either humans or AI alone could reach. Using this human input, the AI learns more about what to look for and can provide even better results in the future, further reducing the uncertainty Susskind talks about. Lawyers using Precision should be able to provide better analysis and outcomes. But neither those outcomes nor the heightened functioning of the AI program could be achieved without human direction. Improved outcomes require that humans and machines work in tandem. And, for now, that’s how lawyers should view AI.
AI in law needs to be viewed through the prism of what the AI can do—and what it can’t. AI can find ways to reduce uncertainty. It can take on tasks lawyers have historically done but for which they are not as well suited as machines, such as preliminary research to gather the universe of relevant materials. But at some point, just like the Thomson Reuters lawyers, humans must step in to take the AI to the next level and get the best outcome.
Humans and AI working together: wins every time.