If a recent Thomson Reuters Report is any indication, lawyers and law firms plan to approach generative AI like they do most technology. Slowly and with skepticism. The Report, entitled ChatGPT and Generative AI Within Law Firms, came out on April 17, 2023.
Thomson Reuters surveyed lawyers at mid-size firms (30–179 lawyers) and large firms (180 or more lawyers). The lawyers were based in the U.S., Canada, and the U.K.
There were some 443 respondents: 62% from mid-size firms and 38% from large law firms. The majority of those answering the survey were from the U.S.
Granted, it’s still early for a profession that is notoriously slow to adopt technology or innovate much of anything. But it’s interesting, and even a bit humorous, how predictable the responses were. As with the cloud and, before that, email, lawyers look for ways not to use generative AI instead of thinking about the opportunities it holds.
For example, almost 50% of the lawyers surveyed are not sure whether generative AI should be used at all. Some 60% say they have no plans to use it. Only 5% say they are using it or planning to use it. (To me, “planning to use” is a euphemism for “no plans to use.”) And 15% of the firms have warned their lawyers against using generative AI, while five firms have banned it outright.
In keeping with lawyer arrogance, some 72% of the lawyers surveyed said, oh yes, we can certainly use generative AI for that low-brow nonlegal work. It just can’t be used for us lawyers’ high-level, special snowflake work.
And this is in keeping with their reasons for not using it. Lawyers are really concerned about privacy and confidentiality. They are really, really worried about the accuracy of generative AI tools.
And, of course, they are highly apprehensive about ethics. Such reasoning says a lot about lawyers’ mentality. We always look for ways to say no; we almost always let the perfect be the enemy of the good. And we always use ethics as a crutch to avoid doing something we just don’t want to do.
But seriously. If you don’t want to breach client confidence by using generative AI, then don’t include anything confidential in your prompts. Or use generative AI platforms (like Casetext’s CoCounsel) that ensure privacy and confidentiality. And as Sharon Nelson and John Simek pointed out in their excellent recent article, ChatGPT: How AI is Shaping the Future of Law Practice, if you are concerned about accuracy, then verify what the tool is telling you before relying on it. Duh.
And the ethical concern is humorous. A number of the lawyers surveyed said it’s somehow not ethical for AI to do tasks human lawyers have done. Some seem to think that the ethical rules require something that can’t be programmed by a human, whatever that means.
I’m not sure what ethical rules these lawyers are reading. There’s the competency rule. Competency includes the notion that lawyers should keep abreast of the risks and benefits of technology. I don’t think that one says you can’t use AI.
There are also confidentiality rules. But as with any tool (pen and paper, letters, etc.), the obligations under these rules fall on the lawyer. The idea that we have to protect client confidences doesn’t outright preclude using technology tools. The rules just require lawyers to make sure the confidences are protected.
And, oh yes, there is the rule that requires us to bill reasonably. Hmm… that would seem to suggest that if there is a tool that can help us do things more efficiently so that our bill is lower, perhaps we should use it.
I don’t see any rules that say we can’t use tools that are somehow algorithmically based, as some suggest. And if there were, I guess we would have to quit using computerized legal research, data analytics, e-discovery tools, and, of course, Google. All use algorithms.
And speaking of our duty to keep up with technology, we need to understand generative AI and how we can use it to be better and more efficient lawyers. The guest on last week’s Geek In Review podcast was Josh Kubicki, who authors the Brainyacts Newsletter, which is all about generative AI. Kubicki says using generative AI is different from using a tool like Google, where you type in keywords to elicit a response. Generative AI is a conversational tool.
Kubicki says it is like walking down the hall and talking to a colleague about a problem you are having. Or talking over a tough issue with a friend over dinner. The more articulate you are and the more information you provide, the better the response.
Thinking about generative AI this way helps address the twin concerns of privacy and accuracy. You wouldn’t reveal client confidences to a friend outside your firm whose advice you were seeking. And you would always verify the information you got from talking over an issue before relying on it in, say, a brief.
The good news from the Thomson Reuters Report is that it’s still early in the adoption curve for lawyers. If you had done a survey when email first became a thing or when cloud computing was in its infancy, you would probably have gotten the same kinds of responses. The end result, though, was the wholesale adoption of both, even in legal.
The arc of change in legal bends. It just bends slowly.