On May 8, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 511, entitled Confidentiality Obligations of Lawyers Posting to Listservs. As Bob Ambrogi rightly pointed out in his recent post on the Opinion, it seems odd that the ABA would issue an opinion now about a technology that has been around since the late 90s. For Bob, it brought to mind Rip Van Winkle, who slept for 20 years only to wake up in an unrecognizable world.

I agree that the timing seemed strange. The substance of the Opinion, however, may reveal the Committee’s thinking about lawyers’ use of large language models (LLMs).

The Opinion deals with when a lawyer can post questions or comments on a listserv without the client’s “informed consent.” According to the Opinion (and clearly, under the Rules), a lawyer can only do so if there is not a reasonable likelihood that a reader could determine either the identity of the client or the matter involved. The Opinion also discusses what “informed consent” entails.

Applicability to LLMs

One only needs to read the Opinion’s initial summary to see where the reasoning could take the Committee with LLMs. Here is a redlined version of the Opinion’s summary, with the struck listserv language shown in brackets and the LLM language substituted:

“This opinion considers whether, to obtain assistance in a representation from other lawyers [on a listserv discussion group, or post a comment] a Large Language Model or Generative AI, a lawyer is impliedly authorized to disclose information relating to the representation of a client or information that could lead to the discovery of such information.”

The upshot, of course, is that if it is not reasonably likely that the client’s identity or the matter could be inferred, then the lawyer is free to use a listserv under the implied authorization exception of Rule 1.6(a). This authorization is assumed as part of the lawyer’s need “to carry out the representation.”

The Opinion recognizes that “Seeking advice from knowledgeable colleagues is an important, informal component of a lawyer’s ongoing professional development. Testing ideas about complex or vexing cases can be beneficial to a lawyer’s client.”

So, all is well and good for the use of LLMs, right? Not so fast. The language of the Opinion and its general thrust seem aimed at blunting the use of listservs without informed client consent. And that poses issues for the use of LLMs as well.

The Scope of the Confidentiality Duty

A lot of lawyers misunderstand the scope of the confidentiality duty and confuse it with the concept of attorney-client privilege. The confidentiality duty is in fact much broader. Under Rule 1.6, a lawyer is not to reveal any information relating to the representation, irrespective of its source. As set out in the Opinion:

“Comment 3 explains that Rule 1.6 protects ‘all information relating to the representation, whatever its source’ and is not limited to communications protected by attorney-client privilege. A lawyer may not reveal even publicly available information, such as transcripts of proceedings in which the lawyer represented a client. As noted in ABA Formal Opinion 04-433 (2004), ‘the protection afforded by Model Rule 1.6 is not forfeited even when the information is available from other sources or publicly filed, such as in a malpractice action against the offending lawyer.’”


“Among the information that is generally considered to be information relating to the representation is the identity of a lawyer’s clients. Lawyers are generally restricted from disclosing such information even if the information is anonymized, hypothetical, or in abstracted form, if it is reasonably likely that someone learning the information might then or later ascertain the client’s identity or the situation involved.”

[Emphasis added].

So, it’s not just privileged information. The breadth of the confidentiality duty, when interpreted literally, should give one pause when chatting with ChatGPT. And in an age when AI can be used to analyze so much data, the universe of what is “reasonably likely” may be broader still.

Informed Consent

The Rules do, of course, provide exceptions. If the client gives “informed consent,” it is permissible to use listservs and, by extension, LLMs. But what must be done to obtain such consent is a bit daunting. Again, to quote the Opinion:

“‘Informed consent’ is defined in Rule 1.0(e) to denote ‘the agreement by a person to a proposed course of conduct after the lawyer has communicated adequate information and explanation about the material risks of and reasonably available alternatives to the proposed course of conduct.’”

Comments to the Rule make it clear, says Opinion 511, that the lawyer has to confer with the client, explain the advantages and disadvantages, and get affirmative consent. The latter means that merely tucking a consent provision into an engagement letter, without requiring the client to sign off on it, is not enough.

The concept of informed consent becomes a little murky when it comes to LLMs, which are constantly evolving and not yet fully understood. What is “adequate” when even computer scientists aren’t sure how LLMs do some of the things they do? What does “material” mean in this context? And what about “reasonably available alternatives”? Is spending 200 hours of research time to do what an LLM could do in minutes a “reasonable” alternative?

Lots of questions raised by this Opinion.

Implied Authorization

The second exception is also found in Rule 1.6: the use of a listserv can be impliedly authorized in order to carry out the representation. The Opinion has this to say about implied authorization:

“A lawyer is impliedly authorized to consult with an unaffiliated attorney in a direct lawyer-to-lawyer consultation and to reveal information relating to the representation without client consent to further the representation when such information is anonymized or presented as a hypothetical, and the information is revealed under circumstances in which ‘the information will not be further disclosed or otherwise used against the consulting lawyer’s client.'”

But the authorization only goes so far as to protect disclosure of non-prejudicial information. Of course, most of the time, you want to use tools like listservs and LLMs to deal with problems that, by definition, could involve prejudicial information.

Another Danger

There is another thing lurking in the Opinion that may spell trouble for the use of LLMs. The Opinion says that disclosing information to another lawyer you know, even one not affiliated with your firm, is far different from using a listserv, where the information is disseminated to a broader group that may include people (even lawyers) you don’t know. According to the Opinion:

“Participation in most lawyer listserv discussion groups is significantly different from seeking out an individual lawyer or personally selected group of lawyers practicing in other firms for a consultation about a matter. Typical listserv discussion groups include participants whose identity and interests are unknown to lawyers posting to them and who therefore cannot be asked or expected to keep information relating to the representation in confidence… Additionally, there is usually no way for the posting lawyer to ensure that the client’s information will not be further disclosed by a listserv participant or otherwise used against the client. Because protections against wider dissemination are lacking, posting to a listserv creates greater risks than the lawyer-to-lawyer consultations envisioned by ABA Formal Ethics Opinion 98-411.”

Under this restrictive reasoning, using an LLM may be even more problematic: not only is the information going to dreaded “non-lawyers,” you often aren’t sure where it’s going at all.

The Impact

The conclusion of the Opinion says it all: lawyers may participate in listserv “discussions such as those related to legal news, recent decisions, or changes in the law, without a client’s consent, if the lawyer’s contributions will not disclose information relating to a client representation.” And even here, the Committee cautions that general comments about the law may permit other users to identify the client.

Anything else, and you need consent, a consent that is designed to be arduous to get. If this reasoning holds, the Committee is setting the stage to severely curtail the use of LLMs in the profession.

By taking such a restrictive view, the Committee may be setting itself up either to walk back some of its comments or to double down on them when it turns to LLMs. Only time will tell.

Don’t tell me about your effort. Show me your results.

Tim Fargo

I have recently attended several legal tech conferences and other lawyer meetings, all dominated by Generative Artificial Intelligence (GenAI) discussions and presentations.

Despite the rapid advancement of GenAI and AI technologies, the content at these events is pretty repetitive. Most of the presentations center on the dangers of lawyers using GenAI and on how GenAI systems work. The presenters drone on about the “black box.” They pontificate endlessly about how the systems will revolutionize the practice of law in short order, dragging the moribund legal profession kicking and screaming into the 21st century. In many of these conferences, IT professionals and vendors outnumber bewildered lawyers and legal professionals. In most, the event planners and speakers are not lawyers; if they are, they have advanced pretty far up the GenAI legal curve and speak more geek than legal.

Continue Reading From Theory to Practice: The Need for Real-World GenAI Demonstrations For Lawyers and Legal Professionals

It goes without saying that one of the most critical functions of a law firm is to train its associates adequately. But time constraints and a lack of consistency, as I have previously discussed, make good, sound training of associates problematic in many firms. However, large language models and GenAI, even open models, may offer potential solutions. Provided, of course, that the firm and its partners understand the risks and benefits of these models and how to use them.

Continue Reading Revolutionizing Law Firm Training with AI: The Power of Large Language Models

It seems like every day, there is a new vendor survey about what’s happening in the legal marketplace. Sometimes, these are designed to reveal a result that the vendor thinks will help sell its products. Sometimes, they offer beneficial and, in some cases, remarkably candid insights.

Thomson Reuters’ GenAI Study

Thomson Reuters released its 2024 Generative AI in Professional Services Survey Report earlier this week. The release coincided with a couple of new product announcements by Thomson Reuters in the GenAI space. TR has invested a lot of money in this area and obviously believes in its future in the legal ecosystem. (I know: the term “legal ecosystem” is a grating cliché.)

What’s interesting about the Survey Report is that, unlike surveys that confirm what the vendor wants, this one goes a little against the grain. It also seems to confirm what I am noticing and previously wrote: Lawyers just aren’t rushing—yet—to embrace GenAI.

As the TR Survey notes: “GenAI usage is not widespread among professional services…The most common emotion surrounding GenAI is one of caution and hesitance.” (To be fair, the TR Report does conclude that the industry may be on the cusp of changing its view of GenAI. According to the Report, there is a feeling of “optimism and excitement” in the legal community.)

Continue Reading Lawyers’ GenAI Hesitancy: Insights from the 2024 GenAI Professional Services Survey

Working with outside counsel is like getting thrown in a pit of rattlesnakes and hoping one won’t bite you.

Anonymous

Axiom, the 14,000-person alternative legal services provider launched in 2000, recently conducted and published, together with Wakefield Research, a study of U.S. in-house counsel. The 15-minute online survey was conducted in January and February of this year. Some 300 general counsel of small, mid-size, and large businesses responded.

Continue Reading Law Firms on Notice: Adapt to In-House Counsel’s Concerns in the Wake of Axiom’s 2023 Findings. Or Else

The less there is to justify a traditional custom, the harder it is to get rid of it.

Mark Twain

More and more law firms are opting to require lawyers, and certainly associates, to be in the office at least four days a week. At some point, this may convert to five days in the office. Most of the time, management declares that those lawyers (read: associates) who don’t comply could see their compensation reduced. (A pretty strong suggestion that five days is better than four for advancement.)

Continue Reading The Cost of Tradition: Unpacking Law Firms’ Return-to-Work Policies

A loophole in Microsoft’s Azure OpenAI Service terms of use could expose privileged information to third-party review. Lawyers need to undertake reasonably diligent vetting of vendors and their terms. Reliance on vendor assurances alone is not enough. But what is?

Last week, I ran across a good piece of reporting by Cassandre Coyer and Isha Marathe on law.com. The report highlighted an important issue.

Legal tech vendors have aggressively marketed Gen AI products over the last 18 months. To a vendor, they all assure potential customers that the inquiries and responses are protected, that they will not be used to train the system, and that third parties will not have access to confidential materials. In short, trust us. But can lawyers rely on these assurances, and to what extent? Do they need to do more?

Continue Reading Navigating Legal Tech: Can Lawyers Trust Gen AI Vendor Confidentiality Assurances?

The lack of lawyers in rural areas has attracted much attention lately. Rural pockets with few or no lawyers, the so-called legal deserts, are on the rise.

According to some surveys, 14% of the U.S. population lives in rural areas, but only 2% of lawyers do. A 2020 ABA study found that 40% of all counties in the US have fewer than one lawyer for every 1,000 residents. Fifty-two counties have no lawyers at all, and another 182 have only one or two.

Continue Reading Serving the Underserved: Innovative Solutions Needed to Solve the Rural America’s Lawyer Drought

There can be no higher law in journalism than to tell the truth and to shame the devil. Walter Lippmann

As most of you know, I frequently attend conferences, both legal tech conferences and those related to technology in general, like CES. I do this because I am interested in the field and because I like to think that what I write as a former practicing lawyer is valuable. The latter, of course, carries the responsibility to be candid and to “call ’em as I see ’em.” I have tried to do that since I started blogging some seven years ago.

Continue Reading Integrity Over Access. Why I Said No Thanks to a Conference’s Demand for Positive Coverage

Once upon a time, I had a good client who was fond of saying, “We best be careful, or we will find ourselves in a closet talking to ourselves too much.” Meaning, of course, that you get into trouble if you don’t get diverse viewpoints from people who perhaps see the problem and the world differently than you.

My client’s wisdom was recently brought home to me in connection with the Gen AI hoopla. The last two months have been a whirlwind of conferences for me. During that time, I attended three technology conferences. One was CES, which was generally directed toward consumer electronics and technology. The other two, LegalWeek and ABA TechShow, were both directed at legal technology in particular. 

Continue Reading Gen AI in Law: A Lawyer Reality Check