Steven Schwartz, the New York lawyer who improperly used ChatGPT, was all over the news last week. For those who don’t know, Schwartz says he used ChatGPT to prepare a brief filed with a court. The brief included some case citations that ChatGPT supplied. The problem was that the cases didn’t exist. They were hallucinations.


While many were quick to blame the tech, the real problem was not the tech. It was that Schwartz didn’t check the citations. He didn’t read the cases. I would guess he wouldn’t have read cases supplied by an online legal research service either. Nor would he have read cases found by manual legal research and cited by his associates in a memorandum.

And Schwartz should have been wary of the ChatGPT output. The hallucination problem is well known. And OpenAI’s GPT-4 Technical Report (14 March 2023) states, “In particular, our usage policies prohibit the use of our models and products…for offering legal or health advice.” (page 6).

But a bigger problem than the blame being heaped on ChatGPT instead of the lazy lawyer is the knee-jerk reaction by some judges.

For example, Texas Federal District Judge Brantley Starr has a new rule for lawyers in his courtroom: no filing drafted by artificial intelligence may be submitted unless the lawyer using AI certifies that humans checked the AI output.


Judge Starr’s Order

According to Judge Starr’s Order, “All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.”

Judge Starr went even further: He “will strike any filing from an attorney who fails to file a certificate on the docket attesting that the attorney has read the Court’s judge-specific requirements and understands that he or she will be held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court.”

Judge Vaden’s Order

On June 8, Judge Stephen Vaden of the U.S. Court of International Trade issued a similar Order. Lawyers are required to disclose the use of generative artificial intelligence tools to create legal documents. Anyone who uses a generative AI tool in a filing in his courtroom must also file a notice disclosing which program was used and designating “the specific portions of text that have been so drafted.”

Magistrate Judge Fuentes’ Order

And then there is Magistrate Judge Fuentes’ recent Order. Judge Fuentes, a federal district court Magistrate Judge out of Chicago, correctly identifies lawyers’ responsibilities: “Rule 11 of the Federal Rules of Civil Procedure continues to apply, and the Court will continue to construe all filings as a certification, by the person signing the filed document and after reasonable inquiry, of the matters set forth in the rule, including but not limited to those in Rule 11(b)(2).”

But rather than leaving it at that, he also requires any attorney using generative AI to disclose that it was used, how it was used, and the specific tool used. Judge Fuentes applies this requirement not only to any document that is filed but also to any research that was done.

Rule 11

While these orders may be well intended, they miss the point. The problem is not the ChatGPT tool; it’s that lawyers misuse it. For years, lawyers have misstated case holdings with virtual impunity. It’s no different here.

And there are ample legal remedies for misuse. Since 1983, Rule 11 has been a powerful tool for misrepresentations to the court. Rule 11 requires that by signing a pleading, the lawyer certifies to the court that the facts and law support the claims and arguments they are making.


What Schwartz did was an affront under Rule 11: he cited cases that didn’t exist and thereby advanced an argument not supported by the law. Pure and simple.

The Fallout

Not only were the orders by Judges Vaden and Starr unnecessary, but they also confuse the issue and the courts’ requirements. Judge Starr’s Order requiring certification if any portion of a pleading was in any way drafted by “generative AI” is confusing. What’s included in generative AI? If you use digital legal research tools, are you using generative AI? What about the changes Google has proposed to its search tools that harness the power of large language models? Would that bring Google searches under the rubric of these orders?

And requiring the lawyer to designate what portions of the text were drafted by AI is problematic and chilling. If ChatGPT drafts any part of a filing (even if it is not central to the legal argument and is only used as background), then the lawyer needs to tell Judges Starr and Vaden and the other parties.

And what about future generative models that get more sophisticated and less error-prone? LexisNexis and Thomson Reuters are developing models that use proprietary and verified data as a starting point and then refer the response to ChatGPT for synthesis and wording. Casetext has already developed a tool that does this. These modifications reduce the hallucinations and errors that ChatGPT generates by itself. But Judges Starr and Vaden would still require certification and human verification.

Judge Fuentes’ Order is problematic in one other way: requiring lawyers to disclose how they used AI tools, and which ones, for legal research starts to intrude on the lawyer-client privilege. It’s one thing to say I used ChatGPT to help me write a particular brief. It’s another to require the lawyer to disclose the nature of the research they have done.

Would I Use Generative AI in Judge Starr’s Courtroom?

But, you say, neither Judge Starr nor Judge Vaden banned generative AI; they just want to be sure that if you use it, you tell them and certify that you checked the results. What’s the harm?

Clearly, Judges Starr and Vaden are not favorably disposed toward generative AI. They don’t understand large language models very well. Read what Judge Starr said in his Order: “While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. [AI] systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.”

So what will these judges do when confronted with the required certification? Will they have their clerks double down on checking the citations? Will they give the arguments of a lawyer who makes the certification more scrutiny than those of one who doesn’t? I suspect that, consciously or unconsciously, they will doubt the credibility of the lawyer making the certification and give more credence to the lawyer who doesn’t use generative AI.

Given the attitude reflected in Judge Starr’s Order, if I’m a lawyer in his courtroom, I’m not using any generative AI tool, not even the models built on verified data, and not even if I verify the result of every generative AI inquiry. Under no circumstance will I or anyone on my team use any generative tool, because if I do, no matter what I use it for, I have to report that I have. And that could diminish the credibility of my arguments, and my own credibility, in Judge Starr’s and Judge Vaden’s courtrooms. I wouldn’t even use generative AI models that verify the results of legal research.

The end result of these orders: a tool that, today and even more so tomorrow, could make lawyers more efficient and save time won’t be used.

The real victims here, though, are the clients in cases in these courtrooms. They can’t get the benefit of the efficiencies and improvements these generative AI tools can offer. Their lawyers, and the lawyers of clients who find themselves before other judges, will cite these orders as reasons not to use generative AI at all. What clients will get in the end is higher legal bills and entrenched resistance toward tools that could help lower legal fees.

Using generative AI is not much different than pulling up information from a Google search and then rewording it. Or pulling from public posts by other lawyers that may be relevant to the subject of the case. Or, for that matter, using cases found by an associate doing manual legal research (if any still do). I’ve done all these things, as have most lawyers. But I’m not required to tell the judge where the research came from or that I double-checked it. It’s presumed as part of my responsibility as a lawyer to check the accuracy of what was found.

And if I misstate the holdings of the cases I cite, or cite cases that don’t exist, no matter how or where I found the information, I should be sanctioned. We don’t need a separate, confusing, and chilling rule for ChatGPT.