Another week. Another law firm caught citing cases that don’t exist. But this time it was Sullivan & Cromwell, one of the most influential firms in the world. 36 errors. Three pages to describe them. Fabricated passages from real cases. S&C said its AI policies weren’t followed. That it had training designed to prevent exactly this.

We can keep talking about education, policies, and fines. We’re doing all of that. But it doesn’t seem to be working. Moreover, it isn’t just a lazy lawyering problem anymore. When the public hears that lawyers are citing cases that don’t exist, their first instinct, as I discovered recently, is that the lawyer made them up to win.

Of course, that’s not what typically happened. But the fact that it’s the immediate reaction says something about where we are. As Judge Bernes Aldana put it at a recent ABA conference: “The legitimacy of our courts depends on the public’s trust and confidence.” That’s why AI hallucinations aren’t just a sloppy practice problem. They’re helping to erode the rule of law and trust in the law itself.

What can we actually do about it? Stronger penalties? Bar discipline? Better education and awareness? Maybe all three.

My post for Above the Law.

Just returned from ILTA’s Evolve 2026 conference in Denver and here’s my Above the Law post. Three days of content at a breathless pace. A record 500+ attendees that tested but didn’t break the small-conference vibe that makes Evolve worth it. An opening keynote from Zach Abramowitz that framed the AI moment as well as anything I’ve heard on a conference stage this year. And a vendor experience that reminded me that hallway proximity often beats cavernous exhibit halls. Once again, ILTA got most things right.

I will say, however, that the AI conversation in legal tech needs to grow up a bit. We should be past simple prompt-writing exercises and talk of the repetitive tasks AI can do. The critical questions, such as where GenAI is taking the profession and what that means for long-range planning, are still largely missing from most conference agendas.

Evolve’s cyber content, on the other hand, was exactly right: technical, practical, and even emotionally honest about what a data breach actually does to the people who have to manage and live through one.

I do hope ILTA doesn’t let success turn Evolve into a spring version of its sprawling late-summer flagship. The small-conference mission is still the magic.


Law firms are panic-buying AI to satisfy client demands, and it’s backfiring. Clients are demanding that their firms get AI but often don’t know what they really want. Firms don’t know what they need. It ends up being a hot mess of wasted money, unused tools, and unhappy clients. It’s a classic case of perishing for lack of knowledge. The solution starts with education. Not more AI slop.

Here’s my post for Above the Law. 

Here’s my Above the Law post on Zach Abramowitz’s keynote at ILTA’s Evolve conference. The keynote lived up to its title: Most Law Firms Are Doing AI Wrong. Here’s How to Do It Right.

His argument is that the failures in GenAI adoption, from ineffective training to analysis paralysis to hallucination panic, all trace back to the same root cause: firms are deploying AI without really understanding it. And you can’t use what you don’t understand.

Some key observations:

Hallucinations aren’t a glitch. They’re a feature of how GenAI works. Understanding that changes how you deploy it.

Stop asking what GenAI can do. Start asking what you ought to be doing because of it. That’s a fundamentally different question.

AI ROI isn’t efficiency. It’s the improved results, expanded capabilities, and competitive differentiation it can bring.

Here’s my Above the Law preview of ILTA’s Evolve conference, which starts today in Denver. This is the third year in a row I have attended. I keep coming back for the same reasons: it’s small, focused, and cuts through the noise that dominates many large legal tech conferences.

This year ILTA has added a third pillar to the conference’s program: Leadership in Legal Tech. With all that law firms are trying to navigate right now (GenAI disruption, escalating cybersecurity risks, and pressure to do more with less), that addition seems pretty timely.

Looking forward to two strong keynotes, over 20 sessions, and three focused topics. And the Klickers for which ILTA is famous.

Billing. The bane of a lawyer’s existence. The process is clunky, error-prone, and ripe for effective AI disruption. Elite’s new Validate tool could mean more effective billing-guideline compliance and better client communications. Most interestingly, it could flip the leverage that third-party bill reviewers hold, reducing write-offs.

My new post for Above the Law.

You’re up against a deadline. You run to ChatGPT. You tell yourself the privacy toggle will protect you and your clients’ confidential information. Guess what: it may not, at least not in ways consistent with the ethical rules.

Lawyers and legal professionals may have gotten a little too lax about putting confidential client information into public-facing AI tools, in part due to a false sense of security that comes from toggling off that training switch.

Here’s why the privacy settings may not meet the obligations under Model Rule 1.6, why privilege waiver is a threat, and some bedrock rules every lawyer should be thinking about before hitting send on any prompt.

As the old Hill Street Blues captain used to say: let’s be careful out there.

My post for Above the Law.

Despite what seems to be an accepted truism, AI hallucinations aren’t necessarily completely random. That’s the key insight from a new physics-based analysis by a group of scientists and engineers, and it may change how we should be using GenAI tools.

The key finding: GenAI systems have a deterministic mechanism that causes output to flip from reliable to fabricated at a calculable step. And that step arrives exactly when a lawyer’s need is greatest: on novel, unsettled legal questions where training data is sparse.

That’s good news and bad news. The good: if failure is somewhat predictable, you know to verify more when you are in ambiguous areas, and you can have more confidence on well-known information.

The bad: the stretch of accurate output that precedes the failure builds false confidence by the uninformed, making the fabrication harder to catch, not easier.

My post for Above the Law.

Managing by walking around used to be standard practice. But with remote work, Zoom, and billable-hour pressure, the concept lost some of its luster.

But with AI, we may need it more than ever. When we rely only on LLMs to make decisions and summarize work, we lose something critical: the senior lawyer who stops by, asks questions, catches problems, and offers ideas and solutions.

I had a practice group leader who walked the halls every afternoon. At the time, we all thought he was just checking to see who left early. Maybe. But in hindsight, he stopped, he asked what we were up to, and he listened. And he made all of us better lawyers.

Here is my post for Above the Law on why MBWA matters more than ever in the age of AI.


Deep fakes are coming to our courtrooms. They are going to change how we try cases.

Here’s what the rise of deep fakes may mean for judges, juries, and trial lawyers. Along with the impact of the so-called “liar’s dividend”: the risk that repeated exposure to AI-generated fakes causes people to disbelieve all digital evidence, real or not.

Here are three ways courts may respond. And what trial lawyers need to do to prepare. Sometimes less technology in the courtroom, not more, may be the better call.

My post for Above the Law.