Part Three of the AI crisis series from Melissa Rogozinski and me: The trust breakdown that’s making legal practice unsustainable. When senior partners spend evenings checking associates’ citations and local counsel can’t trust national counsel’s briefs, we’re not just dealing with verification costs; we’re watching decades-old professional relationships crumble. The AI bubble isn’t just economically problematic; it’s systemically destructive. Infrastructure issues, verification problems, and erosion of trust. Why that rumbling sound might be the volcano about to erupt.

Here is our post for Above the Law. #LegalTech #AI #LegalPractice #Pompeii

Every day we see another lawyer sanctioned for using AI-hallucinated case citations. But the problem may not be just lazy checking. It may have something to do with economic reality.

When AI verification costs exceed savings, what happens? If it takes 8 hours to verify what AI does in 2 hours, are we actually saving time? And how long can that continue? Here’s Part Two of my and Melissa Rogozinski’s series for Above the Law on why legal’s AI volcano may be about to blow. #Pompeii

While everyone debates AI hallucinations, we may be missing a bigger threat. The infrastructure powering AI may not be able to sustain everything vendors are promising.

Think about it: 26 major US utilities already have requests to supply an additional 711 gigawatts of new data center power. That’s nearly equal to the entire continental US summer peak demand. 

You can’t just snap your fingers and create more power. Building that additional capacity takes years of regulatory approval and massive investment.

What happens when legal tech tools slow to a crawl because there’s not enough infrastructure to support them? When your contract review platform crashes during a deal closing? When eDiscovery tools fail right before a deadline?

Law firms are betting big on AI without asking the hard question: where will the power come from? That’s a dangerous gamble.

Melissa Rogozinski and I explore this infrastructure volcano, and why legal needs contingency plans before it erupts, in our post for Above the Law. Part 1 of a 3-part series.

A new Washington Post analysis of 47,000 ChatGPT conversations reveals a troubling pattern. People are sharing deeply personal information, getting advice that tells them what they want to hear (not necessarily what’s accurate), and creating potential discovery goldmines for future litigation.

The study found users discussing emotions, sharing PII and medical information, and asking for drafts of all sorts of things. Whatever is asked, ChatGPT says “yes” 10x more often than “no.”

As lawyers, we have an ethical duty to understand technology risks. The question is no longer whether GenAI will impact our practices. It’s whether we’ll educate our clients about these dangers before they create their own smoking guns. Here’s my post for Above the Law.

The grade school game seemed simple enough. Grab the other team’s flag without getting tagged. But for a kid like me with not much athletic talent, the chances of being a factor other than getting quickly tagged out were pretty remote. Or so it seemed.

One of the questions young lawyers often ask me is how I got to where I am and developed a successful practice and career.

It All Started With Capture the Flag

I tell them it all started with Capture the Flag. For the uninitiated, Capture the Flag is a schoolyard game where the playing area is divided into two territories. Each team has a flag placed somewhere within its side of the field. Players try to grab the other team’s flag, but if they cross into the other side’s territory, they can be tagged and are out. To win, you have to get the flag.

The game seems a little illogical, since winning requires sacrificing enough players, who cross the line and get tagged, until so few defenders are left that you can successfully avoid being tagged and get the flag. It’s a brute-force kind of thing.

Let’s Play

Or so it seemed. I was in grade school and the teacher decided today’s game would be Capture the Flag. She explained the above rules to us, blew her whistle, and play commenced.

Now you need to understand the configuration of this particular area. We were playing on a baseball diamond, with the line between the two teams running from first to third base. The opposing team’s flag was placed roughly at home plate. The diamond was surrounded on two sides by a protective fence typical of most baseball diamonds, with an opening that allowed access to the field itself.

Here’s a diagram. I was on Team B.

Now, as I said, I wasn’t the world’s greatest athlete, so I was toward the back of our side, near our team’s flag. But looking at this, it occurred to me that charging directly into enemy territory was hopeless. Too many bodies between the line and the flag.

But then I saw it

But then I saw it: a path outside the fence would let me get behind the mass of bodies and, if undetected, grab the flag. The broken line shows my path.

And it worked like a charm: I slipped in, grabbed the flag, and ran back with it, much to everyone’s amazement. Of course, the other side complained that I had broken the rules, to which the teacher replied (to her credit, by the way), “I never said you had to stay on this side of the fence.”

So What’s the Point?

So you’re wondering what the point of the game is and how it relates to my career. The point is that the incident taught me a valuable lesson: you don’t always, if ever, have to follow conventional wisdom about how to solve problems. The best solutions are often those outside the conventional wisdom, those that go around a problem in a new way. Stay within the rules, yes. But use them to your advantage.

I have used that lesson when I needed to change my career path and develop a national practice from a smaller city. When I needed to change my practice specialty. When I made pitches to clients. When I thought of a new way of billing for a matter to solve the predictability problems of a client. When I came up with solutions to client problems and thorny cases. When we came up with new ways to tell the story and define what a case was about. Even when I made the shift from full time practice to writing and blogging.

Not all crazy ideas are great but all great ideas are crazy

Some might call this some kind of unique gift. But I don’t think so. I think it’s about staying open to new approaches. To being flexible and nimble. As Mike Posner put it, “Not all crazy ideas are great, but all great ideas are crazy.”

And thinking about the problem first. As Albert Einstein, also a guy with little athletic talent but who did alright for himself, advised: “If I had an hour to solve a problem, I would spend most of the time thinking about the problem and only a small part working on the solution.”

Another way to think about it: I once had a professor who was fond of saying, “the problem…is the problem.” What he meant was that defining the problem (how to somehow get to the flag without running the gauntlet of people in the way) is often the key to the solution.

But We Have ChatGPT Now

I think this is more true today than ever.

I was once asked at a presentation whether I thought a GenAI tool could ever write like Ernest Hemingway. I thought for a minute and said, maybe. But I’m not sure it will ever write like the next Ernest Hemingway. Hemingway’s style and voice were original and new at the time. A GenAI tool, had it existed then, would not have found a similar style, because that style didn’t yet exist.

So it is with innovative solutions to problems. To get there, you have to think about the problem and look for a solution that hasn’t been developed yet. That involves critical thinking. I have written before on the dangers of using GenAI to shortcut critical thinking. Indeed, an oft-cited 2025 study by Michael Gerlich found, as Stephen Klein of Curiouser.Ai put it, “AI usage is inversely correlated with critical thinking…The more we use it, the less we think.”

Can a GenAI tool help? Maybe. But can it be innovative on its own? I’m not sure, at least not yet.

What Would ChatGPT Do?

I gave this Capture the Flag diagram to ChatGPT, pointed out that the area was a baseball diamond surrounded by a fence, and asked it for the best strategy for Team B to capture the flag. It gave a long answer, starting with the observation, “Team B must approach the flag through a narrow, easily defended zone because the fence limits access.” It also said not to bunch near the fence, since that would create a trap.

Never did it consider what my fifth-grade brain figured out: go outside the fence and around the back side. Why? Because, like the conventional thinkers in the game, it didn’t consider that option and assumed players couldn’t go outside the fence.

I know some would say I didn’t give it enough of a prompt to figure that out, and maybe so. But many of us fall down on prompts because we don’t critically think about the problem first. We don’t define the problem first. We assume limitations that aren’t there.

That changes the entire game

Using ChatGPT to Help, Not Think

Of course, when I told ChatGPT that no one said we had to stay within the fence but everyone assumed we had to, it found the strategy: “That changes the entire game. If the fence is assumed to be a boundary but is not actually a rule, then the openings shown on your diagram create secret escape routes. Almost everyone will keep playing as if the fence is a wall. That gives Team B a hidden advantage.”

Want to do great things? Think differently. Go outside convention. Change the entire game.

It starts with looking at the problem and looking for innovative ways to capture whatever flag you’re after.

Two AmLaw 100 firms are doing something unusual: sacrificing billable hours to train associates in AI.

Ropes & Gray lets first-years spend up to 400 hours (20% of their requirement) on AI training. Latham & Watkins flew 400 associates to DC for a two-day AI Academy.

The revenue hit? Probably minimal. First-years aren’t profit centers anyway, and clients often won’t pay for junior associate time.

The learning boost? Likely significant.

As Thomas Suh from LegalMation told me, mastering AI requires giving lawyers the “grace to dabble”: time to experiment and fail without billing pressure.

Most firms are still wringing their hands about AI’s impact. Smart firms are training lawyers to leverage it.

Here’s my post for Above the Law.

New research from Disco and Ari Kaplan reveals a striking contradiction in legal’s relationship with AI and eDiscovery. While 70% of legal professionals recognize AI’s efficiency benefits, only 35% have actually incorporated it into routine processes.

Even more telling: 42% of law firms report zero external pressure to adopt AI solutions.

The reasons for resistance? The usual: billable-hour concerns, “perfect vs. good” thinking, and the classic “we’ve always done it this way” mentality.

Here’s why this matters: eDiscovery has historically been the proving ground for legal tech adoption. When court deadlines force efficiency over billable hours, innovation happens.

With 96% reporting increasing eDiscovery workloads and new data sources (including AI prompts and outputs), the pressure for change is building.

The irony? The same factors making lawyers resist change, like time constraints and risk aversion, may ultimately force the adoption they’re avoiding.

Here’s my post for Above the Law.

Small-firm lawyers keep telling me they can’t afford the AI tools big firms use. They’re not wrong; I’ve heard vendors literally laugh at affordability concerns. So when I came across Descrybe, a legal research platform with free core features (and paid plans at only $10-20/month), it got my attention and I dug deeper. Here’s my post for Law Technology Today.

New study from Thompson Hine: 95% of in-house counsel see little innovation from their law firms. But clients might be part of the problem. When they reward the status quo with average 7.4% rate increases year after year while claiming innovation is “crucial,” what message are they really sending?

The gap between what clients say they want and what they actually demand reveals a lot. Law firms aren’t going to change much unless and until their clients demand it. Here’s my post for Above the Law.

When we talk about GenAI for the legal profession, we frequently focus on the risks. But Comment 8 to Model Rule 1.1 requires us to understand the benefits as well. Sometimes we make AI a little too complicated.

Two fundamental rules: don’t put client confidences in prompts, and check the output for accuracy. Here is my post for Above the Law.