Despite what we hear from many vendors and pundits, the commoditization of GenAI may be inevitable. Here are my thoughts on what that could mean for legal tech vendors and for the lawyers and legal professionals relying on GenAI tools, and what we can do to prepare. Here is my Above the Law post on this subject.

Ads are “like a last resort for us for a business model…ads plus AI is sort of uniquely unsettling.” Sam Altman, May 2024, as quoted on Hacker News.

“To start, we plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.” OpenAI, January 16, 2026, also from Hacker News.

And so it begins.

——————————-

Come closer, famous Odysseus, Achaea’s pride and joy.

Stop your ship and listen to our song.

There is a famous story in Book 12 of Homer’s Greek classic, The Odyssey, in which the above quote appears. Book 12 tells the story of the Sirens, who lured sailors close enough that they abandoned caution and defenses, only to destroy them once they were trapped. The Sirens did this by promising insight, mastery, and understanding, persuading the sailors to abandon their own safeguards by offering something that sounded helpful, even enlightening.

Sounds a little like GenAI.

It’s so easy to get an answer. It’s so tempting to ask it anything. It’s part of everyday life. Like the Sirens, it promises insight and mastery. It says come closer, listen to my song. 

I use it every day. Probably more than I should.

The Sirens Risk

Why is the Siren Song relevant? Because, my friend, like the sailors in The Odyssey, we are being drawn in by all the hype and GenAI hoopla and ignoring some very real risks.

I just finished reading Cory Doctorow’s most recent book, Enshittification: Why Everything Suddenly Got Worse. Doctorow is a well-known and successful author and commentator on the tech scene.

Enshittification perfectly describes what tech giants do to customers as those giants become more powerful.

It’s no secret that I’m a fan of Doctorow. When I was co-chair of the ABA TechShow, I successfully lobbied for him to be a keynote speaker. I’m told (although I did not witness it) that several people in suits walked out of his keynote. I’m sorry they missed what he had to say. They might have learned something. Like how his word enshittification perfectly describes what tech giants do to customers as those giants become more powerful.

The Hypothesis

For those who don’t know, here is Doctorow’s hypothesis. He argues that dominant digital platforms predictably move through stages: first they treat users well to gain scale, then they degrade the user experience to increase value for their business customers, and finally they degrade both users’ and business customers’ experience to maximize profits for themselves and their shareholders. They do this by making it very hard for their users, first individual customers and later their business customers, to quit the platform.

It’s the classic ability to gouge. Why? Because they can.

Indeed, it’s hard to exit a platform where everyone else is, since by leaving you lose all your contacts and, if you are a business, your ability to reach customers. And there is no place else that supplies a service so ingrained in your daily life.

This allows the platforms to worsen their products and charge more for them. The phenomenon is aided by weak antitrust enforcement, a legal framework that protects the platforms’ lock-in through proprietary ecosystems, and DRM. Think Facebook, Google, and Amazon, to name a few. It’s the classic ability to gouge. Why? Because they can.

In each of these cases, it all began with the promise of a shiny new tool that was going to save us all time, make our lives easier and more meaningful, blah, blah, blah. And the siren song was so tempting. One click and you can buy anything, search for anything, connect with anyone and everyone.

Sound familiar? GenAI answers anything you ask. It’s an assistant always at your fingertips. One click and it has the answer to every question. It can even solve your personal problems. So we rely on it more and more. And everywhere you look there are vendors trumpeting its benefits. Encouraging you to be all in.

And there is little question that these tools are becoming more and more powerful very quickly. Use isn’t yet universal, but we aren’t far from it.

Even in legal. The talk is always about its marvels. How it will disrupt how we practice law. How only lawyers who use it will have jobs in the future. How it’s indispensable to our work.

Almost every vendor is using LLMs in some form or another in the products they hawk. And many of them depend in part on the large LLM providers, who are growing more powerful by the day.

The Enshittification of GenAI

Power tends to corrupt and absolute power corrupts absolutely. 

Lord Acton, 1887

And that corruption is what we could soon be facing.

If you follow Doctorow’s logic, and for that matter history, GenAI users, once hooked, will have a hard time abandoning a tool believed to be indispensable. One so ingrained in everything we do. And that is exactly what gives the providers of these tools such incredible power. It’s how enshittification begins.

The providers take advantage of their unrestrained power to stifle competition and make their products cost more and do less. And to corrupt outputs with advertising, influencers and manipulation.

Indeed, even as I was writing this, various media outlets reported that OpenAI announced it will test placing advertisements at the bottom of ChatGPT answers. Of course, OpenAI assures us that it will not sell users’ data. Right. Facebook doesn’t technically sell user data either. You see where that got us.

Think about what an enshittified GenAI could mean.

Think about what an enshittified GenAI could mean. What happens when the world—legal, medicine, finance and people—depends on tools in the hands of a few powerful corporations whose interest is profit for shareholders?

What happens when the large LLM providers, on whom so many legal tech providers depend, degrade their service and offer less for more?

What happens when you learn to hate the tools because of advertiser and special interest influence, but you can’t leave because there is no other place to go, or at least no place that’s any better?

As a lawyer, can you trust the answer an LLM gives you on an important issue, not only because it hallucinates but also because an advertiser or someone with a special interest has paid for an ad to be included in the answer you’re given? With today’s OpenAI announcement, that will soon no longer be a hypothetical.

A Problem for Legal

It’s a particular problem for legal. Does it give anyone pause that the companies owning the LLM universe might have reason to want the judicial and legislative branches to behave in ways that help them?

I’m not talking about direct manipulation but the subtle kind these companies are good at. Tools provided to the judiciary to help with workloads could subtly influence judges’ decisions. Or responses to lawyers’ prompts could somehow jimmy what’s argued, steering how an argument is framed so that the result favors the interest of an LLM provider, even if not directly. The same kind of influence could be used with legislators to preclude effective laws and regulations constraining the providers.

And lord help us if one of the providers was a direct party.

The truth is there are precious few life preservers to save us from the fate Doctorow describes. Who is going to make sure this kind of enshittification doesn’t happen when these tools become so ingrained in our daily and professional lives that GenAI providers are free to do whatever they want?

Legislatures have already proven to have little desire to regulate GenAI or its providers. Bar associations have little power to regulate entities that supply services to their members. Not to mention the fact that our key skill—critical thinking— is being systematically eroded by these tools. The chances we could think our way out may become practically nil.

The Sirens hit the sailors with something so irresistible that they abandoned caution and defenses. That’s where we may be headed: following a GenAI Pied Piper.

We need to ground ourselves to a mast of reality and critical thinking

Tying Ourselves to the Mast

But if you recall The Odyssey, the hero survives by binding himself to the mast while his crew blocks their ears. Perhaps we need a bit of that thinking now. We need to ground ourselves to a mast of reality and critical thinking.

LLMs are here to stay; there is no denying that. But we can’t ignore the risks of what they can become. And we can’t hope that the large providers, one of which is Google, a company with a track record of doing just what we fear, will protect us out of the goodness of their hearts.

Doctorow is right about what’s happened before. And I think it’s a forecast of what could happen next with GenAI. So, we need to tie ourselves to some firm masts and resist believing everything we hear. We need to keep ourselves from the FOMO that many are falling into. We need to plug our ears to some of what is being said and think critically about the benefits and the risks. Let’s not be so quick to make LLMs indispensable.

Before we go all in with GenAI, let’s think about what could happen and not abandon our caution and defenses. Consider what options we’ll have if GenAI does indeed become enshittified.

Because once you’re lured too close to the rocks, it may be too late to plug your ears.

After watching lawyers get sanctioned almost daily for GenAI hallucinations and inaccuracies while many others claim they’ve never used it, maybe it’s time for mandatory GenAI CLE. Three states already require tech training, and 39 states have adopted Comment 8 to the model competency rule. So there is precedent.

GenAI is too impactful to leave competency to chance. It’s not as crazy as it sounds. Here’s my Above the Law post making the case.

Here’s my post for Above the Law on the troubling disconnect in legal tech identified by Hwang Jae Hyuk: 70% of investment flows to vendors targeting the 40% of time lawyers spend on research and analysis, while only 30% goes toward solving the administrative burdens that actually eat up most of our days.

It’s a bit ironic since GenAI struggles with the complex legal work it’s being funded to handle, but actually excels at the boring back-office tasks that get overlooked by investors chasing shiny objects. I guess billing software just isn’t as sexy as AI that hallucinates case law.

Back from CES 2026. Here are my top ten impressions from this year’s show. Not surprisingly, the headline this year was AI everywhere, all the time. Lots of discussions about agentic AI, wearables, and robotics, all powered by AI. But precious little about AI challenges like the infrastructure gap, the erosion of critical thinking skills, and the threat to privacy and our legal processes.

Consumer electronics are predictive of legal tech risks AND benefits. Which is why I go every year.

Here’s my post for Above the Law.

AI wearables were everywhere at CES 2026. Smart glasses that whisper answers in your ear, AI enabled contact lenses, AI necklaces. They see what you see and hear what you hear. Impressive tech, but what happens when a witness testifies while wearing smart glasses feeding them answers? How do we handle discovery demands for everything someone’s wearables recorded? Who’s liable when the AI whispers wrong advice in a critical moment?

We’re still scrambling to deal with deepfakes. We better add wearables to our list of AI challenges.

Here’s my post on these issues for Above the Law.

At CES 2026, a McKinsey & Company panel outlined the “new” skills employers will value in the age of agentic AI. These include things like asking the right questions, showing judgment in gray areas, and demonstrating passion and resilience. All things GenAI can’t do, or can’t do very well.

But these are the skills that have always separated exceptional lawyers from everyone else. And they will be even more essential in the future.

Here’s my recent post for Above the Law.

I’ve been an agentic AI skeptic. But after this week at CES, trying ChatGPT Atlas to book my flights and hearing Nvidia CEO Jensen Huang explain why 2026 might be the year of agents, I’m at least convinced legal can’t ignore this technology or the blessings and curses it could bring.

In any event, the biggest risk isn’t agentic AI making mistakes, it’s lawyers who stop thinking critically about what it suggests.

Here’s my post for Above the Law.

At CES, I discovered an AI tool from Qlay that detects when people use AI to cheat during remote interviews. The creator estimates 40% of candidates are doing this. If that’s even half true, what does it mean for remote depositions? 

Has some sort of AI proctoring become necessary for remote depositions and testimony or would it create more problems than it solves?

Here’s my post for Above the Law.