How can we ensure the appropriate use of AI in courts while preserving transparency? UNESCO's new guidelines for AI in courts highlight three critical risks: 1) private companies controlling judicial tools focus on profit, not fairness; 2) biased AI outputs create subtle opportunities for manipulation; and 3) public and legislative pressure may push courts to adopt AI without proper safeguards. The rule of law depends on transparency and independence. How can we ensure that GenAI is not compromising both?

Here’s my Above the Law post.

Off to Las Vegas for my 7th year covering CES, where the AI hype machine often runs at full throttle. I’m going to try to separate substance from noise and get an idea of trends and issues that may impact legal.

I’ll be especially interested in the gaps between vendor promises and implementation reality. I’ll also be looking for honest discussions about AI’s limitations along with its capabilities. Will the practical challenges of scaling AI be addressed, or will it all be sizzle?

Understanding what’s coming in consumer electronics helps us in legal prepare (and avoid the hype).

Here’s my post for Above the Law.


I was so impressed by a recent interview with Antti Innanen on the Artificial Lawyer Law Punx podcast that I not only commented on it on LinkedIn, I decided to devote a whole article to his comments. After 30+ years in legal practice and covering legal tech, I’ve learned to spot industry BS when I hear it. The latest such platitude is: “AI will free us all up to do strategic work.” But the 12-minute podcast discussion by Innanen (who just so happens to have actually practiced law) cuts through the nonsense. The reality: Most legal work is neither tedious nor strategic; it’s just regular work. How much of this regular work will AI replace? Moreover, there’s not an infinite queue of high-level thinking waiting for displaced lawyers. We need to start asking harder questions about AI’s real impact instead of accepting feel-good platitudes.

Here’s my post for Above the Law.

Here’s a scenario that could become routine: You’re in trial, opposing counsel shows a video that’s damaging to your client, but your gut says something’s not quite right. What do you do? How should the court approach it?

There’s a good new University of Colorado study on deepfakes and video evidence. Even though we talk about it a lot, I’m not sure we are ready for what may be coming.

Here’s my post for Above the Law.

Imagine this: You’re on deadline, you procrastinated on research (don’t judge), and the ChatGPT you counted on to help suddenly dies. That was my Nov 18. Turns out the Cloudflare outage revealed some uncomfortable truths about tech dependency and cybersecurity gaps. Maybe lawyers and legal professionals need to pump the brakes on wholesale AI adoption. And always have a Plan B. Here is my post for Above the Law.


AI here, AI there. AI everywhere. But are we willing to cede good lawyer skills to a bot? A new Thomson Reuters white paper should scare us all. Research shows AI is actively eroding critical thinking skills. The future belongs to those who figure out how to retain and enhance their analytical abilities while everyone else lets the bots do their thinking. Is “thinking like a lawyer” becoming “thinking like a bot”? Here is my post for Above the Law.

Traditional law firms vs. tech-affiliated AI-first firms: The future may not be what we think it is. Blackstone recently invested in the legal tech compliance vendor Norm AI, which then immediately launched its own law firm offering “AI-native legal services.” We’re starting to see tech companies create captive law firms to deliver legal services at scale.

Will this disrupt traditional BigLaw? Are we prepared for what happens when “lawyer in the loop” becomes less the standard and more an anomaly? What could this mean for access to justice, firm economics, and the profession itself?

Here is my post for Above the Law.

Back from the AI Summit in NYC with some hard questions still unanswered. While 5,000+ attendees celebrated AI’s potential, critical discussions about infrastructure challenges, verification economics, and workforce displacement were largely missing. My ten takeaways from a conference that felt more like an AI love fest than a serious examination of where we’re headed. Legal professionals especially need to pay attention not only to the opportunities but also the challenges. Here is my post for Above the Law.

Business leaders from Unilever, EY, and NBC Universal shared a consistent message at the AI Summit: embrace the ‘I don’t know’ and think holistically about AI transformation.

The contrast with how most law firms approach AI couldn’t be more striking. While other industries talk about reimagining entire workflows, legal still treats AI as something to bolt onto existing processes.

They also discussed the concept of “cultural debt”: following the cow path just because it’s visible, even when it leads nowhere productive. Sound familiar?

After 30+ years in legal, I’m convinced the survivors of the AI revolution will be firms that think less like lawyers and more like business leaders who see the big picture. Here is my post for Above the Law.

A new Washington Post analysis of 47,000 ChatGPT conversations reveals a troubling pattern. People are sharing deeply personal information, getting advice that tells them what they want to hear (not necessarily what’s accurate), and creating potential discovery goldmines for future litigation.

The study found users discussing emotions, sharing PII and medical info, and asking for drafts of all sorts of stuff. Whatever is asked, ChatGPT says ‘yes’ 10x more often than ‘no.’

As lawyers, we have an ethical duty to understand technology risks. The question is no longer whether GenAI will impact our practices. It’s whether we’ll educate our clients about these dangers before they create their own smoking guns. Here’s my post for Above the Law.