Think your law firm’s cybersecurity insurance has you covered? Think again.

New survey data reveals troubling gaps: only 45% of policies cover ransomware, despite 1 in 5 organizations reporting ransomware incidents.

And 45% of policies can be voided due to inadequate security controls, something many firms only discover after filing a claim.

Law firms need to read their cyber policies with the same scrutiny they’d apply to a client’s contract. Because when (not if) an incident happens, “we thought we were covered” won’t pay the bills.

Here is my Above the Law post.

It’s easy to get complacent with GenAI tools that not only answer all our questions but can also analyze our behavior and make binding decisions about who we are and what we can access. But as with many things associated with GenAI, we aren’t looking hard at the risks. The “What Ifs.”

OpenAI’s Age Prediction Model

We can see this with OpenAI’s announced use of age prediction modeling to determine whether a user is underage. It’s hard to argue against something that sounds this good. But there are some risks. Some proverbial slippery slopes.

Continue Reading Trust Us We’re Algorithms. The Slippery Slope of AI Classification

Proposed Rule 707: a solution in search of a problem. Neither plaintiffs nor in-house lawyers are excited.

And while rulemaking authorities debate this proposed new federal rule about AI-generated evidence, we’re ignoring real crises: deepfake proliferation, the cost of trials and litigation, and the time required for dispute resolution.

Here’s my post for Above the Law. 

Here is my recent Above the Law Post: Cleveland State University College of Law and AltaClaro just launched a course entitled “Fundamentals of Prompt Engineering for Lawyers” and 130+ students immediately signed up. It’s a great idea. 

But right now it’s just extracurricular.

If GenAI competency is truly essential (and ethical rules suggest it is), should courses like these be part of a mandatory curriculum? Is a “nice to have” approach enough?

How can small firms stay on top of the contract change flood from software providers?

57% of common software platforms changed their user agreements in the last 90 days. 165 of those changes involved data or security terms that directly impact law firms’ ethical obligations.

When we get notice of changes in terms with our apps, most of us click “accept” without reading. But for lawyers, those changes to things like Slack, Zoom, and other tools we use daily could affect confidentiality duties, data rights, and client relationships.

Creative Alliance’s “Take Care” product is an AI tool that purports to automatically detect and analyze contract modifications, backed by actual lawyer review.

The ethical opinions are clear: we need to monitor our software providers and their terms. Take Care is designed to help.

Here’s my post for Law Technology Today.

A new Thomson Reuters report reveals what may be an uncomfortable reality that many don’t want to hear: quite simply, AI will require fewer lawyers and less billable time to do tasks. But 90% of legal dollars still flow through hourly billing, the same structure we’ve had since the 1950s. Think about the math: if AI cuts the time to draft a brief from 25 hours to 10, a lawyer billing $300/hour today would need a rate of $750/hour to make up the difference. I’m not sure clients will pay that premium just to subsidize law firm profits. It may be necessary to fundamentally rethink the billable hour business model.
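The rate math above generalizes to any engagement. A minimal sketch (the function name and figures are illustrative, not from the report):

```python
def breakeven_rate(current_rate: float, old_hours: float, new_hours: float) -> float:
    """Hourly rate needed on the shorter engagement to match the original fee."""
    return current_rate * old_hours / new_hours

# A brief that once billed 25 hours at $300/hour ($7,500 total)
# now takes 10 hours, so matching that fee requires $750/hour.
print(breakeven_rate(300, 25, 10))  # 750.0
```

In other words, the rate has to rise by the same factor the hours fall, which is exactly the premium clients may balk at paying.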

Here’s my post for Above the Law.

Despite what we hear from many vendors and pundits, the commoditization of GenAI may be inevitable. Here are my thoughts on what that could mean for legal tech vendors and for the lawyers and legal professionals relying on GenAI tools, and what we can do to prepare. Here is my Above the Law post on this subject.

Ads are “like a last resort for us for a business model…ads plus AI is sort of uniquely unsettling.” — Sam Altman, May 2024, as quoted on Hacker News.

“To start, we plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.” — OpenAI, January 16, 2026, also via Hacker News.

And so it begins.

Continue Reading The GenAI Siren Song, the Danger of Enshittification and Tying Ourselves to a Mast

After watching lawyers get sanctioned almost daily for GenAI hallucinations and inaccuracies while many others claim they’ve never used it, maybe it’s time for mandatory GenAI CLE. Three states already require tech training, and 39 states have adopted Comment 8 to the model competency rule. So there is precedent.

GenAI is too impactful to leave competency to chance. It’s not as crazy as it sounds. Here’s my Above the Law post making the case.