A recent survey by DeepL, an AI translation service, reveals a risk of continued hallucinations and inaccuracies with the use of AI. Spoiler alert: 96% of those surveyed are using AI, and 71% are using it without their organization's approval (aka shadow use), mainly to deliver work faster.

Why It Hit Home

The survey resonated with me for several reasons. I recently wrote an article about the pressure on lawyers and legal professionals to use AI without spending the time to check its results. My concern was, of course, the propensity for hallucinations and inaccuracies in AI outputs. Continue Reading Billable Hour Demands, Shadow Use of AI and Law Reality: It’s a Hot Mess

OpenAI CEO Sam Altman recently said something that should grab every lawyer’s attention. Appearing on the Theo Von podcast, Altman said, “So if you go talk to ChatGPT about your most sensitive stuff, and then there’s … a lawsuit or whatever, … we could be required to produce that.”

The Risk

Wait. What? That could mean anything and everything someone puts into ChatGPT (or any other publicly facing LLM) could be discoverable. If you’re not a litigator, that might not mean much, but for those of us who are, it’s a chilling danger for our clients and even for ourselves. Continue Reading Sam Altman’s Warning: Everything You Tell ChatGPT Could End Up Being Used Against You

A new Thomson Reuters report highlights a phenomenon unique to the legal industry and Big Law: clients aren’t talking to their lawyers about things that could disrupt the status quo, especially around AI and billing.

The report is full of interesting findings, but here’s one with broad and troubling implications: 57% of clients want their firms to use GenAI, yet 71% don’t even know whether their firms are actually doing so. And 89% of all respondents see a real use case for GenAI in their work. Nevertheless, the report notes that just 8% of in-house counsel are inserting GenAI provisions into RFPs or outside counsel guidelines. Continue Reading The AI Conversation Law Firms and Clients Aren’t Having And Why It Matters

As reported elsewhere, vLex, a legal research and database provider, recently announced significant enhancements to its AI tool, Vincent. The upgrades are designed to give lawyers and legal professionals practical solutions, close knowledge gaps, and offer valuable insights.

With the release, vLex is offering significant new workflows that will be particularly helpful for litigators. What I like about the vLex products is that they seem to offer practical options that help litigators solve real-world problems, and the new enhancements seem consistent with that approach. Continue Reading Practical AI for Litigators: Vincent is Closing Knowledge Gaps

As most of you know, I covered the world’s largest consumer products show, CES, in early January for Above the Law. I offered various stories on what I thought was important from a legal standpoint, which you can find here.

One thing I didn’t mention in my coverage was quantum computing. CES offered some three hours of presentations on the topic, but I didn’t write about it for two reasons: a) like most of you, I don’t really understand it, and b) I’m not sure what it can and can’t do for legal that’s different from what we have now. That second point raises the question of quantum computing’s real impact on lawyers and the legal profession.

Recent Developments … Continue Reading Quantum Computing: It Giveth. But May Taketh Away

We have all heard, over and over again, about lawyers who use GenAI and fail to check the citations the tools provide. The dangers of hallucinations and inaccuracies when using GenAI tools are well known, and a court will likely have little sympathy for the lawyer who fails to check sources.

But what if an expert witness uses GenAI to come up with nonexistent citations to support their declarations or testimony?

That very thing just happened in a case pending in Minnesota federal court, as reported by Luis Rijo in PPC Land. Ironically, the expert in question, Professor Jeff Hancock, Director of the Stanford Social Media Lab, offered a declaration in a case challenging the validity of a Minnesota statute regulating deepfake content in political campaigns. Hancock subsequently admitted using ChatGPT to help draft his declaration, which included two citations to nonexistent academic articles and incorrectly attributed the authors of a third. Continue Reading Did Your Expert Use ChatGPT? You Might Want to Ask


Lies. Scams. Disinformation. Misinformation. Voice cloning. Likeness cloning. Deepfakes. Manipulated photographs. Manipulated videos. They all pose tough questions for lawyers, judges, and juries. AI has exploded the possibilities of all these things to the point that it’s almost impossible to trust anything. That lack of trust has enormous implications for lawyers, judges, and the way we resolve disputes.