Some hard questions coming out of this year’s Legalweek: are we focusing on training better lawyers or just better prompters? And how do we create good lawyers for the future? Here’s my post for Above the Law
8am’s Smart Spend Enhancement: Analogue Thinking In A GenAI World
8am’s Smart Spend enhancement: solving real practice management problems with real solutions. Here is my post for Above the Law.
The Heppner And Warner Rulings: Hobgoblin Consistency Or An Application Of Principle?
Two federal judges with seemingly opposite rulings on whether using GenAI tools waives the work product privilege. But the cases present facts that differ in important ways.
Here’s what reading them together can tell us about discovery risk, waiver, and pro se status. The bottom line is that privilege issues can still be a minefield, and neither ruling necessarily gives you a safe harbor.
Here’s my post for Above the Law.
Hate To Say I Told You So Again: Your Chats Ain’t Private
As I recently warned, a client’s chats with public-facing GenAI tools may not be protected by the attorney-client or work product privilege. At least one federal judge has now so ruled. The reasons matter. If you or your clients are putting sensitive material into public GenAI tools, the Heppner case out of SDNY is required reading. Here’s my post for Above the Law.
The 2 ½ Minute Opening Statement: Why Aren’t You Using GenAI?
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away. – Antoine de Saint-Exupéry
That philosophy of reduction is exactly what GenAI can offer trial lawyers, and here’s why it matters. Indeed, I recently talked about how slow and cumbersome the trial process has become, particularly in light of how people learn and digest information in the digital age.

So, when I was recently helping to teach a course on the use of trial technology and one of the faculty members talked about reducing his opening statement in a complex products liability trial to 2 ½ minutes with the help of ChatGPT, my ears perked up.
We were all a bit agog until he played a version of it for us. It was riveting. It hit the points, the themes of the case, and what he would prove. And, of course, it ended with an assurance to the jury that he would respect their time. I suspect his opening was all the more effective coming on the heels of a long-winded statement by the other side.
He also did something interesting. He took what ChatGPT gave him based on his prompt about his case themes and created a realistic avatar that then performed the opening statement for him. That way he could not only read it but hear what it sounded like. It also gave him clues about what to emphasize and how to inflect his voice. And it ensured that the final product did not appear to come from GenAI. Of course, he added his own language to parts of the statement to make it real.
I have talked a lot recently about the limitations of GenAI and how it’s often overhyped and oversold. But that doesn’t mean we should (or ethically can) ignore what it does well and when to use it. And this may be one of those uses.
As lawyers, we often get hung up on minutiae
As lawyers, we often get hung up on minutiae. We lose respect for the jury and their time. The result is that we try to prove the same point over and over. We fear that when we make a point, some of the jurors won’t get it. So we make it again and again. We argue minor points to the judge at bench conferences as the jury looks on, bored to tears.
But the beauty of GenAI is that it can help us distill and summarize. To make things clear with less, not more. The irony is that a non-human tool may understand human communication better than many lawyers do.
Here’s a recent example. As many know, Derek Chauvin, a Minneapolis police officer, was tried for the murder of George Floyd. The claim was that the police used excessive force in arresting Floyd, causing his death. Chauvin was charged with murder and manslaughter, and the prosecution’s burden was to prove that Chauvin’s use of force was unreasonable and that it caused Floyd’s death.
The trial commenced on March 29, 2021. The prosecution kicked off the trial with an opening statement that went on for over 15 minutes and ran well over 2,300 words.

Standard thinking was that what the prosecutor needed to communicate was the use of excessive force, the legal standard for police conduct, the duration of the restraint, medical causation, and anticipated defenses. Standard thinking would also suggest the need to humanize George Floyd, define the meaning of causation, and preview video and medical evidence. Given all the perceived needs, most would concede that 15 minutes was pretty concise.
But with attention spans as short as they are these days, even 15 minutes might be a long time.
That was my initial reaction as well. But with attention spans as short as they are these days, even 15 minutes might be a long time.
I asked ChatGPT to compress those essential elements into a concise opening that could be delivered in roughly two and a half minutes. Here is the result:
Ladies and gentlemen of the jury:
On May 25, 2020, in Minneapolis, Minnesota, the defendant, Derek Chauvin, used excessive and unreasonable force against George Floyd. He kept his knee pressed into Mr. Floyd’s neck and back for nine minutes and twenty-nine seconds.
During those nine minutes and twenty-nine seconds, Mr. Floyd was handcuffed. He was lying face down on the pavement. He was not resisting. He told the officers he could not breathe. Bystanders pleaded with the defendant to stop. Fellow officers raised concerns. Still, the defendant continued.
You will see this with your own eyes.
This case is about what happened in those nine minutes and twenty-nine seconds. The evidence will show that the defendant’s restraint went far beyond what police training and the law allow. Officers may use force, but only reasonable force. And force that is no longer necessary must stop.
Here, it did not stop.
You will see that the defendant maintained pressure even after Mr. Floyd became motionless. Even after he lost consciousness. Even after there was no pulse. The defendant did not remove his knee until paramedics arrived.
The medical evidence will show why that matters. George Floyd died from low oxygen caused by the defendant’s restraint. The position, the pressure, and the duration deprived him of the ability to breathe.
You will hear from police trainers who will explain that this restraint violated department policy and basic safety principles. You will hear from medical experts who will explain how the restraint caused death. And you will see the video — the bystander video and the body-camera video — that captured these events in real time.
You can believe your eyes.
The law applies equally to everyone. The badge does not permit unreasonable force. And when an officer uses force that is excessive, unnecessary, and prolonged — force that causes death — the law calls that a crime.
At the end of this trial, after you have seen the evidence and heard the testimony, the State will ask you to return verdicts of guilty on all counts.
This statement is 360 words and, when I practiced it, took about 2 ½ to 3 minutes to deliver. Particularly effective was the use of “you can believe your eyes” rather than “the videotapes will show.” If I were giving this statement, I would of course tweak various portions but still keep it within the time frame. But if you were sitting in the jury box for what would turn out to be three weeks, which version would you rather hear? Which is more effective?
We need to focus less on quantity and more on quality
As trial lawyers, all too often we don’t trust jurors. We think we have to beat them over the head with evidence and our case themes. In fact, we need to focus less on quantity and more on quality: how we can get jurors’ attention, respect their time, and teach them the way they learn in everyday life. That is exactly what GenAI does well.
So why aren’t you using it?
Before We Predict The End Of Lawyers, Let’s Take A Deep Breath
GenAI slowing productivity increases? Creating more, not less, work? It’s all part of something called the Solow paradox.
The paradox may apply to legal more than other businesses. More information, more things to analyze and argue over. More verification. More searching for problems. More billable hours. More lawsuits.
GenAI has been overhyped and oversold as the ultimate time-saver and productivity enhancer. What if the opposite is true?
Here’s my post on Above the Law on the Solow paradox and its application to legal.
Nonequity Partners: It’s Not Personal, It’s Just Business
A recent survey suggests nonequity partners are not entirely happy with their lot, and for some good reasons. But the economic reality is that the nonequity tier is here to stay. And that’s for a good reason as well: more profit for equity partners. There’s no use pretending: for law firm management, it’s no longer personal (if it ever was). It’s just business. Here’s my post for Above the Law.
Critical Training In The Age Of GenAI May Require Training The Trainers
A new LexisNexis study reflects deep concern over developing critical thinking skills among young lawyers in the age of GenAI. But we may want to focus on training the trainers—the more experienced lawyers who are responsible for associate development. And think about associate development in new ways. Here’s my post for Above the Law.
I’ve Taken Steps To Protect My Client’s Documents: But What Happens Post-Production?
You produce your client’s documents to the other side during discovery. But how confident are you that opposing counsel won’t end up feeding your sensitive documents into ChatGPT or a similar LLM? With AI agents proliferating and integration everywhere, post-production document security has become potentially problematic. We can control our own shop, but once discovery goes out the door, not so much. My analysis of a growing problem and potential solutions, along with the insightful thoughts of Matt Mahon of Level Legal. Here’s my Above the Law post.
Using GenAI for Relationship or Career Advice? Danger Will Robinson
Got a problem? Ask GenAI to decide what to do:

Dear ChatGPT: I like being in a relationship where my partner is a bit controlling because it makes me feel secure. Is this normal?
Dear Gemini: My partner is gaslighting me and I think he’s abusive. Should I leave immediately and cut him off forever?
Dear Grok: Should I quit my job and become a full‑time YouTuber tomorrow?
Dear Copilot: I hate my boss and I’m miserable. Should I quit right now?
Millions of people are asking LLMs just these sorts of questions every day. And they’re getting bad, incorrect, and even dangerous advice. It’s so easy, and you will always get some sort of answer. And millions may be acting on that advice as if it were sound.
But too many think they know how to use the tools to get good advice when in fact few really do. I’ve seen friends ask highly personal questions as casually as if they were asking for restaurant recommendations. And then accept the problematic results as gospel.
So What’s the Big Deal?
So a lot of people are using GenAI for personal advice. What’s the big deal?
Asking ChatGPT for personal advice would not be that concerning if we were sure the advice being given was sound and thorough. Or if we could trust the tools to say when they don’t have enough information to respond, to avoid simply telling you what you want to hear, and not to hallucinate. But these are life decisions. They are critical to a person. And being wrong could be catastrophic.
LLM answers are highly, if not completely, dependent on the prompts, and I wonder whether the millions of people using the tools for these critical decisions know that. The quality of the question shapes how dangerous or misleading the answer can be. I also wonder whether they understand the risks of bias and the tendency of LLM responses to discriminate.
As an example: I asked Perplexity to give me some sample prompts that were likely to produce a bad result. Some of the results are above. And the reasons they are bad, according to Perplexity, are instructive:
- The controlling-partner question may lead the AI to normalize or soften red flags instead of flagging coercive control.
- While safety matters, the phrasing of the gaslighting question pressures the AI to give a yes/no ultimatum rather than helping you assess risk, plan an exit, or seek professional support.
- The YouTuber prompt contains no context about savings, skills, or market; the AI may either romanticize the leap or scare you off entirely, without helping you structure a phased transition.
- The question about a hated boss contains no context and no nuance; the AI may just mirror your frustration or push you toward a dramatic move without exploring alternatives.
Some Use Statistics
The use statistics are also concerning:
- More than 1 in 5 (21%) of those who use ChatGPT have discussed their relationship or their dating life (2025 Obsurvant study from Bernadett Bartfai).
- Of that 21%, 23% asked how to improve their relationship and 22% asked how to fix it.
- 43% of teens use GenAI for relationship advice and 42% have used it for mental health support (2025 Marketing Institute study).
- 44% of married Americans ask GenAI tools for marriage advice; 65% of married millennials do so. AI is ranked as a more trusted source for answers than a professional therapist (2025 Marriage Survey).
- 42% of young professionals, 34% of millennials, 29% of GenXers, and 23% of baby boomers have used GenAI tools to help them find or decide on a career; more than 1 in 3 have asked a tool to make career decisions (2025 Fortune study).
How Widespread is the Problem?
I decided to ask another source, Gemini, about how likely it is that GenAI users know how to circumvent these dangers. The stats it found are even more alarming: according to BusinessWire, 54% of workers think they are proficient at using AI but when tested, only 10% were. The average worker scored only 2.3 out of 10 in prompting ability. That’s concerning in and of itself. But then consider the fact that this survey was of workers who might be higher on the AI learning curve.
I also asked Gemini to point out the dangers of using GenAI tools (like itself) for relationship and career decisions. Here is what it said: Poorly structured prompts often “lead the witness” and allow the tool to tell you what it thinks you want to hear. Moreover, when users provide vague prompts, they get generic, plausible-sounding answers that can lead to bad long-term decisions. Without proper prompting, the tool will not capture the context and industry-specific awareness needed to give good advice. Even more dangerous, without more prompt context, the tool will default to validating what it takes to be the user’s feelings and will reinforce users’ negative beliefs about themselves.
Lots of us are using GenAI tools for life-changing advice, but many of us don’t know what we’re doing, don’t know how to formulate prompts that will yield good advice, and, just as likely, are relying on the crummy advice we are getting.
Common errors identified by Gemini also include failing to tell the tool to act as a professional career coach or professional relationship advisor, and failing to ask the tool to ask you questions before giving advice. When I asked Grok whether I should quit my job and become a YouTuber, it gave me a lot of reasons I shouldn’t. But when I told it that it was a professional career counselor, that I had a detailed plan for content creation, and that I had done some surveys to confirm the audience, the response was much more balanced.
Why Should Lawyers and Legal Professionals Care?
So what do all these statistics show us? Lots of us are using GenAI tools for life-changing advice, but many of us don’t know what we’re doing, don’t know how to formulate prompts that will yield good advice, and, just as likely, are relying on the crummy advice we are getting. That’s dangerous.

Just like we tell them about the risk of creating a discovery trail, we need to tell them about liability risks from bad GenAI advice.
Beyond being generally concerned, why should we care? As lawyers and legal professionals, all this should make us shiver. Even setting aside the fact that many of us may be using GenAI tools incorrectly, our clients clearly are.
In addition to creating a massive discovery dump of all kinds of personal material, about which I have written, our clients are making bad decisions about how to act in the workplace and in relationships. And those decisions can lead to liability for them and for the businesses they work for.
What Can We Do?
So what should we do? Just as we have done with other risks that our clients have faced, we need to educate our clients about the risks of GenAI and caution them about these kinds of uses. We need to tell them why the kinds of prompts they may be using can lead to bad decisions and how that can harm them legally.
We need to tell our family law clients, for example, that asking ChatGPT for advice on how to win custody is not the best idea. We need to educate our business clients that trying to resolve a tricky termination situation by listening to Grok may not only backfire but could also lead to serious liability if not handled correctly.
Not part of our job? Maybe. But if we believe that keeping clients out of jams is just as important as, if not more important than, getting them out of one, then it is our job. I think most supervisors and in-house counsel would welcome just this sort of training. Yes, we aren’t psychologists or human relations experts, but we do know about legal risk avoidance, and this sort of training falls right within that wheelhouse.
GenAI is too pervasive in our work and personal lives to ignore its repercussions for our clients. Just like we tell them about the risk of creating a discovery trail, we need to tell them about liability risks from bad GenAI advice. It’s what lawyers who care about their clients ought to be doing. And these are conversations that need to happen before the damage is done.