Got a problem? Ask GenAI to decide what to do: 

Dear ChatGPT: I like being in a relationship where my partner is a bit controlling because it makes me feel secure. Is this normal?

Dear Gemini: My partner is gaslighting me and I think he’s abusive. Should I leave immediately and cut him off forever?

Dear Grok: Should I quit my job and become a full‑time YouTuber tomorrow?

Dear Copilot: I hate my boss and I’m miserable. Should I quit right now?

Millions of people are asking LLMs just these sorts of questions every day. And getting bad, incorrect, even dangerous advice. It’s easy to do, and you will always get some sort of answer. And millions may be acting on that advice as if it were sound.

But too many people think they know how to use the tools to get good advice when in fact few really do. I’ve seen friends ask highly personal questions as casually as if they were asking for restaurant recommendations. And get problematic results that they accept as gospel.

So What’s the Big Deal?

So a lot of people are using GenAI for personal advice. What’s the big deal?

Asking ChatGPT for personal advice would not be so concerning if we could be sure the advice being given was sound and thorough. Or if the tools would admit when they don’t have enough information to respond, weren’t simply telling you what you wanted to hear, and didn’t hallucinate. But these are life decisions. They are critical to a person. And getting them wrong could be catastrophic.

LLM answers are highly, if not completely, dependent on the prompts, and I wonder whether the millions of people using the tools for these critical decisions know that. Or that the quality of the question shapes how dangerous or misleading the answer can be. Or whether they understand the risks, biases and characteristics of LLM responses well enough to discriminate between good advice and bad.

As an example, I asked Perplexity to give me some sample prompts that were likely to produce a bad result. Some of those prompts appear above. And the reasons they are bad, according to Perplexity, are instructive:

  • This input contains no context, no nuance; the AI may just mirror your frustration or push you toward a dramatic move without exploring alternatives.
  • While safety matters, this phrasing pressures the AI to give a yes/no ultimatum rather than helping you assess risk, plan an exit, or seek professional support.
  • With this prompt, the AI may normalize or soften red flags instead of flagging coercive control.
  • This prompt contains no context about savings, skills, or market; the AI may either romanticize the leap or scare you off entirely, without helping you structure a phased transition.

Some Usage Statistics

The usage statistics are also concerning:

  • In one survey, 21% of respondents had asked GenAI for relationship advice; of those, 23% asked how to improve their relationship and 22% asked how to fix it.
  • 44% of married Americans ask GenAI tools for marriage advice; 65% of married millennials do so. AI is ranked as a more trusted source for answers than a professional therapist (2025 Marriage Survey).
  • 42% of young professionals, 34% of millennials, 29% of Gen Xers and 23% of baby boomers have used GenAI tools to help them find or decide on a career; more than 1 in 3 have asked a tool to make career decisions (2025 Fortune Study).

How Widespread is the Problem?

I decided to ask another source, Gemini, how likely it is that GenAI users know how to avoid these dangers. The stats it found are even more alarming: according to BusinessWire, 54% of workers think they are proficient at using AI, but when tested, only 10% were. The average worker scored only 2.3 out of 10 in prompting ability. That’s concerning in and of itself. But then consider that this was a survey of workers, who are presumably further along the AI learning curve than the general public.

I also asked Gemini to point out the dangers of using GenAI tools (like itself) for relationship and career decisions. Here is what it said: Poorly structured prompts often “lead the witness” and allow the tool to tell you what it thinks you want to hear. Moreover, when users provide vague prompts, they get generic, plausible-sounding answers that can lead to bad long-term decisions. Without proper prompting, the tool will not capture the context and industry-specific awareness needed to give good advice. Even more dangerous, without more prompt context, the tool will default to validating what it takes to be the user’s feelings and will reinforce users’ negative beliefs about themselves.


Common errors identified by Gemini also include failing to tell the tool to act as a professional career coach or professional relationship advisor, and not asking the tool to ask you questions before giving advice. When I asked Grok if I should quit my job and become a YouTuber, it gave me a lot of reasons I shouldn’t. But when I told it that it was a professional career counselor, that I had a detailed plan for content creation and that I had done some surveys to confirm the audience, the response was much more balanced. A reframed prompt along those lines appears below.
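
For illustration only, here is the sort of reframed prompt that advice points toward. The plan and survey details are hypothetical, not drawn from an actual exchange:

Dear Grok: Act as a professional career counselor. I’m considering quitting my job to become a full-time YouTuber. I have a detailed twelve-month content plan and survey results suggesting there’s an audience for my niche. Before you give any advice, ask me whatever questions you need to assess the risks and help me structure a phased transition.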

Why Should Lawyers and Legal Professionals Care?

So what do all these statistics show us? Lots of us are using GenAI tools for life-changing advice, but many of us don’t know what we’re doing, don’t know how to formulate prompts that will elicit good advice and, just as likely, are relying on the crummy advice we get. That’s dangerous.


Beyond being generally concerned, why should we care? As lawyers and legal professionals, all this should make us shiver. Setting aside the fact that many of us may be using GenAI tools incorrectly ourselves, our clients clearly are.

In addition to creating a massive discovery dump of all kinds of personal stuff, about which I have written, our clients are making bad decisions about how to act in the workplace and in relationships. And those decisions can lead to liability for them and for those they work for.

What Can We Do?

So what should we do? Just as we have done with other risks our clients have faced, we need to educate them about the dangers of GenAI and caution them about these kinds of uses. We need to tell them why the kinds of prompts they may be using can lead to bad decisions and how that can harm them legally.

We need to tell our family law clients, for example, that asking ChatGPT for advice on how to win custody is not the best idea. We need to educate our business clients that trying to resolve a tricky termination situation by listening to Grok may not only backfire, it could lead to serious liability if not handled correctly.

Not part of our job? Maybe. But if we believe that keeping clients out of jams is just as important as getting them out of one, if not more so, then it is our job. I think most supervisors and in-house counsel would welcome just this sort of training. Yes, we aren’t psychologists or human relations experts, but we do know about legal risk avoidance, and this sort of training falls right within that wheelhouse.

GenAI is too pervasive in our work and personal lives to ignore its repercussions for our clients. Just like we tell them about the risk of creating a discovery trail, we need to tell them about the liability risks of bad GenAI advice. It’s what lawyers who care about their clients ought to be doing. And these are conversations that need to happen before the damage is done.

What sort of people were these? What were they talking about? What office did they belong to?

The Trial, Franz Kafka

Imagine being summoned to some strange place, placed in a room with a bunch of other people and told that you and your group have to decide something that involves several million dollars. No background or other information. 

Your phone and computer are taken away. You’re told you have to make the decision based only on what you hear in the room and nothing else. Then you get a lecture or two from a couple of people in suits about what you are to decide. Each lecture is interrupted by discussions, which you aren’t privy to, between them and some person who seems to be in charge. Then other individuals provide information by answering question after question, many of which make little sense. And the questions and answers are also interrupted by discussions you can’t hear with the person in charge. Worse than that, some of the questions and answers are just read to you.

You’re sent home and told in no uncertain terms not to read anything or talk to anyone about this big decision. You come back the next day for more of the same. You still don’t know exactly what you are to decide. You’re forbidden to ask any questions.

More lectures and interruptions. Finally, the person who seems to be in charge tells you what you are to decide and how. But that is read to you in a monotone and is mostly incomprehensible. No further explanation is given, and you aren’t allowed to ask questions to clarify what you don’t understand.

Then you’re packed into a room with your group and told not to come out till you make the decision.

Sound dreadful? That, my friends, is a trial in today’s times.

That’s a Trial

I was a trial lawyer for many years but hadn’t been back in a courtroom for some time until recently, when I had a chance to observe a civil trial. What I saw made me realize how far removed courtroom decision making by jurors is from how people make important decisions “in real life”.

Indeed, the type and manner of information we provide to jurors has not changed much over the past 50 years, if not longer. But the type of information we consume, how we process it and how we learn are completely different than they were even 10 years ago, much less 50.

Courtroom information is linear and confined. It’s based on lecture and testimony. But real-world information is multimodal, individually directed and interactive. So when jurors come to the courtroom, they are adrift in some strange world that makes little sense to them and are asked to decide something without the information and tools they usually use. Just as in my hypothetical, jurors are thrown into a foreign environment where they don’t know or understand the rules.

They are confined, literally and figuratively, to a small box while the trial goes on for hours. They are deprived of the sources they usually use for information. They are expected to grasp critical information through disjointed testimony, through questions and answers that make little sense. And the flow of questions and answers is often interrupted with objections and bench conferences.

And God help them when deposition testimony is read: is there anything more boring and harder to pay attention to?


Courtroom Learning is Passive

Jurors are forced to absorb the information passively; they can’t ask questions most of the time, and they can’t ask for more information to help them. In many cases, they aren’t even allowed to take notes. They are expected to pay close attention to boring, monotonous, repetitive and hard-to-understand testimony and argument.

They lose patience and get frustrated, making it even harder to pay attention.

At the conclusion of the testimony, the judge gives them instructions that are also incomprehensible. Very little is clothed in everyday language. Then they are locked away with people they don’t know and forced to make a decision. 

How can we expect jurors to understand and process information that’s often complicated when it’s presented in the driest manner possible and in unusual and incomprehensible ways? How can we ask them to make the best decision possible with what they are given?


It’s a Matter of Respect

Worse still, courtroom proceedings give every appearance of being for the benefit of the lawyers and the judge, not those who have the decision-making responsibility. Jurors are told when to be there, where to sit and what to do. We demand that they give us their most valuable asset: their time, often at the expense of work and family responsibilities.

Yet we force them to listen to lawyers drone on in opening and closing statements. They can’t raise their hand and ask questions. They can’t say, “Can you repeat that?” They can’t even say, “I didn’t hear what you just said.” They are completely at the mercy of the lawyers and judges.

Real Life Decision Making

Let’s compare that to how we make decisions in real life. We get information from a variety of sources and in a variety of ways. We watch videos; we ask questions, more and more often of GenAI platforms. And if we don’t understand what GenAI tells us, we can ask it to explain in a way we do understand. It’s interactive learning. It’s a process that matches our shorter attention spans.

Think of it this way: would you rather sit silently in a lecture or have a question-and-answer session, a dialogue if you will, with an expert?

What Can Be Done?

Of course, we have to recognize and respect certain evidentiary and procedural trial protections that ensure decisions are as fair and valid as they can be. That is fundamental to our process. But that doesn’t mean we have no options to make things better. It all starts with attitude.

Most critically, we need to make the jurors the most important people in the room. We need to respect their time and make the process and testimony as concise and understandable as we can. We need to stop wasting their time with things that we don’t need them for, like sitting through bench conferences ad nauseam. 

We need to ensure that when they are told to be in the courtroom at 9am, the trial and their role begins at 9am. Don’t make them sit for hours twiddling their thumbs while “important business” is conducted out of their earshot. We need to give them breaks every hour. We need to stop early, not stay late. 


Lawyer Responsibility

As lawyers, we need to understand that attention spans are short, so let’s get our message across quickly and in understandable ways. We need to double down on making our evidence more interesting. We need to come up with new ways of presenting evidence that are more consistent with how people get information and make decisions outside the courtroom.

I help teach a trial technology persuasion class where we show lawyers how to better use technology. I can’t begin to describe the improvements we see in storytelling and connecting with decision makers once lawyers master fundamental tech skills. We need more of this to make our presentations interesting and effective.

We need to be open to new ways to tell our stories as well. Early in my career, I was part of a trial team that decided to prepare an expensive video for use at trial. In those days, video storytelling techniques were unheard of in the courtroom. Yet we did it. And it was effective. We don’t have to keep doing things the same way.

Judicial Responsibility

The judiciary needs to play a role as well. Judges need to give potential jurors more information about the process and what to expect. They need to explain why things are done in the way they are. And judges also need to do this in interesting ways. 

One of the foremost experts on the use of technology in the courtroom, Judge Scott Schlegel, recently talked about this in his newsletter. He described how he used GenAI tools to create short videos for potential jurors, among other things. The topics include what happens when you get a jury notice, what you should bring to the courtroom, what happens if you miss your date, and so on. Judge Schlegel says, “If we can create short videos that reduce confusion, missed appearances, unnecessary continuances and staff time spent answering the same basic questions, we improve real outcomes for real people.”

A New Way of Thinking

We need more of this kind of thinking. For example, we could require that opening statements be presented by video with all the objections edited out. We could work hard to eliminate objections and bench conferences generally. We can provide jurors with information about the dispute in advance so they understand what’s at stake. We need to take a hard look at the process and streamline the parts that involve the jury.

If we don’t? Our juries will no longer be juries of our peers but of a select group of people with nothing else to do or who are out of touch with the real world. And those who do serve will come away with less respect for our time-honored process and procedure, and for the rule of law itself.

It all starts with attitude and meeting jurors where they are today.

Think your law firm’s cybersecurity insurance has you covered? Think again.

New survey data reveals troubling gaps: only 45% of policies cover ransomware, despite 1 in 5 organizations reporting ransomware incidents.

And 45% of policies can be voided due to inadequate security controls, something many firms only discover after filing a claim.

Law firms need to read their cyber policies with the same scrutiny they’d apply to a client’s contract. Because when (not if) an incident happens, “we thought we were covered” won’t pay the bills.

Here is my Above the Law post.

It’s easy to get complacent with GenAI tools that not only answer all our questions but can also do things like analyze our behavior and make binding decisions about who we are and what we can access. But as with many things associated with GenAI, we aren’t looking hard at the risks. The “What Ifs.”

OpenAI’s Age Prediction Model

We can see this with OpenAI’s announced use of age prediction modeling to determine whether a user is underage. It’s hard to argue against something that sounds this good. But there are some risks. Some proverbial slippery slopes.

Continue Reading Trust Us We’re Algorithms. The Slippery Slope of AI Classification

Proposed Rule 707: a solution in search of a problem. Neither plaintiffs nor in-house lawyers are excited.

And while rulemaking authorities debate this proposed new federal rule about AI-generated evidence, we’re ignoring real crises: deepfake proliferation, the cost of trials and litigation, and the time required for dispute resolution.

Here’s my post for Above the Law. 

Here is my recent Above the Law Post: Cleveland State University College of Law and AltaClaro just launched a course entitled “Fundamentals of Prompt Engineering for Lawyers” and 130+ students immediately signed up. It’s a great idea. 

But right now it’s just extracurricular.

If GenAI competency is truly essential (and ethical rules would suggest it is), should courses like these be part of a mandatory curriculum? Is a “nice to have” approach enough?

How can small firms stay on top of the contract change flood from software providers?

57% of common software platforms changed their user agreements in the last 90 days. 165 of those changes involved data or security terms that directly impact law firms’ ethical obligations.

When we get notice of changes in terms with our apps, most of us click “accept” without reading. But for lawyers, those changes to things like Slack, Zoom, and other tools we use daily could affect confidentiality duties, data rights, and client relationships.

Creative Alliance’s “Take Care” product is an AI tool that purports to automatically detect and analyze contract modifications with actual lawyer review. 

The ethical opinions are clear: we need to monitor our software providers and their terms. Take Care is designed to help.

Here’s my post for Law Technology Today.

A new Thomson Reuters report reveals what may be an uncomfortable reality that many don’t want to hear. Quite simply, AI will require fewer lawyers and less billable time to do tasks. But 90% of legal dollars still flow through hourly billing, the same structure we’ve had since the 1950s. Think about the math: if AI cuts the time to draft a brief from 25 hours to 10, a lawyer billing $300/hour today would need a rate of $750/hour to make up the difference. I’m not sure clients will pay that premium just to subsidize law firm profits. It may be necessary to fundamentally rethink the billable hour business model.
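
To spell out the arithmetic behind that figure: at $300/hour, a 25-hour brief bills 25 × $300 = $7,500. If the same brief now takes 10 hours, matching that $7,500 requires $7,500 ÷ 10 = $750/hour, a 2.5x rate increase just to stand still.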

My post for Above the Law.

Despite what we hear from many vendors and pundits, the commoditization of GenAI may be inevitable. Here are my thoughts on what that could mean for legal tech vendors and for the lawyers and legal professionals relying on GenAI tools, and what we can do to prepare. Here is my Above the Law post on this subject.

Ads are “like a last resort for us for a business model…ads plus AI is sort of uniquely unsettling.” Sam Altman, May 2024, as quoted on Hacker News.

“To start, we plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.” OpenAI, January 16, 2026, also from Hacker News.

And so it begins.

Continue Reading The GenAI Siren Song, the Danger of Enshittification and Tying Ourselves to a Mast