OpenAI CEO Sam Altman recently said something that should grab every lawyer's attention. Appearing on the Theo Von podcast, Altman said, "So if you go talk to ChatGPT about your most sensitive stuff, and then there's … a lawsuit or whatever, … we could be required to produce that."

The Risk
Wait. What? That could mean anything and everything someone puts into ChatGPT (or any other publicly facing LLM) could be discoverable. If you're not a litigator, that might not mean much, but for those of us who are, it's a chilling danger for both our clients and ourselves.
I can't tell you how many times over the years I've reviewed client documents, emails, and text messages only to find something that shouldn't be there. The story is always the same: "I didn't think anyone other than the recipient would ever see that." And that was for documents that were at least being exchanged with another person.
Then there were the file memos that people thought were private. I even had one engineer tell me, when I asked if he had given me all his documents, that yes, he had, except for materials in his "private" file at home. It was a good lesson in appreciating the mindset differences between lawyers and their clients when it comes to discoverability and privacy.
GenAI Use
But I shudder to think of all the stuff being shoved into large language models. People are encouraged to use these models, and do. To brainstorm ideas. To seek personal advice. To draft documents. To tell their secrets. All without a thought that what they put in and what they get out may be fair game for discovery down the road.
That means the thought processes revealed by those exchanges can be exposed. What they considered. What they rejected. Even the advice from the LLM they didn't follow. The offhand comments. The emotional outbursts. The plotting. All of it is fair game.
But What About Privilege?
I also wonder about how lawyers are using the tools. Many of us have encouraged lawyers to use them to find weaknesses in the other side's arguments and even our own. To bat ideas back and forth. To use them like a first-year associate.
But there is a big difference between talking to an associate in your own firm and talking to a nonhuman tool over which you have no control and which may or may not be keeping your materials confidential. A tool that has no relationship with the client. We have all been cautioned not to place client confidences into the public LLMs. But what about when something that could be considered attorney work product is placed into the LLM? The stuff that shows our thought process. The ideas we considered and rejected. Our theories of the case. Our assessment of the witnesses. The other side's case. Have we waived the work product privilege?
Let’s Ask ChatGPT
I'm not sure of the answer, but here is what ChatGPT thinks: "Placing thoughts and ideas into a large language model (LLM) like ChatGPT could potentially waive work product protection. If a lawyer uses a public, consumer-facing LLM (like ChatGPT or Copilot) without a confidentiality agreement or enterprise-level protections, inputting sensitive legal analysis or impressions might be considered disclosure to a third party. If the provider reserves the right to retain, review, or use the input data, a court might find that confidentiality was not preserved."
Let's say you're arguing to the court that your use of an LLM did not waive any privilege. Your opponent stands up, smiles smugly, and proceeds to quote ChatGPT. Not a pretty scene.
What About Protections?
Yes, the various LLMs have certain confidentiality provisions, but is that enough? Do you have a clear understanding of what they are? And can you trust the provider not to advertently or inadvertently violate those provisions? That very thing happened in 2024, when Microsoft disclosed contract provisions that allowed its employees to retain and review prompts falling into certain categories, like hate speech and violence. Google and Anthropic have revealed the same. How can you be sure that what you say to an LLM is not being looked at by a human? And even if it isn't, does that still protect you?
Certainly, the private, enterprise systems designed to be used by and protect lawyers appear to offer greater protection. But even those providers, when faced with a subpoena, might be forced to produce inputs and outputs.
Another thing to remember: most LLM providers keep the material, perhaps forever. Inputs and outputs don't just disappear.
What Can We Do?
Courts and rule-making bodies are going to have to grapple with these issues soon, and the results may be far from consistent. In the meantime, the best we can do as lawyers is to let our clients know what not to do. Offer training sessions, just as we did with email and other tools.
And as lawyers, we need to be thoughtful about what we put into an LLM and how we use GenAI tools. Think through the implications if what you input and get back were somehow to become discoverable. Think through the privilege and waiver issues. Read the terms and conditions.

I'm all for using GenAI and think the tools should be used in many ways. But we can't forget the risks.
Some Final and Best Advice?
The best advice comes from my good friend and Mississippi trial lawyer, Jimmy Wilkins: always follow the New York Times rule. Don't put anything into an LLM that you would be uncomfortable seeing in a New York Times article.
A tip of the hat to Judge Scott Schlegel, whose recent post put me onto this issue and its potential dangers. If you aren't following his blog, you're missing some of the most insightful thoughts and ideas out there.