A new Washington Post analysis of 47,000 ChatGPT conversations reveals a troubling pattern. People are sharing deeply personal information, getting advice that tells them what they want to hear (not necessarily what’s accurate), and creating potential discovery goldmines for future litigation.

The analysis found users discussing their emotions, sharing personally identifiable information and medical details, and requesting drafts of all manner of documents. And whatever users ask, ChatGPT says ‘yes’ roughly ten times more often than ‘no.’

As lawyers, we have an ethical duty to understand the risks of the technology we use. The question is no longer whether GenAI will impact our practices. It’s whether we’ll educate our clients about these dangers before they create their own smoking guns. Here’s my post for Above the Law.