It’s easy to get complacent with GenAI tools that not only answer all our questions but can also analyze our behavior and make binding decisions about who we are and what we can access. But as with so much associated with GenAI, we aren’t looking hard at the risks. The “What Ifs.”

Proposed Rule 707: a solution in search of a problem. Neither plaintiffs’ lawyers nor in-house counsel are excited.

And while rulemaking authorities debate this proposed new federal rule on AI-generated evidence, we’re ignoring the real crises: deepfake proliferation, the cost of trials and litigation, and the time dispute resolution takes.

Here’s my post for Above

How can small firms stay on top of the flood of contract changes from software providers?

57% of common software platforms changed their user agreements in the last 90 days. 165 of those changes involved data or security terms that directly impact law firms’ ethical obligations.

When we get notice of changes in terms with our

Ads are “like a last resort for us for a business model…ads plus AI is sort of uniquely unsettling.” Sam Altman, May 2024, as quoted on Hacker News.

“To start, we plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.” OpenAI, January 16, 2026, also via Hacker News.

And so it begins. Continue Reading: The GenAI Siren Song, the Danger of Enshittification and Tying Ourselves to a Mast

After watching lawyers get sanctioned almost daily for GenAI hallucinations and inaccuracies while many others claim they’ve never used it, maybe it’s time for mandatory GenAI CLE. Three states already require tech training, and 39 states have adopted Comment 8 to the model competency rule. So there is precedent.

GenAI is too impactful to leave GenAI