Despite what seems to be an accepted truism, AI hallucinations aren’t necessarily completely random. That’s the key insight from a new physics-based analysis by a group of scientists and engineers, and it may change how we should be using GenAI tools.
The key finding: GenAI systems have a deterministic mechanism that causes output to flip from reliable to fabricated at a calculable step. And that step arrives exactly when a lawyer’s need is greatest: on novel, unsettled legal questions where training data is sparse.
That’s good news and bad. The good: if failure is somewhat predictable, you know to verify more heavily when you are in ambiguous areas and can place more confidence in well-settled information.
The bad: the stretch of accurate output that precedes the failure builds false confidence in uninformed users, making the fabrication harder to catch, not easier.
My post for Above the Law.