A nuclear verdict represents every defense attorney’s nightmare scenario: a jury award that far exceeds any reasonable expectation of the case’s value, often reaching into the tens of millions of dollars for cases that traditionally would have settled for far less.

From the defense perspective, these verdicts are particularly devastating because they often emerge from an exposure that appeared manageable during pre-trial evaluation. A case believed to have modest damages can explode into a massive liability, destroying budgets, exceeding policy limits, and creating personal exposure for defendants.

What makes these verdicts particularly concerning for defense teams is not just their size, but their unpredictability. Nuclear verdicts often don’t announce themselves. They lurk beneath the surface of what appears to be routine litigation.

The Disconnect Between Case Exposure and Nuclear Potential

One of the most challenging aspects of nuclear verdicts from a defense standpoint is that traditional case evaluation methods often fail to predict their likelihood. Defense attorneys and insurers have long relied on economic calculations—medical expenses, lost wages, pain and suffering multipliers—to estimate case value and set reserves. It’s been the way we’ve always done things. But as the profession grapples with changing client expectations and evolving jury attitudes, this approach may not be enough.

For example, a wrongful death case involving a minimum-wage worker might traditionally be valued at $500,000 to $1.5 million based on economic loss calculations. Yet the same case, in the right circumstances with the right jury, can generate a $20 million verdict. The disconnect is profound and troubling for defense practitioners who must advise clients on settlement values and trial strategies.

Lawyer-Client Dynamics

There’s another dynamic at play here as well. When lawyers sense the potential for a nuclear verdict, they take steps to better defend and prepare the case. That means spending more time and billing more hours. It could mean increased expert costs. It could mean sophisticated jury consultants and mock trials.


The client, on the other hand, often is in denial and can’t or won’t see the risks. What the client sees is a lawyer wanting to do more work to run up billable hours. They see little to no empirical evidence of risk beyond the gut instinct and intuition of the lawyers.

But when the nuclear verdict happens, the blame game inevitably starts. The client wonders why they weren’t warned more forcefully. The lawyer wonders why the client didn’t listen. 

The Imperative for Early Recognition

Given all this, it would seem a no-brainer: early and accurate recognition of nuclear verdict potential is crucial. It would allow for realistic case evaluation and reserve setting, preventing the shock and panic that occurs when a supposedly modest case explodes.

It would enable more aggressive settlement postures. If you knew which cases could go nuclear, you could shift your strategy early on. Perhaps most importantly, early identification allows defense teams to implement specialized trial strategies designed to defuse nuclear verdict factors.

And it would reduce the tension between lawyer and client.

The challenge is developing systematic approaches to identify these cases early in their lifecycle, before the nuclear potential becomes apparent to opposing counsel or explodes in the courtroom. 


Factors We Think Signal Nuclear Verdict Potential

The common defense wisdom (aka gut instinct) is that there are several red flags that serve as markers for a potential nuclear verdict. Here are some factors commonly cited:

  • Large corporate defendants, particularly publicly traded companies that juries often perceive as having deep pockets.
  • Companies that have faced recent negative publicity, regulatory actions, or prior litigation.
  • Cases involving vulnerable plaintiffs such as children, elderly individuals, or disabled persons that generate higher emotional responses. 
  • Catastrophic injuries which generate sympathy and anger.
  • Safety-related cases, especially when internal documents suggest the defendant knew about dangers but failed to act. 
  • Cases with systemic or widespread impact, such as a single incident affecting multiple victims or a pattern of similar incidents.
  • Jurisdictions with a reputation for being plaintiff-friendly, where local juries impose large verdicts.
  • Recent nuclear verdicts in a jurisdiction that create precedent effects.

And in today’s hyper-connected world, cases that attract media attention or social media buzz carry elevated nuclear verdict risk. Public outrage, whether justified or not, can influence potential jurors before they enter the courtroom.
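Taken at face value, that conventional wisdom amounts to a checklist, and a checklist can be sketched as a simple scoring function. Everything below is illustrative: the factor names and the unweighted count are my own shorthand for the gut-instinct approach, not any real model.

```python
# A sketch of the conventional red-flag checklist as code.
# Factor names and the unweighted count are illustrative, not a real model.

RED_FLAGS = {
    "large_corporate_defendant",
    "recent_negative_publicity",
    "vulnerable_plaintiff",
    "catastrophic_injury",
    "safety_issue_defendant_knew",
    "systemic_or_widespread_impact",
    "plaintiff_friendly_venue",
    "recent_nuclear_verdicts_nearby",
    "media_or_social_media_buzz",
}

def red_flag_score(case_factors: set[str]) -> int:
    """Count how many recognized red flags a case presents."""
    return len(case_factors & RED_FLAGS)

case = {"vulnerable_plaintiff", "catastrophic_injury", "media_or_social_media_buzz"}
print(red_flag_score(case))  # prints 3
```

A single score like this is appealing precisely because it feels objective, which is exactly why its blind spots matter.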

Sounds pretty easy. If some or all of these factors are present, there’s clearly a greater potential for a nuclear verdict. And if these factors aren’t present, lawyers and clients should sleep well, comfortable that there is little danger.

But if these factors told the whole story, we wouldn’t see many nuclear verdicts. The fact that we do means something more may be at play. The reality is that these factors are present in thousands of cases that never go nuclear. Meanwhile, cases that seem to check none of these boxes occasionally explode.


Clearly, we’re missing something. The current approach is like trying to predict the weather by looking out the window: sometimes accurate, often not, and pretty useless for planning ahead.

Enter Data Analytics

It occurred to me recently that we should be looking at something more. What if the answer isn’t found solely in courtroom experience or conventional wisdom, but also in the data? What if we could take all the public information about cases, download that data, and run AI-driven analytics to see what factors might really be predictors?

While it might not be 100% accurate, it could be more reliable than gut instinct alone. Plus, it would provide some empirical evidence that clients might find more credible.
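To make the idea concrete, here is a minimal sketch of what such an analysis might look like. The data is entirely synthetic and the feature names are hypothetical; a real effort would extract these rows from court records. The model is a plain logistic regression written with only the standard library.

```python
import math
import random

random.seed(0)

# Hypothetical binary case features; in a real system these would be
# extracted from public court records.
FEATURES = ["corporate_defendant", "catastrophic_injury", "prior_publicity",
            "plaintiff_friendly_venue", "safety_documents"]

def make_case():
    """Generate one synthetic case with a 'nuclear' outcome label."""
    x = [1.0 if random.random() < 0.3 else 0.0 for _ in FEATURES]
    # Assumed ground truth for the simulation: injury severity and venue
    # drive nuclear risk. This is an illustration, not a finding.
    p = min(0.9, 0.05 + 0.4 * x[1] + 0.35 * x[3])
    y = 1.0 if random.random() < p else 0.0
    return x, y

data = [make_case() for _ in range(2000)]

# Train a logistic regression by batch gradient descent.
w = [0.0] * len(FEATURES)
b = 0.0
lr = 0.5
for _ in range(300):
    gw = [0.0] * len(FEATURES)
    gb = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        err = p - y
        for i, xi in enumerate(x):
            gw[i] += err * xi
        gb += err
    w = [wi - lr * gi / len(data) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(data)

def predict(x):
    """Estimated probability that a case goes nuclear."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

risky = predict([0, 1, 0, 1, 0])    # severe injury in a bad venue
routine = predict([1, 0, 0, 0, 0])  # corporate defendant only
print(risky > routine)  # prints True
```

The point is not this toy model itself, but that a ranking learned from data provides the kind of empirical evidence, however imperfect, that gut instinct cannot.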

I put this question to Damien Riehl, vLex’s VP for Litigation Workflow and Analytics Content. Riehl has done extensive work with data analytics and created the song copyright matrix, which analyzes whether a piece of music was previously created and used in songs written by someone else.

Riehl was quick to say that what I envisioned could absolutely be done, but with a very big IF.


The Big If

To do what I am suggesting and to get reliable results, you need the data. Ah, you say, that’s easy—it’s all public information. The problem is that it’s public information that is expensive to obtain. The federal system relies on an online service called PACER that houses all the digital data of the federal courts.

At a cost of $0.10 per page to copy and download, obtaining the needed data from the federal courts is simply prohibitively expensive. And many states charge to access case files and records as well. Even in states that don’t charge, obtaining the records can be a daunting task.
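The scale of that expense is easy to illustrate with back-of-envelope arithmetic. The $0.10-per-page rate comes from PACER’s fee schedule; the case count and average docket size below are hypothetical assumptions for illustration only.

```python
# Back-of-envelope PACER download cost for a verdict-analytics dataset.
# The rate is PACER's $0.10/page; the case count and pages per docket
# are hypothetical assumptions.
COST_PER_PAGE = 0.10

cases = 100_000        # hypothetical number of cases in the dataset
pages_per_case = 500   # hypothetical average docket size

total_cost = cases * pages_per_case * COST_PER_PAGE
print(f"${total_cost:,.0f}")  # prints $5,000,000
```

Even under modest assumptions, a dataset large enough to train on runs into the millions of dollars before any analysis begins.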

Riehl also told me that his company, vLex, does have a lot of data about court cases. But it’s not all of it. Neither Thomson Reuters nor LexisNexis has all the data either. If you combined the data from all three companies, you might come close, says Riehl. But that’s probably not happening.

Riehl says there is another solution: convincing Congress to eliminate the charges for PACER data so that anyone and everyone could access it, and convincing state legislatures to do the same. But it has been tried several times, and while it came close, at least in Congress, each time it has failed. The argument is that the roughly $140 million in PACER charges each year is the only way to fund the federal court system.

But this is not just a defense bar issue, argues Riehl. Plaintiffs’ lawyers would benefit as well. Insurers would benefit. And the public would have greater access to records that are supposed to be public.

Is It the Answer?

On its face, using data seems like a good solution. Says Riehl, “If you had detailed historical verdict data—especially with granular factors like plaintiff profile, defense strategy, juror demographics, and judge tendencies—you could train a model that beats intuition and simple red-flag checklists.”

But it’s not a panacea. According to Riehl, “better doesn’t mean perfect. Some cases go nuclear because of unique trial moments, unexpected witness testimony, or last-minute evidentiary rulings—none of which show up in structured pre-trial data. The most predictive variables (e.g., witness likeability, counsel skill, juror reactions, community sentiment that week) aren’t captured in standard case files.”

And changing jury culture makes looking backward at the data problematic as well: juror attitudes can shift radically and quickly, according to Riehl, due to cultural events, social media influence, and political climate. And the data we have often contain “incomplete signals.”

Riehl puts it well: data would help, but you still “need to combine it with qualitative insights from trial lawyers who know the jurisdiction, judge, and opposing counsel.”

So trial lawyers may have some work to do for some time yet.