Want to know which litigation analytics product to use?

Some 27 librarians tested seven platforms from the best-known litigation analytics providers to see which one was the best. They presented their findings at the American Association of Law Libraries (AALL) annual meeting this week in Washington, D.C.

The platforms evaluated were those offered by Bloomberg Law, Docket Alarm, Docket Navigator, Lex Machina, Lexis Context, Monitor Suite and Westlaw Edge. The test panel consisted of law librarians who asked the platforms sixteen real-world questions. The questions were the kind most legal professionals would expect litigation analytics to be able to answer, such as how many times a certain lawyer had appeared before a certain judge or how many class action matters a firm had defended. The answers, of course, could only be derived from federal court data.

The short answer: these platforms do slightly different things and work in slightly different ways, so there is no real winner. And the platforms work best when they are run by experienced, well-trained people who then manually review and analyze the results.


Here are my five takeaways from the competition:

Take Away Number 1: There is no one-size-fits-all analytics platform: the best platform to use depends heavily on the type of case and the budget you have. There are very real differences in the platforms: the reviewers got seven different answers to every question they asked, none of which was completely correct. And the programs differed greatly in ease of use and in how readily each could be adapted to the problem presented. The Bloomberg product, for example, was the easiest to use but had less functionality and adaptability. The Docket Alarm product, on the other hand, was the most complex to use but could be adapted to a wide variety of inquiries: so, while it comes with a big learning curve, the sky is the limit on what it can do. Again, the best product depends on the intended use and the types of inquiries: Who will be using it? How skilled are they? What kinds of inquiries are anticipated? For a big firm with different types of work, this means several platforms might be needed, which is costly. For smaller firms without knowledge and information management expertise, ease of use may be the most important characteristic. We just aren’t yet at the place of one size fits all.

Take Away Number 2: You really have to know what you’re doing and combine the analytics programs with other tools to get the best result. When asked, for example, how many class actions the firm O’Melveny had defended, none of the platforms gave a solidly correct answer. Similarly, the reviewers got a different result from every provider to the question of how many times a law firm had appeared before a particular judge. In both cases, the librarians had to use a combination of filters, features, nuances and work-arounds to get the right answer. Use of litigation analytics thus still requires lots of knowledge and experience, and you have to define carefully what you want and how to search for it. (Interestingly, the reviewers suggested framing slightly vague questions based on real-world examples and documenting search strategies with things like date ranges and steps taken.) We are clearly a ways away from intuitive, easy-to-use products that attorneys can run on their own. This, in turn, makes the products harder to sell and market.


Take Away Number 3: We are a long way from relying solely on the platform of any provider. In every example, a manual review of the answers to the question presented had to be done to get an accurate result. This, of course, means that to get the very best result, an attorney must review the results obtained together with a skilled operator of the platform.

Take Away Number 4: Even though the federal court system is entirely digitized, there are still gaps in the data that cause problems for analytical programs; there are lots of data problems that have yet to be cleaned up. Typos, for example, still bedevil the PACER system. There remain tagging problems. Nomenclature is still a real issue. So, it’s not all the providers’ fault that there are seven different answers to each question: how each platform parses and handles sloppy or dirty data yields different results. And the federal data is more digitized than any other litigation data, such as that from state courts. Again, we have a long way to go.

Take Away Number 5: Because of present functionality and consistency issues with the platforms and the need for skilled operation, the training that vendors offer needs to be robust to ensure good results. If they want their customers to trust the products, vendors have to invest heavily in training. They also have to be very transparent about what their products will and will not do. Otherwise, they risk the conclusion that data analytics is just a big bust that won’t really do much.


One side note: while the testing group concluded that the vendors should test their products not with attorneys but solely with law librarians (who, for the time being, are the ones most likely to use the products), I think this is misplaced. Certainly, today, the products all need a level of operational expertise that most attorneys don’t yet have. But vendors can’t assume that this will or should always be the case. Without testing products with attorneys and developing products for attorneys’ end use, data analytics will remain in the hands of the very big firms that can afford knowledge and information management specialists. Given the power of the programs and the advantage they confer, that’s not a fair result. In addition, having the person who best knows a case, its themes and the story that needs to be told about it, able to ask specific, relevant questions is critical to getting the most effective result. That person is usually the lawyer in charge of the matter.

All in all, as much as I and other commentators have championed the potential power of litigation analytics, the AALL competition reveals some holes that need to be filled for that potential to be realized. But at the end of the day, the best comment during today’s panel discussion of the findings was from Kevin Miles, Manager of Library Services for Norton Rose Fulbright: vendors must keep on keeping on. Analytics are powerful tools. They make systems transparent. They reduce competitive advantages. Just because today’s analytical products have some limitations doesn’t mean we or the vendors should throw in the towel.


In short, the key takeaways for vendors: be transparent about how the products work and what they can do, offer robust training, test with librarians rather than attorneys, and, most importantly, recognize that human interaction can’t yet be eliminated. That said, I’m not sure I agree with the advice against testing with attorneys if the goal is a use case where lawyers can run these tools themselves. If vendors follow it, use will stay limited to those large firms that can hire the necessary expertise.