
The Algorithm as a Human Artifact: Implications for Legal {Re}Search, by Joe Hodnicki, Law Librarian Blog


Susan Nevelow Mart is a law professor at the University of Colorado’s Law School. Her article has earned significant attention and recognition, and for good reason.

Most lawyers and paralegals learn legal research using Westlaw and Lexis, with an emphasis on using headnotes to find relevant law. Because humans write both the headnotes and the search algorithms, there is considerable variation in the results of our legal research.

[W]hen comparing the top ten results for the same search entered into the same jurisdictional case database in Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw, the results are a remarkable testament to the variability of human problem solving. There is hardly any overlap in the cases that appear in the top ten results returned by each database.

Hardly any overlap? Imagine how this affects cases argued by the parties and decided by the courts. But there's more: the percentage of relevant results also differs across providers.

One of the most surprising results was the clustering among the databases in terms of the percentage of relevant results. The oldest database providers, Westlaw and Lexis, had the highest percentages of relevant results, at 67% and 57%, respectively. The newer legal database providers, Fastcase, Google Scholar, Casetext, and Ravel, were also clustered together at a lower relevance rate, returning approximately 40% relevant results.

Professor Mart reminds us that thorough legal research has always involved redundancy. We already know that different search terms yield new results to investigate. She recommends running multiple searches across multiple resources, and calls for more accountability from legal database providers.

We cannot change what the legal database providers have already built. We do have control over the thoroughness of our research and our search strategies. -CCE