Predicting Supreme Court votes based on oral argument metrics

Applying forecasting techniques to patent cases

Court watchers and interested parties pay close attention to the ebb and flow of oral argument in individual cases. Usually the best way to understand an oral argument is to read the briefs and listen to the argument, but there is also mounting evidence that a more quantitative approach is useful in predicting the votes of individual justices and, ultimately, case outcomes. A big data approach allows for analysis that is not possible on a case-by-case basis: for instance, it is simply not feasible to listen to every argument since 1955 in order to discern the historical trends emerging in judicial behavior.

The disagreement gap

In our forthcoming article, The New Oral Argument: Justices as Advocates, we show that the justices overwhelmingly tend to have more to say to the party they ultimately vote against. Sarah Shullman made this suggestion in 2004 based on a study of cases from the 2002 Term; John Roberts came to the same conclusion in a 2005 article before he was appointed to the Bench; and Johnson et al., among others, revisited the issue in 2009.

In The New Oral Argument, we show that this disagreement gap has been a feature of Supreme Court oral argument since at least the 1960s, but that the size of the gap ballooned in the mid-1990s and has been increasing ever since. We also have a number of other metrics, taking into account more granular data, including word counts, interruptions, and the difference between questions and comments, all of which we show follow patterns that help predict case outcomes.
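To make these metrics concrete, the sketch below (in Python) shows one way the granular measures just described, such as speech episodes, word counts, questions versus comments, and interruptions, could be tallied for each justice from an argument transcript. The Utterance structure, its field names, and the convention of marking cut-off speech with "--" are illustrative assumptions rather than a description of our actual code or data.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Utterance:
    justice: str   # speaking justice, e.g. "Breyer"
    side: str      # advocate being addressed: "petitioner" or "respondent"
    text: str      # transcribed speech

def tally_metrics(utterances):
    """Tally speech episodes, words, questions, comments, and cut-offs
    for each (justice, side) pair in a single oral argument."""
    stats = defaultdict(lambda: {"episodes": 0, "words": 0, "questions": 0,
                                 "comments": 0, "interruptions": 0})
    for u in utterances:
        row = stats[(u.justice, u.side)]
        row["episodes"] += 1
        row["words"] += len(u.text.split())
        if u.text.rstrip().endswith("?"):
            row["questions"] += 1
        else:
            row["comments"] += 1
        if u.text.rstrip().endswith("--"):  # speaker cut off mid-sentence
            row["interruptions"] += 1
    return stats

From per-justice tallies like these, one can compute the disagreement gap and the other ratios discussed below.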

Reduction to practice

Our disagreement gap analysis in the figures below shows the difference between the number of times each justice spoke to counsel for the petitioner and counsel for the respondent. Because being spoken to more often is actually a very bad sign for the advocate, we invert those numbers so that a positive score reflects a gap favoring the petitioner (dark navy bars) and a negative score reflects a gap favoring the respondent (red bars). The bars reflect not only speech episodes directed at each side but also the ratio of questions to comments, patterns of interruptions, and other factors. We also indicate uncertainty, or a “weak signal” from the data, where appropriate (gray bars). The actual votes of the justices in these cases are indicated on the right-hand side of each figure.
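The sign convention and the “weak signal” category can likewise be expressed in a few lines. The sketch below, again in Python, assumes only the raw counts of how often a justice spoke to each side; the threshold separating a weak signal from a meaningful gap is an illustrative assumption, not our calibrated cutoff.

def disagreement_gap(to_petitioner, to_respondent, weak_threshold=3):
    """Return (score, label) for one justice in one argument.

    Being spoken to more often is a bad sign for that side, so the raw
    difference is inverted: a positive score predicts a vote for the
    petitioner, a negative score predicts a vote for the respondent.
    """
    score = -(to_petitioner - to_respondent)
    if abs(score) < weak_threshold:
        label = "weak signal"        # gray bar
    elif score > 0:
        label = "favors petitioner"  # dark navy bar
    else:
        label = "favors respondent"  # red bar
    return score, label

# Example: a justice who addressed the petitioner's counsel 18 times and the
# respondent's counsel 7 times scores -11, a predicted vote for the respondent.
print(disagreement_gap(18, 7))  # (-11, 'favors respondent')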

In preparation for this year’s Supreme Court IP Review at Chicago-Kent College of Law, Professor Ed Lee asked us whether there was anything interesting to report from the oral argument data for last Term’s patent cases. We thought it was a great opportunity to test some predictive models we have been working on by applying our metrics to last Term’s intellectual property cases and seeing whether the outcomes could have been predicted based on their oral arguments.

WesternGeco LLC v. ION Geophysical Corp.

[Figure: Prediction based on oral argument in WesternGeco v. ION Geophysical]
In WesternGeco v. ION, the Court held 7-2 that lost profits in overseas markets attributable to patent-infringing exports are recoverable in patent litigation. Our model correctly predicted that Justices Gorsuch and Breyer would vote in favor of the respondent and that Justices Kennedy, Sotomayor, Alito, and Kagan would vote in favor of the petitioner. The WesternGeco analysis highlights how useful the model can be in predicting outcomes that don’t fall along traditional liberal-conservative fault lines, such as the Breyer-Gorsuch coalition.

SAS Institute Inc. v. Iancu

[Figure: Prediction based on oral argument in SAS Institute Inc. v. Iancu]
In SAS Institute v. Iancu, the Court held 5-4 that when the Patent Trial and Appeal Board institutes inter partes review of a party’s challenges to the validity of an issued patent, it must make a decision on all of the patent claims contested by that party. The majority arrived at this conclusion by holding that the word “any” meant “every” in the relevant statute. In SAS Institute, our model correctly predicted the votes of all of the justices except for the habitually silent Justice Thomas. Given the liberal-conservative divide evident among the eight speaking justices, we would have predicted that Justice Thomas would vote with the conservative majority in favor of the petitioner.

Oil States Energy Services, LLC v. Greene’s Energy Group

[Figure: Prediction based on oral argument in Oil States v. Greene’s Energy]
In Oil States, the Court rejected a constitutional challenge to the system of inter partes review introduced in the 2011 patent reform legislation, the America Invents Act. Oil States is the most intriguing of the three cases because so much rested on the outcome of the case and because, as seen below, our predictive model misread Justice Breyer’s eventual vote. Earlier this year, we were engaged as paid consultants and asked to predict the outcome of Oil States.

After reading the briefs, listening to the argument, and crunching the numbers, we predicted a 7-2 vote in favor of the respondent, with Justice Breyer concurring. Happily, this proved to be the exact outcome. Because we understood the issues in the case and the substance of Justice Breyer’s comments and questions, we were confident that our model’s reading of his vote was misleading in this particular instance. The model was nonetheless extremely useful in helping us to read the intentions of Justices Alito and Kennedy.

Patterns are not rules, and so even a highly accurate metric will not correctly predict every judicial vote in every case. Hence, in Oil States, we adjusted the empirical prediction in accordance with what we heard in Justice Breyer’s tone. But by analyzing trends and patterns, we are able to go beyond impressionistic accounts in predicting case outcomes. The proof will be in the pudding, so check in here for our forecasts, and check back to see how they line up with the ultimate case outcomes.