Lessons For Litigators: What Trump’s Win Tells Us About Litigation Research
The recent election has brought the subject of confirmation bias and the hazards of interpreting social science research to the fore. Why were the predictions so off-base? This topic is highly relevant to lawyers and clients who hire consultants to conduct and analyze litigation research, as litigation research relies on some of the same techniques that political polling and analysis do – and is subject to the same pitfalls and hazards.
In my assessment of the polling data, as well as that of political analysts who reviewed it in far more depth than I did, the polling data were actually on the mark – within the margin of error, which is something most people who aren’t social scientists tend to ignore. I know from our research experience that the hardest group to get to participate in research projects, whether a telephone survey or a mock trial, is white, high-school-educated men. It’s therefore likely that most of the polls somewhat undersampled this group, something that proved highly significant in this particular election. It also may be that some people were reluctant to publicly voice support for Trump. That’s why a margin of error exists: to account for the ways in which the sample fails to accurately match the population you are trying to model.
The polls accurately showed that it was a very tight race. The polls performed as well as they did in predicting the 2012 and 2008 elections – within that pesky margin of error.
Fox News’ poll showed Clinton ahead by 3 points in the popular vote, one indication that polling numbers were not distorted by the political orientation of the outlet which sponsored them.
Where confirmation bias took over was in the interpretation by Hillary Clinton and her campaign organization, as well as by many media pundits prognosticating on the election outcome. Anyone looking at the polling data objectively could see that it was going to be a tight race, but they over-interpreted the available statistical data in the direction that made them feel good and confirmed their own beliefs about what the best outcome would be.
If journalists and pundits had spent less time discussing the polls and more time going out and talking to actual voters, we might have had more informative and accurate coverage of what this election was actually about and where it might end up. Filmmaker Michael Moore, who did just that, was one of the few people on the left who correctly predicted Trump’s victory. My experience and my bias as an anthropologist is that you always get the best data by going to the horse’s mouth, which is why I believe so strongly that mock trial and focus group research is the best investment a client can make when facing high stakes litigation.
Over my many years as a social science researcher and consultant, I have seen time and again how over-reliance on big ideas and numbers, without good grounding in the social and cultural realities that those numbers purport to describe, leads to big errors and sometimes disastrous decisions. (That’s why over-reliance on “big data” can also cause big mistakes.)
If you have worked with me before, you’ve heard me say that people are complex and have cross-cutting and often contradictory attitudes and opinions. That’s relevant because numbers can measure only what is on the surface at the moment. One of the keys to trial success, therefore, is understanding the hidden stories that are circulating in your population that are relevant to your case. Understanding that, you can then craft your trial presentation to activate the existing stories that support your case. You can elicit this only if your consultant advises you on themes and presentation before your mock and knows the right questions to ask, in the right way. It’s harder than it seems.
While the election polls may have been accurate, there is no question that there is a lot of poorly designed litigation research, both quantitative and qualitative. Over-interpretation of the outcome of a poorly designed study or a very small sample can lead clients to be falsely overconfident about their case, and that’s something you must vigilantly guard against.
The persistent dangers of confirmation bias are why, in our reports, we always focus on discussing the key problems of your case – not your strong points or successes (although we incorporate observations about these as well). As an advocate, you must believe in your case. As a representative of your law practice, you must also convey confidence to your client. To have the best chance of winning, though, you need to anticipate your opponent’s best case and assume that you will lose most or all of your key rulings. You need to see your case through objective eyes and through the eyes of your opponent. Our job, as consultants, is to be the Devil’s advocate, that voice that whispers in your ear about all the things that could go wrong – so that you can be better prepared at trial to make things go right and avoid the shock of a failed prediction.
Here’s a good online article that discusses the polls versus their interpretation in more detail: http://www.realclearpolitics.com/articles/2016/11/12/it_wasnt_the_polls_that_missed_it_was_the_pundits_132333.html