Reasonable reflections from jury duty
August 25, 2021 | General | 4 Comments
I was called for jury duty yesterday and spent about 5 hours experiencing the process. I was not ultimately selected for the jury – I wasn’t even close given where I sat in the long line (3rd from the end). I’m not sure if my position near the end was based on information in the questionnaires we submitted (mainly about employment and education) or just random. Regardless, I got to listen to the juror selection process (voir dire) – and had to be ready to address the same questions myself, should enough people have been excused for the process ever to reach me.
As someone who thinks a lot about uncertainty in science, and life, I found the questions posed by the attorneys, and the discussion around them, fascinating. There are a few things I want to remember and so decided to write about them briefly here.
p-values?!
I was expecting some discussion of making judgements in the presence of uncertainty, but was not expecting the word “p-value” to pop up in the couple hours of questioning I witnessed. More surprising than p-values coming up at all was that a wrong definition and interpretation of the p-value was provided by a potential juror (with a PhD in a natural science). I don’t really fault the scientist, but it was a very clear example of how confident people can be in their incorrect understandings of a concept – so confident that they might voluntarily repeat it several times in front of a relatively large audience under oath! They provided the court with the information that “a p-value gives the probability of a hypothesis being true.” I admit I squirmed a bit in my seat — but I wasn’t one of those being directly questioned and had to stay quiet. Plus, I couldn’t see how it would really affect anything related to the case or the selection of the jurors, as I suspect that any connection between p-values and the duties of a juror was lost on most, if not all, others in the room.
More interesting was the set of questions from the defense attorney that prompted the scientist to give the mini-pseudo-lecture on p-values. The attorney noted the PhD’s involvement in doing research and asked about uncertainty in conclusions and how that uncertainty is typically handled – and whether they ever know anything without a doubt. The researcher first described that they use modeling, and that they report uncertainty using statistical techniques – like confidence or credible intervals — giving a nice description of not relying solely on a point estimate. The attorney then asked how they test hypotheses – which I think he meant in a general sense, but it was immediately interpreted as statistical null hypothesis testing; the researcher responded by saying they don’t test hypotheses in practice, and that led to the topic of p-values. I guess my main point in telling this story is that it was an interesting example of someone trying to engage a scientist in a high-level discussion of making decisions in the face of uncertainty — and it quickly ending up in the weeds of null hypothesis tests.
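For the record: a p-value is the probability of seeing data at least as extreme as what was observed, *assuming the null hypothesis is true* – not the probability that the hypothesis is true. A small simulation (an illustration only; the coin-flip numbers are made up and have nothing to do with the case) makes the direction of the conditioning concrete:

```python
import random

random.seed(42)

# Made-up observation: 15 heads in 20 flips. Null hypothesis: the coin is fair.
observed_heads = 15
n_flips = 20

# A p-value conditions on the null being true: simulate many experiments
# *under the null* and count how often the result is at least as extreme
# as the one observed. Note the null is true in every simulated experiment,
# so this number cannot be "the probability the null is true."
n_sims = 100_000
extreme = sum(
    1 for _ in range(n_sims)
    if sum(random.random() < 0.5 for _ in range(n_flips)) >= observed_heads
)
p_value = extreme / n_sims
print(f"one-sided p-value ≈ {p_value:.3f}")  # exact binomial answer is ~0.021
```

Reading that ≈0.02 as “there is about a 2% chance the coin is fair” is exactly the transposition the juror made; the simulation only ever generated data from a fair coin, so the number by itself says nothing about how probable fairness is.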
Beyond reasonable doubt
This was all part of a larger period of questioning around the standard of proof to be used in the trial – centered around the concept of reasonable doubt. I’m a little embarrassed to say I think I’ve taken the phrase for granted and never thought through its implications to the extent I should have. It was fun to see the attorneys create this realization among most, if not all, the people there. What does it really mean to establish guilt beyond reasonable doubt? I certainly didn’t have an easy answer and neither did any of the potential jurors actually questioned. And, it got me thinking about the potential relevance of the term to a scientific context.
Given the vagueness of the phrase, it must be common practice for attorneys to start with a discussion about what the phrase “beyond reasonable doubt” means during voir dire. Interpretation of the phrase is clearly challenging and varies by state (to get a sense, do a web search for “what does ‘beyond a reasonable doubt’ mean?”)! The interpretation (I hesitate to call it a definition) given verbally to us was the following (or close to it):
Proof beyond a reasonable doubt is proof of such a convincing character that a reasonable person would rely and act upon it in the most important of his or her own affairs. Beyond a reasonable doubt does not mean beyond any doubt or beyond a shadow of a doubt.
From https://lawofselfdefense.com/jury-instruction/mt-1-104/
I wish it would have been provided visually as well, but as I sat there for two hours after hearing it once, I kept thinking about the implication that “reasonable” apparently refers not to the “doubt” itself, but to the person expressing the doubt. So, not only do we maybe need to assess what qualifies as “beyond” reasonable doubt, but we should invoke some assessment of what makes a person reasonable. It also seems to imply that “reasonable” people should have some very similar threshold for what constitutes “proof of such convincing character.” The interpretation thus brings in other vaguely defined terms, like “convincing character,” to give people a “common sense” feel for what the standard is after — since there really is no way to explicitly and clearly define it (or I’m sure it would have been done by now!). The attorneys also focused on making the point that the term does not mean “beyond any doubt” or “beyond the shadow of a doubt.” But there is no way to clearly draw the line, and the line must be expected to vary across individuals.
At one point, one of the attorneys even asked one of the potential jurors something like the following: “If you were in a position to uphold the standard of proof (given it was decided upon by courts), would you do so?” I admit, I was thankful to not be on the spot for that one. There are clearly problems with its vagueness and openness to different interpretations – but it’s not something that can be abandoned without an alternative that is deemed better. Does it result in mistakes being made? Sure. Can we come up with different wording for a standard that would lead to fewer mistakes in judgements? Probably, but it’s not so easy to come up with, implement, or measure and compare something like the number of mistakes without knowing the truth.
Reasonable?
The conversation made me reflect on how often I rely on the term “reasonable” when discussing the use of statistical inference in science. I have found it really useful, but maybe I have relied on it too much – without an adequate definition of what I mean by it. I remember being laughed at once at a conference when I suggested it as a way to talk about assumptions (rather than assumptions being “met”) – I think the person maybe thought I was joking and muttered something like “right – like we use with toddlers!” I remember being surprised by the comment – which is probably why I still remember it — and obviously my suggestion missed the mark of what I intended.
To me, there is an important difference between the questions “How reasonable are the assumptions?” and “How correct are the assumptions?” and “reasonable” captures something for me that’s difficult to capture with other words. But what do I really mean by it? I guess I connect it to sound reasoning or good judgement, but with an air of practicality. Is the reasoning good enough that others who are knowledgeable on the subject would make the same decision about its usefulness? I’m usually using it in the context of whether something seems good enough to be relied on, but not in the sense of being perfect or correct or without uncertainty. Does that really send us to the same place as burden of proof in Montana courts? Maybe it’s not that far off — in that it’s beyond any doubt that would be considered useful or that should be acted on?
I suppose I also implicitly invoke the “by a reasonable person” part of it. If the decision is one that is judged to be okay by many reasonable people (if given the justification) or represents a decision that would be made by many reasonable people, then that would lead me to saying the decision was reasonable. But, then I’m left with having to define who counts as a reasonable person. Ugh. It’s sure easy to end up in the downward spiral.
As is painfully clear in public discourse today, for most people, judging whether someone is “a reasonable person” is probably more a statement of how much the judge agrees with the views of the person whose reasonableness is being questioned. As far as I know, there isn’t some objective measure or threshold for “reasonable” that could stand up to judgement from individuals with diverse views and backgrounds.
Clearly I don’t have answers, but I will certainly stop and think more before I use the term reasonable and try to give it a more meaningful interpretation in context when I can. But, it also seems like it’s a word that crops up to capture something that is otherwise hard to put into words – making it inherently hard to drill down through.
For fun, I briefly looked through different definitions of “reasonable” in on-line dictionaries. They didn’t offer much more insight, but did open up additional cans of worms like “what is fair?” “what constitutes sound thinking?” “who counts as a rational or just person?” etc. Here are a few:
Cambridge English Dictionary: “based on or using good judgment, and therefore fair and practical.”
Vocabulary.com: “showing reason or sound judgment,” “marked by sound judgment,” “not excessive or extreme”
Some of the “Best 16 definitions of Reasonable” as given here:
- Governed by or being in accordance with reason or sound thinking.
- A standard for what is fair and appropriate under usual and ordinary circumstances; that which is according to reason; the way a rational and just person would have acted.
- Having the faculty of reason; endued with reason; rational.
- Just; fair; agreeable to reason.
- Not excessive or immoderate; within due limits; proper.
- Being within the bounds of common sense.
Finally, a query into the legal definition here brings me full circle by giving the following disclaimer: “The term reasonable is a generic and relative one and applies to that which is appropriate for a particular situation.” There you have it.
I guess you get to judge if this is a reasonable post by a reasonable person. I hope you are a reasonable person to provide a reasonable opinion.
4 Comments
Nathan A Schachtman
Dr Higgs,
I am just stumbling across your blog and website, and I read your account of jury selection with absolute fascination. I am a lawyer who has written some about statistics in the law, and I was astonished that p-values came up in the voir dire. Was the case a criminal or civil case? Can you share what was at issue? It seems that most judges and lawyers think that the p-value exists to permit them direct access to the probability of truth of ultimate facts in their cases. The transposition fallacy is everywhere, or so it seems.
Nathan Schachtman
MD Higgs
Thanks so much for the comment and making me re-read the post. I’m so glad I did write my thoughts down as I had forgotten many of the details (a good lesson in general). The case was a civil case that ended up being dismissed by the judge after the first day or two because of lack of a case (I was updated by a friend who was selected as a juror, but I don’t know the details and am probably using incorrect terms here). The p-value discussion didn’t come up in the specific context of the case, but rather in response to questions from the attorneys aimed at getting potential jurors to think about how they make decisions in the face of uncertainty. The person with the PhD they questioned brought it up voluntarily and then came back to it multiple times, but the attorneys did not emphasize it. So – in this case, the misinterpretation came from the potential juror they were questioning and not the judge or lawyers.
Thanks for sharing your blog in an email — I have not had a chance to read much of it, but wanted to put it out there for others too, as I do generally find the use of statistical inference in the courts fascinating (and scary):
http://schachtmanlaw.com/blog/
And, I’m curious about your thoughts on the following Task Force Statement I discuss in this post:
https://critical-inference.com/thoughts-on-the-task-force-statement/
Nathan A Schachtman
I have blogged a bit, at the URL you gave in your reply to my comment, about the Task Force’s statement. The genesis of the Statement comes from the Wasserstein 2019 editorial, which called for “moving on” from tests of statistical significance. I for one did not begrudge Dr Wasserstein his opinion, but he signed it as Executive Director of the ASA, which led many people – including the Editor of the ASA publication Significance – to deem it ASA policy. It was not, and the continuing misunderstanding and misrepresentation of the editorial led the then ASA President to task the Task Force with a response. Now, the Task Force’s Statement is also not an ASA policy statement, but in my view it undoes the harm done by the Wasserstein 2019 editorial. The ASA 2016 p-value statement articulated 6 principles, which I thought were unexceptional, but even those have been misrepresented in legal proceedings.
My blog, as does Professor Deborah Mayo’s blog, has several posts about the ASA p-value statement, and then the controversy over Wasserstein’s editorial.
Nathan Schachtman
MD Higgs
Thanks for this!