Fact detector? It is not.

January 2, 2020 | General | 5 Comments

I continue to search for more effective, and simpler, ways to convey my views on misunderstandings and misuses of Statistics to others — including scientists. At the heart of my discomfort are not the limitations of statistical inference (I still find it fascinating and useful), but that we use it as if it provides something it was never intended to provide, and simply can’t provide. I have said in the past that it is not an “answer finder” and should not be used as one — but I don’t have a lot of evidence from the facial expressions and subsequent work of students and researchers that the idea hits home. I want to try again here, with slightly different wording spurred by reading Ignorance: How it Drives Science by Stuart Firestein (2012, Oxford University Press) — which is a quick read and a great way for anyone interested in science to start off 2020! My opinions come mainly from my experiences in science over the last 25 or so years, and I fully recognize that they do not apply broadly across all scientists or scientific disciplines.

Let’s assume that most people see science as the process of collecting more and more facts (where facts are taken as evidence of knowledge). I find this a realistic assumption because of how science is typically presented, taught, and discussed. I wholeheartedly believe science is more about understanding ignorance than collecting facts, but even so I have found myself accidentally reinforcing this view with my own kids at times. Firestein’s excellent discussion of the role of ignorance in science seems, at least to me, to often imply that scientists themselves understand the complexities and creative nature of the scientific process, but just haven’t done a good job conveying that to others. Based on my experiences, I think this is true to some degree, but I also see too much emphasis from scientists on fact-finding and fact-reporting, and adherence to expectations for this in the dissemination of work. This is where my views on the use of Statistics come in. I see statistical methods often used in a way that reinforces, and even further contributes to, a fact-centered way of operating in science. The common (and wrong) explanation for what some statistical methods provide is that of a litmus test for whether observed effects “are real or not.” What does “real” mean? I can’t help but interpret it as a reflection of the view of Statistics as a convenient fact-finding machine.

Take this quote from Firestein’s book:

How do scientists even know for sure when they know something? When is something known to their satisfaction? When is the fact final?

Page 21, Stuart Firestein, Ignorance: How it Drives Science

Firestein writes to help non-scientists or future scientists understand the reality of the scientific process, but it is worth thinking more about how current scientists can benefit from reflecting on these ideas. And, of course, I want to think about the role of common uses of statistical methods in the process. (Note – I purposefully choose between the phrases statistical methods and statistical inference, because I believe statistical methods can be (and are) used with a disregard for the inference part of the process).

I think the three questions posed by Firestein are uncomfortable for scientists, and they should be, because they are actually impossible to answer. But we’ve created a scientific culture and incentive system that expects scientists to pretend as if they have contributed a new fact and that it is close enough to “real” or “final” to be published in a scientific journal. In this system, who is to judge this? How much space in a presentation or paper should be dedicated to this onerous task? This is why, and where, I believe we have come to rely on statistical methods to provide a cheap shortcut — a shortcut they aren’t designed for.

Statistical methods are not fact detectors.

Counter to their foundations in concepts like variation, uncertainty, and probability, statistical methods have been dressed up as meters to help scientists pretend to answer the impossible-to-answer questions. Instead of leaving scientists to struggle with the questions, or to admit they are unanswerable (or the wrong questions to begin with), much effort goes into the creation of cheap tests and criteria as a shortcut to the wrong destination.

Statistical inferences are, and should be, complex and uncomfortable, not simple and comforting. Statistical inference is about inferring based on combining data and probability models, not about judging whether an experimental or observational result should be taken as a fact. There is no “determining” and no “answers” and no distinguishing “real” from “not real” — even though this language is common in scientific reporting. It is not helping science to keep pretending as if Statistics can detect facts.

You may think I am exaggerating the problem, but I encourage you to read reports of research with this in mind and judge for yourself. How common is it for scientists and other reporters of scientific work to talk and write about statistical methods as if they are fact detectors? The problem gets more complicated if we start to consider whether the authors actually believe statistical methods are capable of detecting facts, or if they are just following conventions and expectations for their own survival. This complication is beyond this post, because either way, awareness that it is a problem has to be the starting point.

What can we do? Well, here’s an easy question to ask when reviewing your own work or the work of others: Are statistical methods being used or presented as fact detectors? If the answer leans at all toward ‘yes,’ then it’s time to back up and think about the shortcut being taken, as well as the presumed destination. What can be added to more honestly acknowledge uncertainty and assumptions in any reported inferences? Let us try hard to avoid using statistical inference as if it presents a shortcut to facts.

About Author

MD Higgs

Megan Dailey Higgs is a statistician who loves to think and write about the use of statistical inference, reasoning, and methods in scientific research - among other things. She believes we should spend more time critically thinking about the human practice of "doing science" -- and specifically the past, present, and future roles of Statistics. She has a PhD in Statistics and has worked as a tenured professor, an environmental statistician, and director of an academic statistical consulting program, and now works independently on a variety of projects since founding Critical Inference LLC.

5 Comments
  1. George Savva

    Thanks for another great post. This has relevance when thinking about ‘replication’ and why it comes as a surprise to many that two identically designed studies can give different results, without one or the other being faulty in some way. A barrier to thinking grey is writing up in the absence of facts! We are too used to reading and writing ‘in this paper we show that…’. How do you think we should talk about our science if it isn’t about fact finding?

    • MD Higgs

      Thanks for pointing out the relevance of the fact-detecting attitude to feelings of surprise conveyed when solid replications of the same design lead to results that are interpreted as different. The scenario creates a very uncomfortable situation for those who view the process as fact-detecting, so it is easier to cry that something is wrong (e.g., “Oh no! Failure to replicate!!”) than use it as motivation for thinking about limitations of our methods.

      And, as you point out, it’s not just limitations of our methods, it’s also a lack of examples and expectations for more honest language acknowledging those limitations. Your question is important and I think about it a lot — and I won’t pretend to do it justice here, but here are a few thoughts meant to be as practical as I can get for now.

      In teaching, and in collaborating with researchers on papers, I have had some luck cutting out words that imply fact-detecting so we are forced to look for other, more appropriate words and phrasing. In my teaching, I started outlawing particular words from reports, or from being used in class at all, and found that simple act forced students to bump up against the wall of struggling with why I was outlawing them and what they could replace them with. The outlaw strategy may seem rather extreme, but I adopted it after semesters of trying recommendations and finding they were, for the most part, completely ignored, leaving me to circle the words over and over again on their work and ask for justification. Here are the main ones I found helpful to call attention to: “determine”, “answer”, “show”, “significant”, and “whether or not.”

      Along with this, we need to double-check that we are describing estimates explicitly as estimates and not implying they are facts (even if inadvertently). Including the word “estimate” (or some version of it) can seem boring and tedious, but I believe it is very important. Leaving it out makes a sentence read as if a fact is being reported, often out to a laughable number of digits — and this is often the case in media reports. The total number of words needed does increase, but I hope it is worth it for this small improvement.

      In general, I think we need to talk about our science as investigation (which includes acknowledging healthy ignorance and curiosity), rather than as if we are simply detecting and reporting facts. And, we need to stop over-stating, over-dramatizing, and over-selling. We are questioning and investigating, not “determining things” and “finding answers,” particularly in single studies. This shift in attitude requires a shift in language. It would be nice to shift the attitude and let the language follow, but in my experience, talking about the language first can get people to start engaging with the bigger problems behind the language.

  2. Martha K. Smith

    Good post. Point well put.

  3. We need more ignorance – Critical Inference

    […] Fact detector? It is not. […]

  4. Putting Megan Higgs and Thomas Basbøll in the room together « Statistical Modeling, Causal Inference, and Social Science

    […] Higgs, Fact detector? It is not.: […]
