Too nice to means

September 12, 2019 | General

After writing my last post about the Average American, I finally started reading a book that’s been on my bedside table for a while: The End of Average by Todd Rose. I am only through the first chapter, but it seems promising for helping with the conversation, particularly for anyone who harbors a slight unwillingness to let go of the idea that averages are, almost always, the quantity of interest. It also reminded me to go back and re-read Simon Raper’s articles in the American Statistical Association’s (ASA’s) magazine Significance from December 2017.

In the last decade, I have had numerous discussions with researchers where I broach the subject — Are you sure a mean score, or a difference in mean scores, is really what you’re interested in? What would an average score really represent in this context? Such questions were typically met with blank and confused looks — like… Duh. What else is there? Why wouldn’t I want averages? Occasionally, the person would seem relieved and acknowledge that they really weren’t interested in means, but… that was usually accompanied by a perceived need to use the most common statistical techniques (e.g., t-tests, ANOVA, regression) — because their career depended on it. Our statistical approaches and education are so mean-focused, particularly for those who take only a one- or two-semester course, that the universe of possibilities just seems very small and all about averages and means. And perhaps scarier than the perceived size of the statistical methods universe is the fact that there is typically no explicit recognition that by relying on the common linear model methods you ARE implicitly making the assumption that what you care about is means/averages. In my opinion, this is something researchers should have to explicitly justify before basing inferences on estimated means and changes in means. Is it easy? No. Does the fact that it’s not easy mean (no pun intended) we just shouldn’t expect it? No.
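
To make that implicit assumption concrete, here is a minimal sketch in Python (not from the original post; the data and the two groups are invented for illustration, and numpy, scipy, and statsmodels are assumed to be available). A two-sample t-test and an OLS regression with a group indicator both estimate the difference in group means; quantile regression is one example of a tool with the same model form whose slope targets a different feature of the distribution.

```python
# Hypothetical data: the default tools target means.
# The t-test and the OLS slope both estimate the difference in group means;
# quantile regression estimates a difference in medians (or any quantile).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
control = rng.lognormal(mean=1.0, sigma=0.6, size=200)   # skewed outcomes
treated = rng.lognormal(mean=1.2, sigma=0.9, size=200)

# 1) Welch t-test: inference about the difference in means, nothing else.
t_stat, p_val = stats.ttest_ind(treated, control, equal_var=False)

# 2) OLS with a 0/1 group indicator: the slope *is* the difference in means.
y = np.concatenate([control, treated])
x = sm.add_constant(np.concatenate([np.zeros(200), np.ones(200)]))
ols_fit = sm.OLS(y, x).fit()

# 3) Quantile regression at q = 0.5: same form, but now a difference in medians.
median_fit = sm.QuantReg(y, x).fit(q=0.5)

print("difference in means (OLS slope):  ", round(ols_fit.params[1], 3))
print("difference in medians (QuantReg): ", round(median_fit.params[1], 3))
print("Welch t-test p-value:             ", round(p_val, 4))
```

With skewed data like these, the two estimated differences can tell noticeably different stories, which is exactly the point: choosing the default method is choosing means, whether or not that choice was made consciously.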

Can we stop being so nice about mean(s)? Means are incredibly nice mathematically — but how much should this dictate our reliance on them in practice? I am all for convenience and simplicity, if justified — but it should be justified for reasons other than convenience and simplicity. The mean is clearly a convenient and logical starting place for statistical inference from a mathematical perspective. Unfortunately, we don’t often get any further.
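
One small illustration of that mathematical niceness (my addition, with made-up numbers): the sample mean is the single number that minimizes the sum of squared deviations, which is exactly the criterion behind least squares, and therefore behind t-tests, ANOVA, and ordinary regression; the median plays the same role for absolute deviations.

```python
# Numeric sketch: the mean minimizes squared error, the median absolute error.
# The single large value pulls the squared-error minimizer (the mean) far from
# where most of the observations sit. Data are invented for illustration.
import numpy as np

x = np.array([1.0, 2.0, 2.5, 3.0, 50.0])
candidates = np.linspace(0, 60, 6001)          # grid of candidate summaries c

sq_loss  = [np.sum((x - c) ** 2)  for c in candidates]
abs_loss = [np.sum(np.abs(x - c)) for c in candidates]

print("minimizer of squared error :", candidates[np.argmin(sq_loss)])   # ~ mean(x) = 11.7
print("minimizer of absolute error:", candidates[np.argmin(abs_loss)])  # ~ median(x) = 2.5
```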

A default mode of operation is to compute averages first — before looking at the individual data. In fact, a huge portion of my time as a statistician has been convincing people to think first and plot data first before aggregating (including averages), and I know I am not alone in this. Over the last decade, I don’t think resistance to abandoning averages has wained — in fact, it may have increased (purely personal speculation here) — possible because more people can push the buttons to carry out statistical methods based on averages and each year adds to that inherently cultural expectation. There is no cultural expectation to question it or have to justify it. To be fair, on a rare occasion, people were thankful that they got a statistician to give them permission to not aggregate and rely on averages.
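
Here is a rough sketch of what “plot the data first” can reveal (my own example, with invented data and matplotlib assumed): three groups with essentially the same mean but very different distributions, differences that an averages-only summary would hide.

```python
# Plot every observation per group before collapsing to group means.
# Group B has the same mean as A but far more spread; group C has the same
# mean but is really two distinct subgroups. Data are invented.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
groups = ["A", "B", "C"]
data = {
    "A": rng.normal(10, 1, 30),
    "B": rng.normal(10, 4, 30),
    "C": np.concatenate([rng.normal(7, 1, 15), rng.normal(13, 1, 15)]),
}

fig, ax = plt.subplots(figsize=(6, 4))
for i, g in enumerate(groups):
    y = data[g]
    jitter = rng.uniform(-0.15, 0.15, size=y.size)   # spread points horizontally
    ax.plot(np.full(y.size, i) + jitter, y, "o", alpha=0.4)
    ax.plot(i, y.mean(), "k_", markersize=25)        # the group mean, for comparison

ax.set_xticks(range(len(groups)))
ax.set_xticklabels(groups)
ax.set_ylabel("outcome")
ax.set_title("Three groups with (nearly) identical means, very different data")
plt.tight_layout()
plt.show()
```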

These general points and concerns have been raised repeatedly by others, but I have to say — when you’re out there trying to be a statistician, it sure doesn’t feel like others have made the points before. The message isn’t spreading, or at least seeping in, as quickly as it needs to. As with nearly all the issues I will write about in this blog, the unfortunate reality is that change means more work and challenge — not less.

I end this post, though definitely not my last on the subject of means and averages, with a couple of quotes and a link to one of the pieces by Simon Raper (whose writing I always appreciate).

https://www.significancemagazine.com/science/571-an-average-understanding

Eventually, the angst felt by many intellectuals of the nineteenth century regarding probability and statistics gave way to agnosticism by the early twentieth. Probability became simply accepted as the logic of uncertainty, without worrying about what precisely the word really meant. As a result, few moderns recognize that the statistical “reality” applies to populations, but not necessarily to the individuals within those populations.

Weisberg, H. (2014). Willful Ignorance: The Mismeasure of Uncertainty

“Some people must be average, you might insist, as a simple statistical truism. This book will show you how even this seemingly self-evident assumption is deeply flawed and must be abandoned.”

Rose, T. (2016). The End of Average: Unlocking Our Potential By Embracing What Makes Us Different

