A good read and a few meta-research thoughts


I have been keeping at least one trade science book on my nightstand — trying to learn from how others have gone after big topics in science or, more interestingly, in how we do science. I ordered Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions on a whim. It’s written by Richard Harris, the NPR science correspondent, and was published by Basic Books in 2017. For some reason, I had low expectations going into it — but in retrospect I don’t think they were deserved. I think they mainly came from what I perceived to be an overly dramatic title. I like the play on words with rigor mortis, but somehow the whole thing has the feel of a tabloid headline to me. But, I get it — I suspect it did its job attracting people who might not have given the book a second look if it had a less dramatic title. So it goes, whether I like it or not.

It didn’t take many pages for me to get over my unfair title-based expectations. Harris packs a lot of information, opinions, and arguments into an accessible, fairly short, and very easy-to-read book. It’s really pretty impressive. What did I like most about it? Probably its effectiveness at giving the reader a bird’s-eye view of different problems affecting biomedical research and how the negative effects of weak links can accumulate unseen. It makes the reader acutely aware that an individual researcher in a particular field, or a technician carrying out a very specific and seemingly minor task, rarely — if ever — gets the chance to see and contemplate how their decisions, and potential mistakes, may get magnified when considered over the whole process of carrying out the research and translating results to practice.

I don’t intend this as a full-on book review. I just felt like sharing — maybe because it’s well written and not expensive. It’s a very low-cost investment for the reader in terms of both time and money. Why not?

I’ll end with one paragraph from the last chapter — mostly because it fed into something I was already thinking a lot about. I believe we should try harder to recognize, understand, and acknowledge the social and human sides of doing science — but I am also acutely aware of the huge challenges that make rigorous work in this area difficult. I don’t think many would argue against the idea that doing science is largely a social enterprise within a social system. Yet it feels like we are living in a state of denial over this, usually stuck in the mentality that things like objectivity and unbiasedness are the real deal — even in social science research. We’re not doing ourselves any favors by pretending that when we’re doing science, we’re able to rise above all the human faults that afflict us in our non-science-related lives. It’s like we need therapists specifically for scientists and their work! Hmmm… maybe I’ve finally hit on my career calling! Though I question my qualifications on multiple levels.

And finally — here’s the paragraph:

Scientists make judgments all the time, not only about their own work but about the papers they read. Kimmelman hopes that these judgments can be quantified and reported as a matter of course. With this strategy, Kimmelman is trying to take advantage of human abilities that are not conveyed in the dry analysis of journal articles. It’s “getting to what’s going on in the heads of people,” he told me. “That’s not only one of the missing pieces of the puzzle here, but I think it’s a really, really critical issue.”

Page 234 from Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions by Richard Harris (2017, Basic Books)

I haven’t yet found the time to dig into Jonathan Kimmelman’s work, but his university webpage is here, along with one for STREAM (Studies of Translation, Ethics, and Medicine). Overall, the work related to “translation, ethics, and medicine” looks fascinating and important to me. It’s something I could easily get very excited about, but of course I have some reservations too.

I’m not super optimistic about being able to quantify and report most judgments (as suggested in the quote). That idea feeds into my general worry that “meta-research” work has clear potential to fall into exactly the same traps as the work it is trying to assess. It is social science research carried out largely within the same system and framework in which other social science research is done. It’s one of those never-ending mirror situations — if we study how to do research, then we should probably study how to do research on how to do research, and so on. I’m not arguing this makes the endeavor not worthwhile, just that it has complicated layers that should be acknowledged. It reminds me of a more tangible analogy in data analysis — the never-ending conundrum created by checking the assumptions of a test using another test with its own assumptions that should be checked, and so on (a sketch of what I mean follows this paragraph). Where does it ever stop? It doesn’t mean the original assumptions aren’t useful or reasonable to appeal to — it just means we need to be careful and do the hard work of justifying them more qualitatively (not shortcutting the hard work with another statistical test with its own assumptions). This all leads me to the feeling that very few things worth doing are simple and easy.
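For anyone who likes a concrete picture, here’s a minimal sketch of that regress in Python. The particular tests (a two-sample t-test with a Shapiro–Wilk normality check in front of it) and the 0.05 cutoff are just illustrative choices on my part, nothing prescribed in the book:

```python
# A toy illustration of the assumption-checking regress, using scipy.
# The tests and the 0.05 cutoff are illustrative assumptions, not a recipe.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.5, scale=1.0, size=30)

# Step 1: "check" the t-test's normality assumption with Shapiro-Wilk.
# But Shapiro-Wilk itself assumes independent, identically distributed
# observations -- an assumption we are not checking. That's the regress.
_, p_a = stats.shapiro(group_a)
_, p_b = stats.shapiro(group_b)

if min(p_a, p_b) > 0.05:
    # Step 2: run the t-test. Its equal-variance assumption could itself
    # be "checked" with Levene's test, whose assumptions could be... etc.
    t_stat, p_val = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
else:
    # What to do instead rests on judgment anyway -- the point all along.
    print("Normality 'rejected' -- the next step is a judgment call.")
```

Of course, the sketch itself quietly leans on judgment calls (which tests, which cutoff), which is the mirror situation all over again.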

About the Author


MD Higgs

Megan Dailey Higgs is a statistician who loves to think and write about the use of statistical inference, reasoning, and methods in scientific research, among other things. She believes we should spend more time critically thinking about the human practice of "doing science," and specifically about the past, present, and future roles of Statistics. She has a PhD in Statistics and has worked as a tenured professor, an environmental statistician, and the director of an academic statistical consulting program; since founding Critical Inference LLC, she works independently on a variety of projects.
