Oh, the incentives…


November 5, 2019 | General | 1 Comment

For scientists to continue to be scientists, they must survive in the environment in which scientists live. Understandably, most scientists prefer to try to thrive in that environment, not just survive. The environment is a social system, constructed by and for humans just like all social structures. Yes, it is made up of scientists who strive to seek “truths” of the universe, but it is no less human and with no fewer faults than other social systems. In fact, maybe it is destined to have more faults for its difficulty in acknowledging its connections to humanity — because just acknowledging that connection seems to imply weaknesses and biases.

Scientists pride themselves on their deep commitment to “objectively” seeking knowledge in ways that are as unbiased and as honest as possible. This is the case, at least, until the focus is on the scientist’s career and not their science. As much as we would like to believe there is a healthy connection between our measurements of a scientist’s success and the quality of their science, it’s no secret that the incentives built into the social system do not always align with quality. I sense cringes from scientists, but I only mean to point out that scientists are human, their social systems are human, and the mistakes and decisions they make are unmistakably human. I consider myself a scientist and am proud to be one. I love science, I love research, and I believe in their value. Peering into the human sides of “doing science” is a positive thing, not an unfair criticism of science or scientists! It is a fascinating part of the process that we shouldn’t ignore if we are truly committed to doing the best science possible.

Well, that was a much longer introduction than I had planned. My mind gets pulled in so many directions when I start down this road — the reason it has taken me years to know where to start writing! I’m now going to drag myself back to where I was heading when I sat down to write this — incentives. Incentives for scientists — and a glimpse from the view of a collaborating statistician.

The social system in which scientists live is based on one main currency — publications. The system around how and what to publish is complex, often discipline specific, and scientists must work hard to understand it and navigate it effectively to survive. Much has been written in the last couple of decades identifying issues with the system — and I think there is a general consensus among scientists that quantity of publications does not imply quality of a scientist’s work. I also think there is general consensus that this fact is usually ignored in the face of promotion (both formal and informal). Scientists measured as successful by quantity-based metrics have figured out how to thrive within the current incentive system — possibly while doing their best possible science, but possibly just by figuring out how to effectively accumulate the prizes.

Let’s construct a quick hypothetical scenario. Suppose you are a statistician (formally trained and with years of experience) collaborating with other researchers on a project. You are not only a statistician, you are a scientist, evaluated in the same way as your collaborators, but your work must span discipline boundaries and the shifts in the social systems that come with that. You want to be proud of your work, comfortable that it will be judged as high quality, or at least reasonable, by those who read it (maybe even for an external review of your tenure case!). Hopefully I’m not stretching anyone’s imagination too far yet.

Now, suppose that you have worked to develop and fully justify a reasonable approach to the design, analysis, or interpretation. You present the approach to your collaborators and it is applauded… but then quickly deemed inadequate relative to unwritten rules of the publication system in their discipline. Your collaborators agree that the approach you have suggested is more reasonable and better justified than the approach they want to go with — but theirs is believed to have a better chance of earning a publication carrot. On the level that should matter most, you are all in agreement! But, they think it is too boring or uncommon in their discipline and are unwilling to potentially dampen their careers by risking no publication. What to do?

You go back and forth, you promise to fully justify your reasons for the approach, respond to reviewer comments, etc. But, they believe they are correct in their assessment (and they very well might be!) — and in the end, the social system in which they must operate wins out. You are then put in an awkward (and arguably ethical) dilemma: Do you remove yourself from the publication and get no credit toward your career, or do you keep your name on the paper and get a carrot for your career for something you might not be proud of? The ultimate decision would be incredibly context dependent and person dependent, and I do not mean to judge that here. There is a lot of extra baggage (emotional, professional, and ethical) entangled with this scenario and everything that it leads to — and I am trying to stay out of that today.

Warning. Here is where my experiences lead me to a cynical view. In my almost 20 years as a collaborating statistician, I think I removed my name from more papers than I left it on (in retrospect, I wish I had kept a careful count). My very first collaboration ended in this way. I was a first-year graduate student in Statistics and the collaborator was a graduate student in the veterinary school. I assumed at the time it was an unfortunate first experience that would be rare in my bright future as a statistician. Instead, nearly 20 years later, my last few collaborations also ended in this way. It is not rare and does not seem to be going away. Statisticians vent to each other about it, but I haven’t seen it talked about as openly as I think it should be. It is a tangible example of our incentive system operating against the flow of doing the best science we can do.

On that note, I was relieved to see the last two paragraphs of this Nov 5th 2019 post of Andrew Gelman’s Statistical Modeling, Causal Inference, and Social Science blog. While Andrew’s tone sounds a little surprised, I felt no surprise at all when reading it. Maybe Andrew doesn’t have to deal with it on a daily basis because his name on a publication is able to outweigh any perceptions that the approach isn’t hip enough. This adds an interesting perspective I haven’t thought a lot about. For me (a less than famous statistician), it felt unavoidable and became a huge force that pushed me hard — away from academia and its social incentive systems.

Andrew Gelman’s blog post from November 5, 2019

A final thought. The irony of my current position — trying to make a go at writing for a living — isn’t lost on me. I’m now more of a slave to publication than ever — but hopefully I’ll be able to publish honest material that I’m proud of.

About Author

MD Higgs

Megan Dailey Higgs is a statistician who loves to think and write about the use of statistical inference, reasoning, and methods in scientific research - among other things. She believes we should spend more time critically thinking about the human practice of "doing science" -- and specifically the past, present, and future roles of Statistics. She has a PhD in Statistics and has worked as a tenured professor, an environmental statistician, director of an academic statistical consulting program, and now works independently on a variety of different types of projects since founding Critical Inference LLC.

1 Comment
  1. Kathi Irvine

    Thanks for daylighting an issue for us all!
