Declining co-authorship — a theme of my career


My first real gig as a collaborative statistician came as part of a required course in my Statistics master’s program — “Statistical Consulting Seminar.” Most programs have one (if they don’t, they should) — it’s the residency or internship part of becoming a practicing statistician. I went into it with some nerves and some excitement. I wanted it to be a confidence-building success, but at the same time wanted to experience some tough aspects of the job while still surrounded by the support of peers and faculty advisors. I guess I got that, but I had no idea at the time that the dilemma I faced in that first experience would grow to feel like a theme underlying my work as a collaborative statistician. I also didn’t fully grasp the magnitude or the source of the underlying issues leading to the dilemma. Twenty years later, things are no better, and might even be worse — as pressure to publish and obtain grant funding continues to drive decisions in research. We need to recognize it and start talking about it — among statisticians and other scientists.

The start to a theme

The researcher (a.k.a. client) was an equine veterinary PhD student who had carried out a few experiments to investigate the effectiveness of a new treatment relative to the standard treatment (enough time has passed that many of the details are now fuzzy and it’s not my intent to use this post to dig back into the details). After the first meeting, I remember feeling very excited about what I could offer in terms of assistance with the project. I took the job seriously and spent many hours beyond those expected — making plots of the raw data, bringing in an “equivalence testing” approach where they had planned to use a typical null hypothesis testing approach, modeling dependence from repeat measures on the same horse, helping with interpretation, justification of models, etc. I’m sure further improvements could have been made, but I’m confident the inferences based on the approach I recommended were better justified than those from the approach they planned to use before reaching out for statistical assistance.

When it became clear I could offer a lot to the project, it was agreed that I would be a co-author on the resulting manuscripts. I used this as justification to myself to continue work on the project beyond the seminar and beyond my graduation — for no charge. I thought it would be a great start to my career as a statistician to have valuable pubs with my name on them. I definitely contributed enough work intellectually and in sweat to deserve co-authorship.

I don’t remember now exactly when the dilemma reared its head, but it had to be pretty close to manuscript submission time. There had been hints along the way that the student’s advisor was worried about trying to publish results from methods that weren’t the “typical” way of doing things in the discipline — or at least in the journal they wanted to be published in. I don’t recall ever meeting the advisor (though it’s possible I did) and I doubt direct interaction would have changed the outcome — realistically it would have been the opinion of a 20-something woman statistician-in-training against the opinion of a successful research veterinarian with probably as many years of experience in research as I had in life.

What I do remember is learning that the manuscripts would ultimately not include some (or most) of the major recommendations and justifications I had contributed. I may have learned of it first via email, but can still picture where I was sitting when we discussed it over the phone. My recollection is that the advisor decided it was safer (in terms of chances of getting it published) to go with a more common approach for that field, even if it was not as well justified from a statistical perspective. The student was left to communicate this to me — putting him in a very difficult spot. I had no weight relative to the advisor who had paid for and sponsored the research. I was shocked. I thought I had done my job well. I had provided a more defensible approach to improve inferences (even approved by my Statistics professors at the time) — and they were going to completely ignore it in favor of an approach simply because it had been used before? I don’t remember the decision being made based on the results (i.e., their approach ending with a better story for the treatment), as that would have set off a different level of alarm for me, but I also can’t guarantee that didn’t contribute to the decision.

Their resistance to me removing my name

My shock at their decision was then followed by an aftershock at the reaction to my immediate decision to remove myself as a co-author from the work. I assumed (wrongly) that removing my name was the next logical step and would be judged so by everyone involved. I was still new in the statistician role, but I had spent over two years doing research in another discipline before graduate school in Statistics. I felt pretty clear (maybe naively so) on what the prerequisites were for being included as a co-author and that being a co-author implied taking responsibility for the work reported. It seemed like a straightforward situation to me. I didn’t agree with the approach presented and therefore, even though I put a lot of work into the project, my name should not be on the paper.

In all honesty, the degree of resistance to my removing my name did make me temporarily second guess my decision, and I think this is a common and understandable reaction among early career statisticians experiencing this for the first time. Was I being unreasonable? Naive? Was I going to burn bridges and hurt my career? Was I just not aware of norms and expectations for co-authorship relative to statistical contributions?

I don’t remember finding a list of authorship criteria then, but my understanding was consistent with these four current criteria from the International Committee of Medical Journal Editors (ICMJE) [boldface emphasis is mine]:

  1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
  2. Drafting the work or revising it critically for important intellectual content; AND
  3. Final approval of the version to be published; AND
  4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

It didn’t take long to stop the second guessing. Despite the resistance, I knew I did not want my name on the papers. It felt wrong for me to “sign off” on work I did not agree with. Communicating this directly and respectfully was harder than I expected. To justify my decision, I focused on the argument that not removing my name could hurt my career. This was a legitimate worry. I was starting a career as a statistician and did not want my name associated with a paper using a statistical approach I did not agree with and that may be deemed inappropriate by other statisticians. I imagined that every researcher could put themselves in an analogous situation specific to their own position or discipline. Would a veterinary researcher be okay with having their name associated with a paper describing methods that would be judged harshly by other veterinarians? I doubt it — even if it would mean giving up a publication. This career-centric argument was the one I felt got traction — probably because it didn’t make the researcher feel so guilty. In some way, it was focused more on me than the general quality of their work. Though it was and is more complicated than that.

The gray, sticky, and ugly layers

Before going on, I think we need to pause for a reality check. I had hoped to write this first post to lay out the dilemma and raise awareness of this real and serious issue, without delving too deep into the ethical aspects surrounding it. I’m still trying for this, but have found it impossible to do completely. Writing about it has actually helped me process the ethical aspects and implications — and has helped me see (or admit) the problem’s deeper and uglier layers. I still want to put this post out there without adequately addressing those layers, but I at least want to acknowledge them.

I worry that readers will be quick to make their own judgements from the outside — as if the problem is neatly black or white. I do not believe it is productive to harshly judge the individuals on either side and I ask you to trust me that it doesn’t feel so black and white when you are living it. The problem is rooted in scientific culture — in its current incentive systems and norms. Unfortunately, a lot of the incentives and norms are integrally tied to statistical methods and their results. As crazy as that is, it is a reality that creates tough situations for researchers and collaborating statisticians on a daily basis. It creates tension between choice of statistical approach and taking risks for one’s career — and this percolates negatively through science. If there were such a thing as the correct approach in most cases, then this would be simpler, but choice of statistical approach is largely a judgement call. While there are some approaches most statisticians would agree are wrong for a given setting, there are many reasonable approaches (under different assumptions and justifications).

With that said, you may have read the beginning and jumped to the conclusion that I had an ethical responsibility to do more than just remove my name from the paper. But, in reality, the line is rarely that clear (unless there is clear and intentional research misconduct). In that first situation, and many situations after, I was not sure where the line was and I wasn’t about to make sweeping accusations. I believed my approach was better justified, but I didn’t feel so confident calling into question the research of established scientists. How could I effectively argue it was unethical to use an approach that the top journals in the field were expecting, and essentially requiring? What did I have to stand on? Who or what would back me up? The ethical dilemma to me at the time was very centered on my decision to be a co-author, not about a lack of integrity or research ethics of my collaborators in general. I believe that they believed their approach was also justified — Why would the journals recommend it and publish work based on it if it wasn’t? Should they change their approach because of recommendations from one stats master’s student? I hope you can see how sticky the situation actually is.

Complicated and serious messages

I naively assumed the researcher would be relieved to remove my name and get on with their submission of the manuscript. I didn’t expect resistance because I hadn’t thought through the discomfort and inconvenience it would bring up for them. The message my decision to remove my name sent was not simple. I now imagine it was heard as something like: “I don’t agree with your research” and “I did a lot of work to try to collaborate with you and now I’m not going to get any credit for it.” Both aspects of this message probably led to feelings of guilt and questioning. So of course it would have made him feel better if I had just agreed to keep my name on the paper.

Such resistance to a statistician removing their name is typical in these situations, for the reasons described. It often becomes a dance of trying to come up with reasons and ways to communicate those reasons that do not overly offend — particularly if the collaboration is integral to one’s future work. I am not condoning this dance, but just acknowledging its existence and the fact that many of us have participated in it. I think it’s worth trying to understand the messages sent by declining co-authorship to help us understand the psychological and social aspects of the problem. Here’s a summary of my thoughts about the situation I described:

  • Removing my name called into question the integrity of the research in a way that voicing my disagreement, but keeping my name on the paper, would not have. The plan to remove my name forced the researchers to deal with some discomfort and at least question things a little. However, they ultimately did not see the choice of approach as a problem in the way I did — because it was accepted in their field.
  • I put in a lot of time on the project for no pay. I was to be “paid” through co-authorship, which would presumably have helped my career and “paid off” in the end for me. This is a very common situation for statisticians to find themselves in. When it works, it can benefit the researcher, the statistician, and maybe science, but the more I think about this arrangement, the more I’m convinced that it can lead to serious ethical problems. If a collaboration ends in a situation similar to what I have described, then this agreement adds an ugly layer. The statistician feels robbed and understandably has to think harder about taking their name off and may settle for keeping their name on even when they feel uncomfortable with it. Making the problem worse, the researcher feels guilt at not “paying” the statistician and applies more pressure to have them stay on the paper. If the statistician is paid for their work and then the researcher chooses not to use it, then at least the situation is less sticky (though admittedly still difficult).
  • It is always unfortunate when a graduate student is caught between a statistician’s recommendations and an advisor’s decision. In my experience, this is not rare. The graduate student may completely agree with the statistician, but ultimately the advisor is paying for the work and calling the shots for publication. I think the feelings of guilt and discomfort placed on the student are clear here — and of course they will feel better about the situation if the statistician doesn’t remove their name from the paper.

It’s not about honest disagreements in approach

It’s important to note that in this case, and the many others that would follow in my career, there was no underlying disagreement with my professional recommendations. In fact, many times there was explicit acknowledgement that the approach I was recommending did appear more appropriate than the one the researcher wanted to go with. The tension came from the incredibly strong desire to use the same methods as they had used before and that had been associated with publications in the journal — a huge push to keep doing things the way they’d been doing them because it seemed less risky to stay on the paved path (even with its many potholes). The fact that research using a method was published seems to rubber stamp the method, even when a statistician shines a spotlight on its limitations.

There are many layers to this that can be pulled back and examined and different ways this can all play out. A common theme, however, is that the choice of methods, or the decision to oversell results, is based on maximizing the perceived probability of getting published, even if it means ignoring professional advice from statisticians relative to methods and inferences. Depending on context, this can mean doing what’s usually done (as in my example), using overly sophisticated and cool sounding methods (following fads), choosing methods that tell a more attractive story, presenting results with misleading language meant to sell the research, etc. I know this is a strong statement, but I think most statisticians have witnessed it, at least once in their careers. The extent to which researchers realize they are doing it varies — and I think it’s best to give individuals the benefit of the doubt and continue to raise issues with the strong current that pushes them in that direction. However, when a statistician says they do not feel comfortable having their name on the paper despite major contributions to the project, it should motivate more serious conversation.

So much focus on careers – at the expense of the science

Protecting one’s career is a powerful motivation — not just for pride, but for financial security and survival. It’s not surprising that it carries an enormous amount of weight when decisions are made in research (whether we want to admit it or not). Incentive systems matter and they affect the quality of research being done by infiltrating many seemingly small decisions along the way (again, whether we want to admit it or not). As I already alluded to above, it leads to the ugly and sticky sides of the declining co-authorship dilemma.

There are potential negative and positive effects for both the statistician and other researchers when faced with the decision of whether the statistician should decline co-authorship. Individuals weigh risks to their careers in the process of making decisions.

Over the last 20 years, I have seen little evidence that leaving my name on papers I deemed questionable would have actually harmed my career (though I still believe it should have). I know of a few PhD statisticians who seem to happily accept co-authorship on any paper they are invited to, or at least have a fairly low bar for how much they need to contribute or agree with. I have seen no evidence that it hurts them — only that it helps them by growing their list of publications. Career stability and success are so dependent on “objective metrics” based on counts of publications and length of CV. Plus, researchers love collaborating with statisticians who don’t raise a fuss, so those who aren’t likely to decline co-authorship get more opportunities for authorship in the future.

Don’t other statisticians read the papers and raise red flags? I haven’t really seen it happening — and again there is the complication of many reasonable approaches to a problem and many differences in opinion, unless something is blatantly wrong. It’s impossible to know what went on behind the scenes and there just isn’t enough time for an external statistician to critically evaluate minor non-Statistics papers when statisticians go up for promotions or tenure. Statisticians feel they deserve credit toward their careers for work done (even if not well represented in the ultimate publication) and often publications are the only currency to make that happen. Other researchers generally appreciate working with someone who is playing the same career-incentive-system game. It feels like a win-win, and it is a win-win if the primary goal is to support careers.

I don’t see it as a win for science though. People try to survive by succeeding at their jobs and to succeed, they have to play according to the incentive systems in place (or take huge risks by refusing to play the game or even trying to change it). Unfortunately, incentive systems for scientists often do not align with (or at least promote) incentives for doing the best science we are capable of. I think there is broad, if not universal, agreement that the goal of doing science is not to promote and protect the careers of scientists. Yet, actions tell a different story. And the “statistician declining co-authorship” scenario is a concrete example that exposes a lot of layers if we are willing to try to see them.

Interactions between researchers and statisticians can demonstrate the power of individual survival (career success) over research and scientific integrity. I included this direct quote from an email to me in an earlier blog post, but will repeat it again here because it is so relevant: “I know you disagree, but I’m going to stick with my bad science power analysis for this proposal — it’s what the NIH program officer I’ve been talking with told me to do.” This is the most explicit example I have to share, but the message is not rare. Most people just dance around the issue and do it in verbal discussions rather than boldly throwing it out there in an email. The point is — researchers make career self-preservation and self-promotion decisions and they are not seeing the potential ethical issues associated with the choices (or I don’t think it would be boldly stated in an email). They are constantly weighing the risks to their careers of stepping outside scientific norms — and those with the most power in the current system are those who generally navigated those risks successfully, so the cycle continues. Unfortunately, statisticians may be the ones unintentionally recommending risky behavior (career-wise) when their professional opinions conflict with the discipline-specific expectations and methods perceived as less risky.

To be fair, we’ll never know how the decisions ultimately play out relative to good for science and society. Again, things are not as black and white as we would like to think they are. For example, take the above email scenario. Maybe if they had gone with my recommendations they wouldn’t have received the grant and the research would never have happened — and maybe that research will end up having overall benefits to society. We don’t know. But, I hope we can agree that working within a system that seems to value careers of scientists over the quality of the science is a serious problem — for researchers, for collaborating statisticians, and for science. It is a problem with research integrity and ethics, even if in the moment it feels like a problem of survival.

Here’s another anecdote. I recently heard from a statistician who removed his name from a manuscript after his original work was replaced with misleading displays of results that were more story worthy. He followed what he felt was his professional obligation by removing his name from the manuscript and pointing out the reasons the new presentation of results was misleading. In response, he was reprimanded by a supervisor — because he did not place enough value on protecting and nurturing the career of the researcher who made the decision to go with the misleading displays (presumably to boost an early career with a tastier and more consumable story). This story sends clear messages to all involved that careers of researchers are valued over scientific integrity — and over the professional integrity of statisticians.

While I never experienced a reprimand from a supervisor, I certainly experienced more indirect, and sometimes passive aggressive, comments. The message was clearly conveyed to me that I was not being helpful in the way they wanted me to be helpful. Didn’t I understand they had to operate within the current rules of the game? Didn’t I understand how to succeed as a scientist? They usually justified decisions to ignore my recommendations under the pretense that I wasn’t embedded within their discipline and just didn’t have enough of an understanding of “how research was done” in their discipline. Within-discipline reviewers and editors hold the power over careers.

Guidelines for ethical statistical practice

The American Statistical Association’s (ASA’s) Committee on Professional Ethics created a document (approved by the ASA’s board of directors) describing the ethical obligations of a practicing statistician. The document is titled Ethical Guidelines for Statistical Practice, though I think it should be Guidelines for Ethical Statistical Practice. There are huge challenges with drafting such a document and I think overall it’s a thoughtful and useful collection of guidelines. I have found it very useful to discuss with Statistics students and have also shared it with collaborators who seem to have a hard time grasping my ethical responsibilities as a statistician.

I encourage you to read (or re-read) the whole thing, but I will include the last section here because it is most relevant to the topic of this post. It is a section not for statisticians, but directed toward those working with statisticians. I think it’s fair to take this as evidence of the widespread nature of problems like those that come up around declining co-authorship. Unfortunately, I don’t think the guidelines are widely read, acknowledged, or followed by non-statisticians, and they often don’t get enough traction when preached by statisticians themselves. Regardless, here it is — to read and to share [boldface is mine]:

Responsibilities of Employers, Including Organizations, Individuals, Attorneys, or Other Clients Employing Statistical Practitioners

Those employing any person to analyze data are implicitly relying on the profession’s reputation for objectivity. However, this creates an obligation on the part of the employer to understand and respect statisticians’ obligation of objectivity.

  1. Recognize that the ethical guidelines exist and were instituted for the protection and support of the statistician and the consumer alike. 
  2. Maintain a working environment free from intimidation, including discrimination based on personal characteristics; bullying; coercion; unwelcome physical (including sexual) contact; and other forms of harassment.
  3. Recognize that valid findings result from competent work in a moral environment. Employers, funders, or those who commission statistical analysis have an obligation to rely on the expertise and judgment of qualified statisticians for any data analysis. This obligation may be especially relevant in analyses known or anticipated to have tangible physical, financial, or psychological effects.
  4. Recognize the results of valid statistical studies cannot be guaranteed to conform to the expectations or desires of those commissioning the study or the statistical practitioner(s).
  5. Recognize it is contrary to these guidelines to report or follow only those results that conform to expectations without explicitly acknowledging competing findings and the basis for choices regarding which results to report, use, and/or cite.
  6. Recognize the inclusion of statistical practitioners as authors or acknowledgement of their contributions to projects or publications requires their explicit permission because it implies endorsement of the work.
  7. Support sound statistical analysis and expose incompetent or corrupt statistical practice. 
  8. Strive to protect the professional freedom and responsibility of statistical practitioners who comply with these guidelines.

There is nothing about a responsibility of statisticians to compromise their professional integrity to help the careers of other scientists. Period. If you are a statistician, stand up for your professional opinions. You have an ethical responsibility to do so. Something is wrong if you are feeling pressured to “sign off” on work you don’t agree with. If you are someone working with statisticians, please respect their professional opinion and their responsibility to act according to the guidelines and their own internal ethics-meter (which may differ by individual). Statisticians should be able to do their jobs without being placed in ethical dilemmas daily — or forced to compromise their ethics to maintain productive professional relationships and collaborations. If they choose not to be a co-author on your work, respect that decision — and reflect hard on why they are making that decision.

Individual differences in ethics-meters and gray area

There are some actions we can all agree are professionally unethical (like manufacturing data or knowingly presenting results that are misleading). But like it or not, there is a lot of gray area within the practice of using statistical methods to make inferences. What feels inappropriate, under-justified, or even unethical to me because of my understandings, experiences, and philosophies, may not feel the same to a colleague of mine. This is something I have spent a lot of time agonizing over and trying to come to terms with.

Things are rarely black and white, as much as we would like them to be so. I am not so sure of the superiority of my gray-area feelings over those of others that I expect others to move in my direction, but I also should not be expected to move in their direction. Fear of appearing overly critical and judgmental kept me more silent than it should have for many years. I fought to turn down the volume on my own ethics/integrity meter because I was constantly sent the message by others that mine was too sensitive.

I think (hope) I have finally given in to accepting the volume of my meter for what it is and have changed my career to adapt to it, rather than trying to adapt to pressure to help the careers of others in ways I don’t agree with. For me, this took leaving academia and an industry position — giving up a lot of financial security. I certainly feel lighter about the present and the future, but I do still have to contend with decisions I made in the past.

Compromises

Have I compromised? Yes. Is my name on papers that I would rather it not be? Yes. During my career, I often felt cornered and pressured to be on publications or do work in a particular way I didn’t necessarily agree with — for my own career and the careers of others. I worked hard to make sure I contributed positively and openly raised any concerns I had. I made sure papers reflected some intellectual work that I was proud of, even if there were still some parts that were hard for me to swallow. And, in my collaborative work, I still declined authorship on more papers than I accepted authorship on. Some of those I declined were because I honestly did not think my level of contribution was sufficient to warrant co-authorship (another layer I didn’t get to in this post). My publication record does not represent the depth or breadth of my work with other researchers. There is probably a stronger message in my anti-publication record. I am okay with this, but for those who really want to succeed as a collaborative statistician within academia or be a successful consultant (with happy clients who return often) it can present a serious problem.

While working as a traditional collaborative/consulting statistician, I constantly struggled to balance doing my job as expected within the current paradigm (conditional on assumptions and methods I disagreed with) with staying aligned with my ethical responsibilities as a statistician. I was at times being paid to assist researchers with getting grants and publishing their work. Pushing researchers to adopt approaches beyond those expected within their discipline was pushing them to take risks with their careers. And, in retrospect, I don’t think I always landed on the right side of the line. There are a lot of decisions I’m not proud of — a lot of compromises I made in the spirit of collaboration and team work.

Summing up – a few take home messages

Especially for those just skimming, here are a few bulleted take-homes that made up the bulk of an earlier and very short draft.

  • Statisticians have a responsibility to not include their name on work that they do not agree with, even if they have put a lot of time into the project.
  • Statisticians, particularly those in academia whose careers depend on publications (at least for the time being), need to have a way to get credit for work with researchers who in the end choose to ignore the advice of the statistician. It is bad for science to interpret lack of co-authorship after collaboration as evidence the statistician is not an adequate collaborator or applied statistician, or that they didn’t do enough work to get credit. Beware of unknowingly valuing number of publications (and careers in general) over research integrity.
  • Researchers collaborating with a statistician have a responsibility to respect the statistician’s expertise and the statistician’s commitment to professional ethics and research integrity. It is incredibly disrespectful to simultaneously choose to ignore advice and contributions while expecting the statistician to keep co-authorship. Put yourself in the statistician’s position by imagining some analogous situation you could end up in.
  • Just because a statistician’s suggestions do not align with “the way things are done to get published” in your discipline does not mean the ideas aren’t worth careful consideration. We need to move forward in improving our use of statistical methods by thinking more deeply about how and why we are using them. Justifying the choice of an approach simply because “that’s how we do it” is not good enough — particularly when someone with more expertise than you is recommending something different. Just because something has been done a million times does not mean it’s a good thing to do or should be continued.

Time to stop ignoring the problem

The problem is real, and if you are dealing with it, you are not alone. Just as I was finishing this post, I received a very timely email from an early career statistician asking for advice about a co-authorship dilemma. It is time to start talking openly with other scientists about the issue, and to support those who find themselves in difficult co-authorship situations. As hard as it is to admit, it is an issue of research integrity and ethics.

About Author

MD Higgs

Megan Dailey Higgs is a statistician who loves to think and write about the use of statistical inference, reasoning, and methods in scientific research - among other things. She believes we should spend more time critically thinking about the human practice of "doing science" -- and specifically the past, present, and future roles of Statistics. She has a PhD in Statistics and has worked as a tenured professor, an environmental statistician, director of an academic statistical consulting program, and now works independently on a variety of different types of projects since founding Critical Inference LLC.

6 Comments
  1. MD Higgs

    I received permission to anonymously share a paragraph from the email I received yesterday (referred to in the last paragraph of the post). I think it does a nice job conveying the dilemma as felt by a statistician in real time and I appreciate being able to share it.

    “I’m a co-author on a paper targeting a non-statistics journal. I’ve been involved in the project for a long time as it’s gone through fits and spurts of progress, but I don’t feel my concerns about the statistical validity of the methods have been acknowledged and addressed. I also don’t have the time to invest in directly addressing the issues myself, and I’m not actually completely convinced they even could be adequately addressed. I’d like to step away from the project and ask to be removed as a co-author, but I would need to do it tactfully as there are multiple colleagues I expect to work with again who are also co-authors. How can I tell if a graceful exit is something that I can pull off without hurting feelings too badly, or if I should stick it out and make the best of a flawed research project?”

    • George Savva

      Thanks for another great post. It’s unfortunate that we take such pains to consider the professional dignity of others when asking to be removed from author lists, while our collaborators do not seem bothered about the indignity and unprofessionalism of either (1) doing analysis that a statistician has said is wrong or at least sub-optimal or (2) expecting a statistician to put their name to something they do not approve of.

  2. irvinekathi

    All I have to say is, can I hear an Amen!!!

  3. Ariel Muldoon

    I’ve definitely been where you have, including wondering if my ethics meter is just too sensitive. I hadn’t read that final section of the ethical guidelines, though, which is useful!

  4. Santosh Shevade (@santoshshevade)

    Quite an insightful blog.. I also think this speaks so much about ‘publishable research’ debate…are there any publishers/journals who have drafted guidelines in such areas?

    • Tom 2

      The Ecological Society of America has or at least had some guidelines, including written agreements up front on who does what and authorship and rights of veto vs name removal.
