Denouncing the impact impostor

In 2013, the American Society for Cell Biology and several scientific journals launched the San Francisco Declaration on Research Assessment (DORA), meant to put an end to the ridiculously unscientific practice of using the impact factor of journals to assess individual researchers, research groups or even institutions. According to the original text, this practice creates biases and inaccuracies when appraising scientific research. The impact factor must no longer be used as "a measure of the quality of individual research articles, or in hiring, promotion, or funding decisions".

To date, 12,747 institutions and individuals worldwide have signed DORA. And yet only a handful of the signatory institutions have actually implemented it. Review committees, assessment juries, funding organizations and academic authorities have continued to use, openly or discreetly, the journal impact factor as a decisive criterion when judging the output of scientific research.

Let’s look at data patiently collected by my collaborator Paul Thirion (ULg), whom I thank for this: he listed all 1,944 articles published in Nature in 2012 and 2013 and counted how many times each one was cited in 2014. Only 75 of them (3.8%) provide 25% of the journal’s citations, hence of the journal’s impact factor (IF = 41.4…, I’ll spare you the other digits!), and 280 (14.4%) account for half of the total citations and IF, while 214 (11%) get zero or one citation.
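For readers who would like to redo this kind of calculation on their own citation data, here is a minimal sketch in Python. The citation counts are invented placeholders (Paul’s dataset is not posted yet, see the comments below); the logic simply ranks the articles and reports how many of the top ones are needed to reach a given share of the citations, hence of the IF:

```python
# Minimal sketch with invented citation counts: one entry per article
# published in the journal over the two preceding years, giving the
# number of times it was cited in the reference year.
citations = sorted([120, 95, 60, 41, 12, 7, 3, 1, 1, 0], reverse=True)

# The impact factor is just the mean: total citations divided by the
# number of citable items.
impact_factor = sum(citations) / len(citations)
print(f"IF = {impact_factor:.1f}")

# Walk down the ranked list and report how many top articles it takes
# to reach 25% and then 50% of all citations.
total = sum(citations)
running = 0
for rank, count in enumerate(citations, start=1):
    running += count
    for share in (0.25, 0.50):
        if running - count < share * total <= running:
            print(f"top {rank} articles ({rank / len(citations):.1%}) "
                  f"provide {share:.0%} of the citations")
```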

A graphic representation is even more striking:

[Figure: distribution of 2014 citations across the 1,944 Nature articles published in 2012–2013, showing how heavily it is skewed]

This does not take away the fact that a high impact factor is a legitimate measure of the prestige of a journal. But even if one generally admits (not everybody does) that a scientist’s contribution to science can somehow be measured by the citations of his or her work (although this is not true in all domains of knowledge), using the impact factor of the journals where he or she publishes is like measuring someone’s qualities by the club where he or she is allowed to dine. Stars are for restaurants, not for their customers…

This goes to show that most Nature authors benefit from an IF generated by a happy few (assuming, of course, that citation is a valid assessment indicator).

But if the very convenient assessment by impact factor is to be banned, what is to replace it? Ideally, the solution is a thorough reading of the work by a competent reader, a very unrealistic task nowadays. DORA makes several suggestions, such as bioRxiv. The British HEFCE has analysed the question as well. Altmetric has developed new methods. All in all, a combination of these approaches may provide a useful measurement, but it should be kept in mind that comparisons across disciplines make no sense at all, even between similar fields. A wider reflection is clearly needed to come up with a manageable solution, provided one agrees that evaluation, as we practise it, makes sense. In any case, it cannot be reduced to a single figure, as if such a value could in any way serve as a basis for comparative evaluation.
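To make that last point concrete: the IF is a mean, and a mean is a poor summary of a heavily skewed distribution. A quick sketch (Python, with invented numbers) shows how far the single figure can sit from the typical paper:

```python
import statistics

# Invented, heavily skewed citation counts: a happy few highly cited
# papers and a long tail of barely cited ones, as in the Nature data above.
citations = [500, 300, 150] + [2] * 27 + [0] * 20

print("mean, i.e. what the IF reports:", statistics.mean(citations))    # ~20.1
print("median, i.e. the typical paper:", statistics.median(citations))  # 2.0
```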

6 comments on “Denouncing the impact impostor”

  1. One other strategy is to get journals to publish their citation distributions – to make it repeatedly transparent that the JIF cannot capture the spread of ‘performance’ of the papers in any one journal. The EMBO Journal, PeerJ and the Royal Society (UK) journals have started to adopt this practice and I hope that many more will in 2016. See http://occamstypewriter.org/scurry/2015/12/04/jolly-good-fellows-royal-society-publishes-journal-citation-distributions/

  2. […] by Paul Thirion for Bernard Rentier’s blog, “Denouncing the imposter factor,” Ouvertures immédiates, 31 Dec 2015. The second figure, about Nature Chemistry, is by Stuart Cantrill, “Nature Chemistry’s […]

  3. brembs

    Add to that the data that show that journals such as Nature publish the least reliable science:
    http://www.frontiersin.org/Human_Neuroscience/10.3389/fnhum.2013.00291/full
    and you select for the people publishing the least reliable science if you select your candidates using the IF. Little wonder we have a replicability crisis.

  4. Are the underlying data available somewhere?

    1. With Paul’s agreement, I’ll make them available. I just have to find a convenient way.

  5. […] to evaluate an individual article, even less a researcher or a research team. This shows clearly on a graph I have already published in this blog: among 1,942 articles published in […]
