Bernard Rentier, biologist and virologist, rector of the University of Liège from 2005 to 2014, published on his blog the results of an analysis of the 2014 citations to articles that appeared in Nature in 2012 and 2013 (1,944 articles, to be precise). Recall that a journal's IF is obtained by dividing the number of citations received in a given year by the articles that journal published in the two preceding years, by the number of articles it published in those two years. It turns out that 280 articles (14.4% of the total) collect half of the citations, while most articles receive very few. Measuring researchers by the impact factor, therefore, «is like measuring someone’s qualities by the club where he/she is allowed to go dining. Stars are for restaurants, not for their customers». The results confirm what Bradford, Lotka and Zipf had already discovered in the last century; the problem is that institutional and political decision-makers too often forget it.
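The two-year impact factor described above is simple arithmetic. A minimal sketch in Python; the citation total below is a made-up illustrative number, not Nature's actual count (only the 1,944-article denominator comes from the analysis discussed here):

```python
def impact_factor(citations_in_year: int, items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received this year by the items
    a journal published in the two preceding years, divided by the number
    of those items."""
    return citations_in_year / items_prev_two_years

# Hypothetical illustration: if the 1,944 articles from the two preceding
# years had drawn 80,000 citations, the IF would be about 41.2.
print(round(impact_factor(80_000, 1_944), 1))  # → 41.2
```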
In 2013, the American Society for Cell Biology and several scientific journals launched the San Francisco Declaration on Research Assessment (DORA), meant to put an end to the ridiculously unscientific practice of using the impact factor of journals to assess individual researchers, research groups or even institutions. According to the original text, this practice creates biases and inaccuracies when appraising scientific research. The impact factor must no longer be considered as «a measure of the quality of individual research articles, or in hiring, promotion, or funding decisions».
To this day, 12,747 institutions and individuals worldwide have signed the DORA. And yet only a handful of the institutions that have signed it have actually implemented it. Review committees, assessment juries, funding organizations and academic authorities have continued using, openly or discreetly, the journal impact factor as a determining element in judging the output of scientific research.
Let’s look at data collected patiently by my collaborator Paul Thirion (ULg), whom I thank for this: he listed all 1,944 articles published in Nature in 2012 and 2013 and counted how many times each one was cited in 2014. Only 75 of them (3.8%) provide 25% of the journal’s citations, hence of the journal’s impact factor (IF = 41.4…, I’ll spare you the other digits!), and 280 (14.4%) account for half of the total citations and IF, while 214 (11%) get 0 or 1 citation.
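The skew described above can be checked mechanically: sort the per-article citation counts in descending order and walk down until the cumulative total reaches the desired share. A sketch with a made-up, heavily skewed toy distribution (the real per-article counts from Thirion's dataset are not reproduced here):

```python
def articles_for_share(citations: list[int], share: float) -> int:
    """How many of the most-cited articles are needed to reach `share`
    (e.g. 0.25 for 25%) of all citations."""
    counts = sorted(citations, reverse=True)
    target = share * sum(counts)
    running = 0
    for n, c in enumerate(counts, start=1):
        running += c
        if running >= target:
            return n
    return len(counts)

# Toy data: a few heavily cited articles and a long, barely cited tail.
toy = [100, 90, 80, 5, 4, 3, 2, 1, 1, 0]
print(articles_for_share(toy, 0.25))  # → 1 (one article covers 25% of citations)
print(articles_for_share(toy, 0.50))  # → 2 (two articles cover half)
```

With real journal data, the same function reproduces the concentration reported above: a small fraction of articles carries most of the impact factor.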
A graphic representation is even more striking:
This does not take away the fact that a high impact factor is a legitimate measurement of the prestige of a journal. But if one can generally admit (not everybody does) that a scientist’s contribution to science can be somehow measured by the citations of his/her work (although not true in all domains of knowledge), using the impact factor of the journals where he/she publishes is like measuring someone’s qualities by the club where he/she is allowed to go dining. Stars are for restaurants, not for their customers…
This goes to show that most Nature authors do benefit from an IF generated by a happy few (if you admit that citation is a valid assessment indicator, of course).
But if the very convenient assessment by impact factor is to be banned, what should replace it? Ideally, the solution is a thorough reading of the work by a competent reader, a very unrealistic task nowadays. DORA makes several suggestions, such as BiorXiv. The British HEFCE has analysed the question as well. Altmetric has developed new methods. All in all, a combination of these procedures may provide a useful measurement, but it should be kept in mind that comparisons across disciplines make no sense at all, even between similar fields. A wider reflection is clearly needed to come up with a manageable solution, so long as one agrees that evaluation, as we see it, makes sense. In any case, it cannot be reduced to a single figure, as if such a value could by any means serve as a basis for comparative evaluation.