We highlight this report by the French Académie des Sciences, presented on 17 January 2011 to the Ministry of Higher Education and Research. The document, in English, can be downloaded here
Individual evaluation of researchers is still a subjective process that suffers from numerous potential biases. The Académie has examined the use of quantitative bibliometrics, which are considered to be more objective, and has made a number of recommendations on rigorous rules that should be followed when using bibliometrics to support qualitative evaluations. Such rules should be recognized internationally, at least at the European level. The issue of bibliometric evaluation is a complex one and is still being debated. Strong opinions have arisen for and against its use that depend greatly on the scientific field.
And this is the introduction:
Bibliometrics has played an increasing role in evaluating individual researchers (the focus of this report) as well as research groups and institutions. This can be explained by its ease of use and the overview it provides of a researcher’s career. At the same time, bibliometrics appears not to have always been well used, and it has been seriously misused when applied in isolation.
In its Report of 8 July 2009, the Académie des Sciences emphasised that peers should play a decisive role in the individual evaluation of researchers (see Annex 2). Unfortunately, there have been many cases of improper and poor qualitative evaluation by peer panels due to conflicts of interest, favoritism, local interests, group processes, insufficient expertise of evaluators, and superficial examination of applications. The question thus arose of how to ensure better execution of peer evaluation.
To overcome such shortcomings, the evaluation of the impact of a researcher’s work based on quantitative analysis, which is considered to be more objective, was suggested for certain disciplines as a tool to help qualitative evaluation by peers. Bibliometrics commonly refers to this use.
It should be pointed out that bibliometrics is not necessarily objective and that it suffers from many biases. It is usually reduced to a few numbers and used in an extremely reductive manner in spite of the fact that current databases from which these indices are computed hold an enormous amount of information which, properly taken into account, could significantly help qualitative evaluation.
This report focuses on the use of lists of publications and indices based on the citation of these publications. The report will review the current situation and explore new directions for improvement.
Here, in summary, are the Recommendations:
Recommendation 1: The use of bibliometric indices for evaluating individual researchers is of no value unless a number of prerequisites are met:
– The evaluation should focus on the articles and not the journals.
– Data quality, standardization, significance of deviations and robustness of indices must be validated.
– Bibliometric evaluations should only compare researchers in the same scientific field and over their whole career. It is important to consider bibliometric data against the specific distribution of values in the researcher’s field, and also to take into account the rate of career progression.
– Users of bibliometrics must justify their conclusions; this will force them to develop solid expertise in the area.
It is important to be aware that some researchers might choose to steer their activity so as to get articles accepted in journals with a high impact factor, rather than engaging in original and creative research and persisting with thematic continuity, at least for several years.
Finally, since evaluations are based on peer judgement, the question arises as to whether the evaluators should not themselves be submitted to a bibliometric evaluation.
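To make concrete what reducing a researcher’s record to “a few numbers” means, here is a minimal sketch (in Python, with invented citation counts) of the h-index, one of the most common such indices: a researcher has index h if h of their papers each have at least h citations. The two hypothetical records below illustrate the information a single number discards.

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for two researchers with the same
# h-index but very different records -- exactly the kind of
# detail a single index throws away.
print(h_index([100, 50, 5, 4, 1]))  # 4
print(h_index([6, 5, 5, 4, 2]))     # 4
```

The first record contains two heavily cited articles, the second none, yet both collapse to the same index; this is one reason the report insists on examining the underlying data rather than the number alone.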
Recommendation 2: Bibliometrics should not be reduced to numbers; it must be accompanied by an in-depth consideration of bibliometric and bibliographic data and, if possible, of the articles themselves.
It should be pointed out that some French Fields Medal winners in mathematics and Nobel laureates in chemistry and physics have, surprisingly, very modest bibliometric indices.
– Any bibliometric evaluation should be tightly associated with a close examination of a researcher’s work, in particular to evaluate its originality, an element that cannot be assessed through a bibliometric study.
– The Académie recommends that for all individual evaluations, especially in cases where the panel cannot reach a consensus, a close examination of the bibliometric data of the 5, 10 or 20 most cited articles (or those chosen by the candidate) should be undertaken, along with close scrutiny of the bibliographic comments accompanying these publications. Such a selection, together with the corresponding electronic PDF files provided by the scientist, would facilitate close examination of his/her work.
– A comparison of the citations of a researcher’s article in a given journal to the mean number of citations within the same journal over a given period is envisaged. This would add value to articles that are frequently cited in low-impact journals.
– A comparison of the number of citations of an article to the statistical data of another article published at the same time and in the same field should also be undertaken.
– It would be interesting to know where a given article stands compared to the most cited articles in its field: within the top 0.01%, the top 0.1%, etc. The ISI sub-database Essential Science Indicators (see Additional Resources) greatly facilitates this examination in the major disciplines. Further analysis by sub-discipline may be necessary. In the ISI bibliographic files, it is also possible to check how citations have changed over time and who has cited the article.
– Qualitative (and semi-quantitative) bibliometrics would be useful in certain close examinations where the quality of citations, as well as their quantity, is assessed: knowing which articles (or types of articles) have cited a given article (or person) can not only reveal who has appreciated the work but also be used to assess its interdisciplinarity, longevity, scope and timeliness.
– Concerning a bibliographic analysis, we recommend that the example of the Mathematical Reviews database be encouraged and extended to all other fields.
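The journal-mean and field-percentile comparisons proposed above can be sketched as follows. This is only an illustration with invented numbers; a real analysis would draw on the databases the report mentions, such as Essential Science Indicators.

```python
def journal_normalized(article_cites, journal_cites):
    """Ratio of an article's citations to the mean citation count
    of articles in the same journal over the same period.
    A ratio > 1 flags an article cited above its journal's average,
    giving due credit to strong articles in low-impact journals."""
    mean = sum(journal_cites) / len(journal_cites)
    return article_cites / mean

def field_percentile(article_cites, field_cites):
    """Percentage of same-field, same-period articles that the
    given article outperforms in citations."""
    below = sum(1 for c in field_cites if c < article_cites)
    return 100.0 * below / len(field_cites)

# Invented example: an article with 30 citations in a journal
# whose articles average 10 citations over the period, compared
# against a small sample of same-field articles.
print(journal_normalized(30, [5, 10, 15, 10]))  # 3.0
print(field_percentile(30, [1, 2, 30, 40, 3]))  # 60.0
```

The normalized ratio of 3.0 says the article is cited three times more than its journal’s average, regardless of the journal’s impact factor, which is precisely the effect the report wants such a comparison to capture.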
– Bibliometric indices should not be used for researchers with a career spanning less than 10 years, in order to prevent their pursuing research only in areas with high citation levels, which would impede researchers’ creativity at the start of a career.
– Bibliometrics should also be excluded when recruiting young researchers. At the chargé de recherche CR2 (researcher) or maître de conférences (lecturer) level, a candidate has only a small number of publications. The panel must read, and try to understand with greater care, the works proposed by the candidate.
– In the case of recruitment for or promotion to senior positions, bibliometric indices can be used by the peer panel (see below).
– In the case of promotion to senior research or teaching positions, using indices and bibliometrics can help to establish a distribution of the candidates and to eliminate those whose performance is too weak.
– Recruitment for senior level research or teaching positions is closer to the preceding case than to that of young persons. A preliminary screening through bibliometrics is thus possible when there are too many candidates.
– In cases where the final evaluation does not correspond to the bibliometric indices, an explicit explanation of the reasons for the panel’s decision must be provided.
– Bibliometric evaluation of candidates applying for a research grant or an award (prize, medal, election to an academy, among others) must be treated differently according to the context and the age of the researchers, and greater importance must be given to the originality of the work, which is generally not properly taken into account by bibliometrics.
When an article is signed by several authors, the position of a researcher’s name in the order of authors is of considerable importance as it reflects the personal contribution of the scientist to the work published. In disciplines where it is usual to list numerous authors or in disciplines where authors are listed in alphabetical order or according to other variable and complex rules, it is not possible to easily judge the contribution of any one author.
– Articles to which a given author contributed significantly and articles where the author was only a collaborator should be treated differently.
– The concept of authorship needs to be clarified. We recommend that all journals in all fields use the Vancouver authorship criteria (see Annex 4).
– It may be useful to also get information on the other authors of an article.
Recommendation 5: Bibliometric evaluation should become an object of study in order to improve its value. France must participate in this process.
All the recommendations above need to be further examined. To do so, the Académie recommends the creation of a Steering Committee to examine the use of bibliometrics in individual evaluations, for example within the framework of the Observatoire des Sciences et Techniques (OST), a public body with long experience in bibliometrics. It would be composed of a small group of experts from various disciplines and agencies, whose task would be to study the limitations of indices and their use and to suggest how to improve them. This committee should engage in research that will help refine existing indices and make practical suggestions to be validated at the European level. Its recommendations should be based on a number of tests and studies, such as retrospective tests and the development of criteria to detect the originality, innovation, dissemination and impact of a work.