The current dogma says that the largest part of available research funds must be assigned only to the best scientists. In this way researchers, who are often also state employees, are put in competition with each other for the allocation of resources. Only a small fraction of them—between 5% and 20%, depending on the circumstances—will be able to obtain the research funds needed to fully develop their own scientific projects. This is the idea behind the so-called corporatisation of scientific research, presented as the only cure for parasitism within state organisations, according to Laurent Segalat, a biologist and director of research at the CNRS in France.
But which manager would adopt such a questionable production process? Indeed, there is a fundamental flaw in this funding strategy. More to the point, it is an ideological blunder. While some competition is good for public research, there is clearly a threshold beyond which competition creates more adverse effects than positive ones. An excess of competition stimulates misbehaviour and puts invasive pressure on individuals' choices of research topics. As a result, misconduct in scientific papers is becoming an increasing problem: “Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis,” according to a recent article in the Economist entitled ‘How science goes wrong.’
Another trend is that young scientists are becoming increasingly conservative in their approach to research, investing their time in mainstream ideas. This is driven by peer pressure and the demands of the job market. The problem is thus how to stimulate innovative projects that are extremely risky but offer potentially high returns.
There are three possible strategies for dividing the funding cake. First, divide it between the top 5% to 10% of researchers or projects. Second, divide the funds between all researchers. Or third, divide the funding between a substantial fraction of researchers, between 30% and 50% of them. How can we identify the best strategy? We can reasonably conclude that financing all projects is not the optimal choice, because in every system there are poorly performing individuals.
The question remains whether the decision to finance only the few researchers considered excellent at a given time represents the optimal strategy.
Research funding agencies must choose among the following strategies. Is it more effective to give large grants to a few groups of elite researchers? Or is it better to award small grants to many researchers? Large grants would only be more effective if scientific impact increased faster than linearly with grant size. A quantitative study of this problem, published in PLOS ONE in 2013 by Canadian scientists Jean-Michel Fortin and David Currie of the Ottawa-Carleton Institute of Biology, suggests that strategies targeting diversity, rather than excellence, are likely to be more productive.
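The arithmetic behind this argument can be illustrated with a toy model (the impact functions below are illustrative assumptions for the sketch, not empirical estimates from the study): when impact is a concave function of grant size, i.e. each extra unit of funding yields diminishing returns, splitting a fixed budget into many small grants produces more total impact than concentrating it in a few large ones; only if impact grows superlinearly does concentration pay off.

```python
# Toy model: split a fixed budget equally among n grants and sum the impact.
# The impact functions below are illustrative assumptions, not empirical data.

def total_impact(budget, n_grants, impact):
    """Total impact when the budget is split equally among n_grants grants."""
    grant = budget / n_grants
    return n_grants * impact(grant)

budget = 100.0

concave = lambda g: g ** 0.5   # diminishing returns per extra unit of funding
convex = lambda g: g ** 1.5    # increasing returns, needed to justify big grants

# With diminishing returns, 50 small grants beat 5 large ones...
assert total_impact(budget, 50, concave) > total_impact(budget, 5, concave)
# ...and only with increasing returns does concentrating the funds win.
assert total_impact(budget, 5, convex) > total_impact(budget, 50, convex)
```

In this sketch, the case for many small grants rests entirely on whether returns to funding are diminishing, which is the regime Fortin and Currie's data point towards.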
The problem is not only to finance the researchers recognised today as excellent. What matters is to give a chance to develop to those who will become tomorrow's excellent scientists but are currently merely good-quality researchers. It would be easy to identify the research projects that will lead to important discoveries if it were possible to know the future. Unfortunately this is not the case. So rather than speculating as to what might happen in the future, we should first of all understand what has happened in the past.
Any past funding strategy should be put to the test. The question is how to test whether the right choices were made in previously allocated funds. If we focus on a long enough time span, for example twenty years, and perform a systematic study of the outcomes of funded projects or researchers, it should, in theory, be possible to obtain some answers to this question.
Such a study should be done by every institution that cares about how best to allocate funds. A recent example can help shed some light on why such an approach matters. Consider the two researchers who were awarded the Nobel Prize in Physics in 2010, Andrei Geim and Konstantin Novoselov. The work that brought them rapid and great success was published in 2004. At the time, both of them had an ordinary bibliometric track record in terms of number of articles and citations: a few dozen publications and a few thousand citations.
The number of publications they produced grew by only a factor of two to three between 2004 and 2011. In parallel, the number of citations of their work exploded. Today they boast tens of thousands of citations, clearly a considerable number for physicists. Therefore, today, any committee would recognise their excellence. The more subtle and important question is whether a hypothetical committee would have picked their project in 2004 and placed it in the top 10%.
Rewarding what is today recognised as excellence is trivial. The real problem is to understand whom to reward today, among the large pool of good-quality researchers, and how to pick those who will become excellent tomorrow. This problem is commonly approached by funding only a small number of projects, but that is precisely why such a strategy is not the most effective one.
Yet this is what happens at both the European and national levels across Europe.
Science is a social process. The evaluation of scientists needs to give space to different degrees of quality: the pursuit of excellence is merely the mirage of an ideological and unrealistic dogma.
Co-founder and editor at Return on Academic ReSearch (ROARS), Italy.
Answering the question of whether a hypothetical committee would have picked Konstantin Novoselov's project before 2011 and placed it in the top 10%: a real, non-hypothetical committee selected his project and awarded him an ERC Starting Grant in 2007, based on the sole criterion of scientific excellence. The success rate in that specific ERC call (successful proposals/evaluated proposals) was 3%.
The famous paper by Andrei Geim and Konstantin Novoselov was published in 2004 (http://arxiv.org/abs/cond-mat/0410550), and by 2007 it was indeed quite famous and well cited. The point is whether the committee would have selected his project and awarded him an ERC Starting Grant in 2004. Looking at his citation and publication records in 2004, it is very improbable that he would have been considered among the top 10%.