[In French here]
Open Science has become, in recent years, a genuine movement of history. Few voices rise to challenge its generous principles of communication, cooperation and unlimited sharing. Nevertheless, this fine ideal still faces concrete difficulties that can be enumerated, all of which stem from the profound changes in mentalities it demands. As in all revolutions, even the most peaceful ones that overturn basic principles, the burden falls on those who first dare to change things, at the risk of jeopardizing their own future and that of those around them.
This difficulty stems from the gap between establishing new principles that break with tradition and having them appreciated positively by everyone, especially by those who hold the power to reward and to sanction.
Before embarking on a new path, however morally and ethically appealing, people need to know whether or not it will be understood, encouraged and, above all, acknowledged.
That is why Open Science, in all the variety of its components and regardless of the general enthusiasm it arouses, will never stand a chance of being implemented unless the evaluation of researchers stops being based on productivity criteria modeled on those of the industrial world. Moreover, a mere statement of principle will not suffice: the new approach must become a reality, verifiable and proven, for as long as it is not, researchers will not be fully confident in the principles by which they will be judged.
Research is pure creativity, and it cannot be measured by productivist methods.
Raw numbers of publications, impact factors and derivatives such as the h-index – all powerful drivers of a damaging overproduction, and therefore of a drop in average quality, as well as of rampant selfishness – must be abandoned, even if this makes evaluation far more laborious, complex and time-consuming: a fair assessment of research careers is well worth the effort.
A consensus must be reached among a very large number of assessing bodies – in universities, but also in funding organisations and in any committee with evaluation responsibilities throughout the world. I believe that I can, immodestly, offer some advice that is indispensable for the effective implementation of this new approach. I would summarise it as follows:
1. Always use a multiple-criteria model such as the OS-CAM (Open Science Career Assessment Matrix) included in the Open Science Toolbox of the European Commission. Make sure to adapt it to the specificities of the research field (as explained here, chapter 6).
2. Rank the criteria in order of importance according to your specific objectives (expected skills, merits, achievements) and make sure to favour Open Science objectives.
3. Never use indirect and/or poorly relevant indicators such as the raw number of publications, journal impact factors or their derivatives. As a rule, never use numbers or metrics.
4. Ask the evaluee to select at most one publication per year – the one they consider their best.
5. Make sure the evaluee fills out their own form first. Verify that all evaluation notes are substantiated. If they are not, investigate with the evaluee's close professional circle.