One year ago, the Royal Society concluded its research culture programme with the ‘Research culture: changing expectations’ conference. A major take-away for us, from delivering the programme and conference, has been that many issues in research environments revolve around the way research is evaluated and how success is defined. When we set up the programme, many of the topics under the research culture umbrella – inclusion & diversity, ethics & integrity, openness & collaboration – had been discussed many times before, but not all at the same time. Looking at all these areas holistically was what made the programme innovative, and what was needed to appreciate that tackling narrow definitions of success might well be the Holy Grail for improving research culture.
Since the conclusion of the programme, several studies, articles and developments have confirmed the central role of reward and recognition, evaluation of researchers, and definitions of success, for improving research culture:
Inclusion & diversity – While the Royal Society’s research culture programme was concluding, the Royal Society of Chemistry published its Breaking the Barriers report, giving new insights into the barriers facing women in the chemical sciences. The barriers identified included the ‘publish or perish’ model as the primary driver for progression; narrow and outdated definitions of excellence in science; and funding and promotion decisions driven by research output, with little recognition of efforts and successes in teaching, pastoral care and academic citizenship activities, in which women are often more involved.
In September 2019, the League of European Research Universities published its report ‘Equality, diversity and inclusion at universities: the power of a systemic approach’, which draws on the latest research into the challenges and opportunities facing universities that wish to enhance equality, diversity and inclusion. It highlights, among other areas, the importance of eliminating bias and blind spots in research and university assessment. The report notes that biases have taken root in many areas of research assessment, but that emerging evidence suggests concrete measures and targeted interventions can counter their effects.
Ethics & integrity – In March 2019, ETH Zurich dismissed a professor over accusations of misconduct towards PhD students and announced that, when appointing professors, its main selection criteria would include excellence not only in research and teaching but also in leadership. This implies that, prior to the incident, little attention was given to evaluating leadership and managerial skills when employing staff with considerable people-management responsibilities. The fixation on research excellence as the primary criterion for hiring and promotion, not only at ETH but at most research organisations, has blinded evaluators to skill deficits in other areas, often to the detriment of junior researchers.
As part of its research culture programme, the Royal Society developed a research integrity toolkit. The toolkit shares case studies of ways individuals and institutions could bring to life the codes, concordats and pledges to improve research integrity that they have signed up to. One of these case studies suggests assessing researchers differently in order to embed research integrity into institutional culture. It refers to research on alternatives to traditional criteria for researcher evaluation, including researchers’ mentoring capacity and their contributions to improving research environments.
Openness & collaboration – Several research funders and organisations in the Netherlands are spearheading a new approach to recognising and rewarding academics. One of the three themes this effort focuses on is ‘team science’. While the reward system in academia has evolved to encourage hypercompetitive behaviour and to promote superstars rather than team players, there is wide recognition that open and collaborative science is increasingly important and will be crucial to addressing global challenges. The Academy of Medical Sciences came to a similar conclusion a few years ago. As part of its report on ‘Team science’, it explored how reward and recognition are allocated to individual biomedical researchers participating in ‘team science’, and what the barriers are to sufficient recognition.
In September 2018, a group of funders launched Plan S, a commitment to ensure that access to publications generated through the grants they allocate is fully and immediately open. After public consultation on the plan, the group realised that researchers’ strong drive, motivated by a misdirected reward system, to report their outcomes in journals behind paywalls is slowing the transition to open access. In May 2019 they added to the plan a commitment to fundamentally revise the incentive system of science, using the Declaration on Research Assessment (DORA) as a starting point. The European University Association more recently published the results of a survey showing that universities consider contributions to open science the least important aspect of academic work in researcher assessment. The chairs of the expert group that supported the study wrote that open science will never be achieved unless it is accompanied by a change in the way researchers are evaluated. Changing the way we measure the quality of people’s work also lies at the heart of Octopus, a plan to revolutionise publishing developed by Alex Freeman, who won the Pitch competition at the Royal Society’s conference.
Taken together, these studies and reports show that, if done well, broadening the definitions of success in academia and adapting evaluation processes accordingly have immense potential to holistically address a myriad of issues that the academic research sector has struggled to tackle in isolation. For any player in the landscape who is unsure where to start in addressing research culture issues, the criteria they use to evaluate, hire, promote, reward and recognise researchers are a good starting point.
To mark one year on from the research culture conference, the Royal Society has (finally) published the last output we developed as part of the programme, the Résumé for Researchers. The Résumé was created to support the evaluation of individuals’ varied contributions to research, and we hope many will use it as a tool to start addressing narrow definitions of success and to broaden which contributions count.
By Karen Stroobants & Frances Downey
First featured by MetisTalk in October 2019