The Dutch Research Council, NWO, recently changed the way it asks reviewers to assess grant applications. In the past, it took quantitative measures like Journal Impact Factor into consideration. The assessment process is now purely qualitative. It takes into account that the modern university is not a publication factory, and that you cannot evaluate a researcher's ability or the quality of their work merely by the journals in which they publish and the frequency with which they do so. NWO wants to acknowledge and appreciate the modern academic.
Funding is arbitrary
This new policy created a lot of anger, especially among senior researchers. Over 170 of them, including one Nobel Laureate, wrote a letter stating that the new policy would adversely affect the standing of Dutch researchers in the global scientific community. They argue that by ignoring established quantitative measures like Journal Impact Factor, junior researchers will no longer be able to compete. Furthermore, they claim that a "narrative CV" is harder to assess because there are no clear methods to measure skills like "leadership"; that they struggle to fill in the plusses and minuses of an application, which overrule their "gut feeling" about it; and that in the end they just google the researcher to look at their publication list anyway. The whole process, they say, would make it arbitrary who receives funding.
These arguments are weak, to say the least. For starters, who receives funding is already arbitrary. The quality of the proposal and the researcher's abilities matter less than the composition of the committee. Many bids meet the standards for excellent research, which means that committees lack clear grounds to distinguish between them. They fall back on arbitrary factors such as their renowned gut feeling. Whether an applicant receives funding is more or less a lottery – just with slightly better odds.
Narrative CVs or Impact Factors
The complaint about how one should review narrative CVs is also strange coming from senior researchers. Let's ask a slightly different question: how do you evaluate good scholarship? The answer would seem obvious to them: number of publications, journal impact factor, and so on. But that answer takes something for granted which isn't at all obvious: that these are in fact valid metrics for scholarship and not, to some extent, arbitrary – a historical accident. Over the course of the 20th century, researchers came to value publications in certain journals, but it did not have to be that way. We could have focused on public outreach, on impact on technological development, on healthcare, on public approval, or on any number of other social values. There are no natural laws of good scholarship, only social conventions. That's not to say all metrics would make sense, but it does mean that you should always re-assess existing metrics when they seem out of date.
These researchers are correct that assessing a narrative CV is harder than assessing a classic CV, but only because we already have metrics for the latter. Measuring leadership is no more difficult than measuring scholarship: as any second-year undergraduate learns, if you want to measure something, you need to operationalize it. Not every operationalization is equally valid for the concept you're interested in – there can be discussions about what makes a good metric – but there is no more reason to think we cannot measure leadership than there is to complain that we cannot measure scholarship. Again, our metrics are just social conventions: they are good because we agree that they are good.
Noise
My final beef with the objections of these senior researchers is that a series of plusses and minuses does not allow them to evaluate an application according to what they personally think makes it good. But this isn't a bug: it's a feature. By having standardized ways of assessing what is good or bad about an application, we get rid of the noise between the assessments of different reviewers. Different reviewers look at proposals in very different ways, and you do not want the funding decision to depend on who happens to be the reviewer. Multiple opinions help mitigate noise, but a committee of 3-5 members is still a small sample, and thus very noisy. Moreover, the areas where reviewers do agree are biased towards certain types of scholars. Decisions are made first and arguments are fabricated afterwards to justify them: that's simply how human decision-making works if left unchecked. We need to make sure that when a researcher submits a grant application, any committee would come to the same assessment, and that means not relying on the gut feeling of a reviewer.
It is probably no surprise that these objections came mostly from senior researchers. After all, they thrived in the system that is now being redesigned – the metrics that made them successful will no longer be valued. In fact, shortly after they wrote their letter, over a hundred junior researchers wrote their own letter to counter these objections. Senior researchers do not share the lived reality of junior researchers, who buckle under the pressure of modern academia – pressure these senior scholars never had to deal with, at least not in the same way. Relying on 9-month contracts for years after finishing a PhD, moving from city to city, spending hours of one's spare time writing grant applications that are almost always rejected no matter their quality, all for a salary that forces some to live in shared housing well into their 30s: it is insane. If these senior scholars truly care about us junior researchers, they will not cling to an antiquated system just because it's the way things have always been. They are in a position to make it better, and the place to start would be to listen to and work with the scholars they want to be helping.