The case for Open Research: the mis-measurement problem

Let’s face it. The biggest blockage we have to widespread Open Access is not researcher apathy, a lack of interoperable systems, or an unwillingness of publishers to engage (although these do each play some part) – it is that the only thing that counts in academia is publication in a high-impact journal.

This situation is causing multiple problems, from huge author lists on papers and researchers cherry-picking results and retrospectively applying hypotheses, to the reproducibility crisis and a surge in retractions.

This blog was intended to be an exploration of some solutions, prefaced by a short overview of the issues. Rather depressingly, there was so much material that it has had to be split up, with several parts describing the problem(s) before getting to the solutions.

Prepare yourself, this will be a bumpy ride. This first instalment looks at the reward system. The second instalment will consider authorship and credit. The third will look at reproducibility, retractions and retrospective hypotheses. The fourth asks if peer review is working. And the final blog will discuss some options for solving at least part of the problem.

I should note that this is not a comprehensive literature review. Every subheading of this blog series is a topic of considerable research on its own and there are many further examples available to the interested reader. I welcome debate, suggestions and links in the comments section of the blog(s).

Measurement for reward

The Journal Impact Factor

Let’s start with how researchers are measured. For decades academia has lived with the ‘Publish or Perish’ mantra, which has spawned poor publication practices. Today the pressure to publish in a high-impact journal is stronger than ever.

A journal’s Impact Factor (JIF) is the average number of citations received in a given year by the items the journal published in the previous two years. It is calculated by taking the number of citations made in that year to everything the journal published in the previous two years and dividing by the number of ‘citable items’ (research articles and reviews) published in the journal in those two years.
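
To make the arithmetic concrete, here is a minimal sketch of a two-year JIF calculation in Python. The numbers are invented purely for illustration and do not refer to any real journal.

    # Illustrative two-year JIF calculation (made-up numbers, not a real journal)
    citations_in_2015 = 1200        # citations made in 2015 to items published in 2013-2014
    citable_items_2013_2014 = 400   # articles and reviews published in 2013 and 2014

    jif_2015 = citations_in_2015 / citable_items_2013_2014
    print(f"2015 JIF: {jif_2015:.1f}")   # -> 2015 JIF: 3.0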

The JIF is published in the Journal Citation Reports – which is owned by the commercial company Thomson Reuters. The company announced the sale of this part of its business for $3.55 billion today.

This blog will not dig in any depth into the issues with the way the JIF is calculated, although there are some serious ones (see a 2006 paper I co-authored on this topic). Neither will it explore how much the JIF is gamed – from self-citations to journals insisting on a certain number of citations to publications within the same journal. Suffice it to say that each year a number of journals are removed from the index because of this type of behaviour. The record to date was in 2013, a year which saw 66 journals struck from the list; by comparison, only 18 were suppressed in the most recent report.

There have been many, many criticisms of the Journal Impact Factor and its effects on scholarship. But the criticisms put forward a decade ago to the month by PLOS still ring true. One of the issues, PLOS argued, was that because Thomson Reuters does not make public the process for choosing ‘citable’ article types, “science is currently rated by a process that is itself unscientific, subjective, and secretive”.

Indeed, last week a news article in Science and a related news article in Nature put forward exactly the same criticism. The stories referred to a paper, “A simple proposal for the publication of journal citation distributions”, posted on bioRxiv, which described comparative research looking at whether a reanalysis of the data would reproduce Thomson Reuters’ results. It didn’t. The work found the citation distributions were “so skewed that up to 75% of the articles in any given journal had lower citation counts than the journal’s average number”. The authors likened using the JIF to determine the impact of a given article to ‘guesswork’.
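
The skewness point is easy to demonstrate. The following sketch uses a small, entirely invented set of citation counts (it is not the data from the bioRxiv paper) to show how a handful of highly cited articles drags the mean well above what a typical article in a journal actually receives.

    # Invented citation counts for ten hypothetical articles in one journal
    citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 120]

    mean = sum(citations) / len(citations)             # the JIF-style average
    below_mean = sum(1 for c in citations if c < mean)

    print(f"mean citations: {mean:.1f}")               # -> mean citations: 13.6
    print(f"articles below the mean: {below_mean} of {len(citations)}")   # -> 9 of 10

In this toy example nine of the ten articles sit below the journal’s ‘average’, which is exactly the pattern the reanalysis found in real journals.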

Jon Tennant, in a 2015 blog post, stated that “The impact factor is one of the most mis-used metrics in the history of academia” and proposed an Open Letter template for researchers to “send to people in positions of power at different institutions, co-signed by as many academics as possible who believe in fairer and evidence-based assessment”. Tennant in turn references Stephen Curry’s 2012 blog post, which opened with the statement “The impact factor might have started out as a good idea, but its time has come and gone”.

There are many more, but I am sure you get the idea.

This is recognised as such a big problem that in 2012 the San Francisco Declaration on Research Assessment (DORA) was conceived with the intent to ‘put science into the assessment of research’. Over 12,000 individuals and over 700 organisations have signed the declaration to date, supporting the call for a “need to assess research on its own merits rather than on the basis of the journal in which the research is published”.

If nothing else, there is clearly a problem with measuring the worth of something by considering the packaging and not the item itself. But the academy continues to use the JIF and criticisms continue to come thick and fast.

Clearly something is rotten in the state of Denmark.

Ditching the JIF

In his seminal book The Mismeasure of Man, in which he debunks the science behind biological determinism, Stephen Jay Gould criticises “the myth that science itself is an objective enterprise, done properly only when scientists can shuck the constraints of their culture and view the world as it really is”. The same observation applies to any metric we use to value research outputs: none is objective, none gives an accurate view, and every measurement tool causes its own problems.

An example of a non-JIF type of measurement is the increased emphasis on ‘excellence’ by funders and governments (the Research Excellence Framework in the UK and Excellence in Research for Australia being two examples). But ‘excellence rhetoric’ is counterproductive to good research, according to one argument which concludes that ‘excellence’ is a “pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship”.

The insistence on excellence, it can be argued, has spawned problems with reproducibility and fraud – in other words, the same problems that the JIF has caused.

There have been many other suggestions for ways to measure researchers, such as the h-index (which has its own set of issues, and is sketched below) and the Eigenfactor Score – these are only two of a myriad of options. But as the system changes, so does researcher behaviour. A clear example was in Australia, when the funding mechanism moved to a simple count of research papers rather than any assessment of the value of those papers. This resulted in a marked increase in the number of papers being produced and a concurrent decrease in overall quality, as described in ‘Modifying publication practices in response to funding formulas‘.
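
For readers unfamiliar with it, the h-index is the largest number h such that a researcher has h papers each cited at least h times. A minimal sketch, using invented citation counts:

    # h-index: the largest h such that h papers each have at least h citations
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1, 0]))   # -> 3 (three papers with at least 3 citations each)

One well-known issue is visible even in this toy example: the paper with 25 citations contributes no more to the h-index than one with exactly 3, so the metric is insensitive to a researcher’s most influential work.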

Clifford Lynch, the Executive Director of CNI, noted in his welcome talk at the JISC-CNI event held at Wadham College, Oxford, last week that using alternative metrics means we start running into issues of vendor lock-in and data confidentiality.

While alternative metrics might solve the ‘valuing the article rather than the journal’ issue, they bring problems of their own. HEFCE’s 2015 report on the future use of metrics in research assessment noted that some indicators can be misused or ‘gamed’ – with journal impact factors, university rankings and citation counts put forward as three prominent examples. The report recommended that indicators be monitored and updated in response to their potential effects, that the best possible data be used in terms of accuracy and scope, and that data collection and analytical processes be open and transparent to allow verification. It also suggested using a range of indicators.

Financial implications

What does this emphasis on particular publication outlets have to do with Open Access? Well, a great deal, as it happens. It is the big blocker to widespread change: as long as we continue with this emphasis we will not get any real traction with Open Access, because it locks us into an old print paradigm of academia.

Much ink has been spilt over the cost of publication and the added cost of open access (some of it mine), which includes not just the article processing charges but also the burden of administering multiple micropayments.

As I have said on numerous occasions (see here and here), funders paying for hybrid open access is expensive and has not resulted in journals flipping to gold (as a transition to a fully Open Access environment), despite this being a stated aim of the process. It makes sense from a publisher’s perspective not to flip journals – when researchers are under pressure to publish in high-impact journals, and there is a new revenue stream associated with that publishing, why would you kill the proverbial goose?

Indeed, a paper earlier this year argued that “Open Access has the potential to become unsustainable for research communities if high-cost options are allowed to continue to prevail in a widely unregulated scholarly publishing market.”

The problem, it can be argued, is that the infrastructure underpinning open access is ‘path dependent’ – a concept proposed in 1985 which explains how the set of decisions available in the present is limited by decisions made in the past, even though the contextual factors shaping those past decisions no longer apply. Scholarly publishing is path dependent, some authors argue, “because it still heavily depends on a few players that occupy crucial nodes in the scientific information infrastructure. In the past, these players were scientific associations, but now these players are commercial publishing companies”.

As long as the current reward system remains, the crucial nodes will not change and we are stuck.

Conclusion

So that covers some of the problems with the way we measure our researchers, and some of the financial implications of this. The next blog in this series will cover some of the issues with authorship.

Published 11 July 2016
Written by Dr Danny Kingsley
Creative Commons License

8 thoughts on “The case for Open Research: the mis-measurement problem”

  1. Misdiagnosis of the slow passage to the optimal and inevitable

    There is absolutely no contradiction between making papers (Green) OA and publishing them in any journal you like.

    (1) Deposit them in your institutional repository immediately upon acceptance for publication

    and

    (2a) Either make the deposit OA immediately

    or (if you want to comply with a publisher OA embargo)

    (2b) Make the deposit “Closed Access” and make sure your repository has implemented the “Almost-OA” copy-request Button.

    The biggest blockage to OA is indeed not “a lack of interoperable systems.” Nor is it “unwillingness of publishers to engage.” (Publishers are irrelevant.)

    But wherever researchers fail to do (1) and either (2a) or (2b), the biggest blockage to OA is indeed researcher apathy.

    Nothing whatsoever to do with journal choice, journal impact factors, or “mismeasurement.” Red Herrings, all.

    Dixit.

    Stevan Harnad

    ——

    Harnad, S (2015) Optimizing Open Access Policy. The Serials Librarian, 69(2), 133-141

    Sale, A., Couture, M., Rodrigues, E., Carr, L. and Harnad, S. (2014) Open Access Mandates and the ‘Fair Dealing’ Button. In: Dynamic Fair Dealing: Creating Canadian Culture Online (Rosemary J. Coombe & Darren Wershler, Eds.)

    Vincent-Lamarre, P, Boivin, J, Gargouri, Y, Larivière, V & Harnad, S (2016)
    Estimating Open Access Mandate Effectiveness: The MELIBEA Score. Journal of the Association for Information Science and Technology (JASIST) 67 (in press)
