Lifting the lid on peer review

This blog post describes some of the insights that emerged from two discussions with academics at Cambridge University, organised by Cambridge University Press last year. The topic was peer review: one session brought together a group of editors in the Humanities and Social Sciences, the other a group of editors in the Science, Technical, Medical and Engineering areas.

The themes that emerged echoed many of the issues raised in the associated blog post ‘The case for Open Research: does peer review work?’. If anything, these discussions paint a darker picture of the peer review landscape.

Themes included the challenge of finding and retaining reviewers, the heavy reviewing demand placed on some people, the reality that many reviews are done by inexperienced researchers, the observation that peer reviewing can lead to collaboration, and the tension that blinded review can enable terrible behaviour while opening it up may trigger an exodus of reviewers. No real solutions were settled on at these discussions, but the conversation was rich and full of insights.

Very uneven workload

It is generally known that finding and retaining reviewers is a challenge for editors. One of the first discussion points for the group was the experience of being asked to review work. Some people in the room said that they get asked about twice a week, but the volume of requests is so great that they are only able to accept about one in ten. At any given time these researchers are working on at least one review.

Researchers working in different fields are asked by different journals; however, some colleagues are never asked and complain about this. In reality, most people are never asked to undertake reviewing, while people in top research universities are asked all the time.

CUP suggested a shared reviewer database that many editors could consult; however, this idea was met with concern from at least one person: “You don’t want to reveal your good reviewers in case they get stolen”. (Note that some journals publish their list of reviewers.)

When the option of payment and credit for reviewing was raised, the general consensus was that reviewers decline not because they don’t get paid but because they don’t have time.

Who is actually doing the reviewing?

It was freely admitted around the table that peer reviews are mostly done by PhD students and postdocs. One reason there are bad reviews is simply that they are being done by very inexperienced people: many reviewers have not seen many reviews before they review papers themselves. There is no formal training or assessment in peer review, and there is no incentive for editors to do anything about the quality of reviews.

The question that then arises is: how do we get people into the reviewing pool, and how do we give them some training? One solution offered in the STEM discussion was reviewer training. Encouraging scientists to recommend their postdocs as reviewers, working under their supervision, would allow a new generation of reviewers to gain supervised experience.

Another problem with junior researchers reviewing is that people early in their careers may not feel they can speak frankly or submit negative reviews. The problem is not scandal; it is the hierarchy of power.

An observation in the STEM discussion was that the assumption that ‘senior = good’ sometimes does not stand up, as early-career scientists are often excellent reviewers. Senior researchers may be best placed to recognise how a paper fits into the field; however, more junior scientists may be more adept with the technical details of a paper.

Discussions in the STEM group moved to the role of the Editor, where an observation was made that authors must understand that the final decision rests with the Editor, who is guided by the referees.

In STEM there is a practice of sharing reviews among all reviewers of a paper. Several of those present gave examples where reviews are shared mid-stream (e.g. after a ‘revise’ decision), at the end of the process, and even prior to a first decision – which gives reviewers a chance to cross-comment on each other’s reviews.

There was the comment that in STEM, editors must act proactively in cases of conflicting reviews: it is the Editor’s responsibility to focus on the important points and give authors an informed decision and guidance.

What works

The main reason peer review is essential is that it filters out the ‘bad stuff’. It is already very difficult to keep up with the literature; without that filter it would be impossible. When peer review happens, the end result is high quality: it is not just that articles are rejected, but that the work that comes out is better. A STEM editor noted that authors have written in praise of reviewing even when their papers have been rejected, “So it does add quality”.

The thing you value most in a journal is the quality of reviewing and the editorial steer, observed a STEM participant, who said this was noticeable in Biology, “where the editorial guidance is getting better”.

An observation in the Humanities discussion was that many of the models in the sciences don’t work for the Humanities. In History, most journal articles are published by early-career people, so peer review in this instance does an educational job, teaching historians how to write journal articles.

A STEM observation was that peer reviewing sometimes leads to collaboration. One editor noted that in their journal, over the last 10-15 years, there have been quite a number of papers where the reviewer provided such a helpful and detailed review that the authors asked whether the reviewer could be added as a co-author.

What doesn’t work

The discussion about what doesn’t work in peer review opened with the comment that peer review for monographs is “broken irretrievably”. One attendee noted that peer review for edited books has never really happened.

One STEM participant said the thing they liked least about peer review was that, from an author’s perspective, it is pretty random – a matter of picking two or three people. “If you get one or two bad reviews it won’t get published – this is down to luck”. They made the comment that peer review is not really reproducible. Another issue is that, because the process is so closed, there is no incentive for people to improve the quality of their peer reviewers – there are a small number of good reviewers and lots of average ones.

One humanities person noted that reviewers put the work they are reviewing “through an idea about what a journal article should look like”, so while there “used to be all kinds of writing in the 1970s, now they are all similar”. This reduces work to the lowest common denominator: not just a minimal positive impact on the work, but a negative one. Another person agreed on the homogenisation issue but thought this was an editorial problem: “A good editor should be prepared to go out on a limb”.

Long delays over review

For some journals the average time for review is six to seven months. One participant noted: “I review book manuscripts in less time than that. The main problem is it is too slow”.

A postdoc noted that the delay in peer review is a serious problem at that stage of an academic career. It is necessary to have publications on a CV: “It is not good enough to say it is being considered by a journal (for the past year)”.

The cursory nature of many reviews arose a few times. One person asked whether, as an editor, you accept such a review or go to another reviewer and slow the whole process down. Some journals ask for up to six reviews, which drags the whole thing out. Another said the problem meant “you endlessly go through the ABC of the topic”.

Blaming peer review for something else?

One participant raised the question of whether we were blaming peer review for things it is not responsible for. There probably is a problem, but it has more to do with the changing nature of the academic endeavour: there are more academics out there, and everyone is being pressured to publish in top-tier journals. These are issues with the profession.

The group noted academia has too many people competing for too few positions. The ‘cascade’ [of publications being sent to lower-tier journals after rejection] is connected to this – you have a hierarchy of quality.

The conversation moved to the pressure to publish in high-impact journals. One STEM participant noted the problem has become substantially worse than it was 30 years ago. It is to do with the level of expectation placed upon everyone in the STM system: people now feel the need to publish material that, 20-30 years ago, no one would have bothered with – the data sitting at the bottom of the drawer, usually until retirement. Now they are digging it out, so the rejection rate is going up because more rubbish is going in.

The free labour/payment debate

A social anthropologist noted that a major problem with peer review is that we are asking people to do a whole load of free labour: “It is not just credit but we should find a way to pay people for what they do”. Some journals have a large editorial board who do a lot of the reviewing; one person noted this was not completely free labour, as board members get a subscription to the journal.

The idea of paying for peer review is an economic question: does paying for things alter the relationship between the person who is paying and the person doing the work? The participants were concerned that paying people makes authors into consumers – does it change the system by introducing an economic transaction?

There was some debate over the payment question. One researcher said they would be ‘happy to receive’ payment, but noted that when offered payment for reviewing manuscripts they always take books instead: there is ‘something exciting about which book I should go for’. Another suggested that it did not necessarily have to be a cash payment but some sort of quid pro quo: “it would be nice if there was an offer of that”.

There was some resistance to the idea of offering cash payment, with the suggestion that for people on a single salary it could be such a strong incentive to review that they would burn out and produce poor reviews. However, some considered payment for timely reviews a great idea.

A STEM participant noted that reviewers usually review out of a sense of moral obligation, as part of the academic world, and that it is difficult to feel morally obliged to do anything for which you are offered money, so care must be exercised when thinking of bringing in payment or reward.

Portable reviews?

The idea of portable reviews was discussed by both groups. In principle it sounds good: a lot of work is being done twice, and second reviews could happen much more quickly if the original reviews travelled with the paper. In addition, with a small pool of reviewers it is quite likely that a paper rejected after review by one journal will be sent to the same reviewer when resubmitted to another journal.

However, the humanities group noted there was “danger in importing the model from the hard sciences into humanities”. The STEM group noted this would require a reprogramming of the culture of reviewing.

There would be some issues with implementation – for example, a journal would have to admit it is a second-tier journal because it takes the ‘slops’, given that top journals accept only 4% of papers. And there are some potential problems with re-using reviews. One participant said: “I write different kinds of reviews for the top journals compared to the lower ones – so the reviews are not transferable – they could disadvantage the authors.”

There are some examples of this type of thing happening now. Antarctic Science asks authors to provide details of the journals a paper was previously submitted to, along with the reviews. But the practice is not universally accepted: the STEM group gave examples where authors chose to send prior reviews when submitting to a new journal, but the publishers would not accept them because they had not commissioned them.

Overall, the STEM group broadly agreed that sharing reviews in this way would save a significant amount of time and wasted effort, although the logistics of sharing reviews, especially between publishers, are obviously very difficult. They also noted that such procedures would presumably increase the sample of reviews and opinions used when making a decision on a paper.

Open peer review

Opinions in the discussion around open peer review ranged widely. The arguments against included: “Open peer review sounds like a recipe for academia becoming suffused with hostility even more than it already is”. And: “The publication of reviews idea is absolutely terrible; you need the person to feel they can be open.” There was also some concern that people could be ingratiating if they were reviewing a researcher ‘higher up’.

A STEM participant noted that some reviewers had said that ‘if you publish all of the reviews at the end of the year we won’t review any more’. They noted that when you have a small pool of reviewers this is a problem. The reviewers’ concerns include the fear that they won’t get another job.

In one case a participant said they had been involved with a journal doing the “absolute opposite”: triple-blind review – addressing issues of implicit bias, particularly gender bias – in which even the editors don’t know who the author is. The conversation then noted that even in double-blind review it is often possible to tell who the author is, and most people don’t know how to properly de-identify their document anyway.

However, on the positive side, there was support for a dialogue between the author and the reviewers as a three-way discussion, although there is a problem in that it can be very prolonged. A STEM participant noted that sometimes the reviewer debate surrounding an article is more interesting or useful than the original paper itself.

One STEM participant observed they had been involved in open review and “was sceptical at first”. However, they noted it makes people behave better: “In anonymous reviews I have seen really shocking things said”.

Conclusion

This was an interesting exercise, providing an opportunity for editors to talk amongst themselves and with a publisher about issues relating to peer review. It will be instructive to see what happens next.

Published 19 July 2016
Written by Dr Danny Kingsley
