
The case for Open Research: does peer review work?

This is the fourth in a series of blog posts on the Case for Open Research, this time looking at issues with peer review. The previous three have looked at the mis-measurement problem, the authorship problem and the accuracy of the scientific record. This blog follows on from the last and asks: if peer review is working, why are we facing issues like increased retractions and the inability to reproduce a considerable proportion of the literature? (Spoiler alert – peer review only works sometimes.)

Again, there is an entire corpus of research behind peer review; this blog post merely scratches the surface. As a small indicator, a Peer Review Congress has been held every four years for the past thirty years (see here for an overview). Readers might also be interested in some work I did on this, published as The peer review paradox – An Australian case study.

There is a second, related post published with this one today. Last year Cambridge University Press invited a group of researchers to discuss the topic of peer review – the write-up is here.

An explainer

What is peer review? Generally, it is the process by which research submitted for publication is scrutinised, before publication, by colleagues with expertise in the same or a similar field. Peer review is described as having several purposes:

  • Checking the work for ‘soundness’
  • Checking the work for originality and significance
  • Determining whether the work ‘fits’ the journal
  • Improving the paper

Last year, during Peer Review Week, the Royal Society hosted a debate on whether peer review was fit for purpose. The debate found that in principle peer review is seen as a good thing, but that its implementation is sometimes concerning. A major concern was the lack of evidence for the effectiveness of the various forms of peer review.

Robert Merton, in his seminal 1942 work The Normative Structure of Science, described four norms of science*. ‘Organised scepticism’ is the norm that scientific claims should be exposed to critical scrutiny before being accepted. How this has manifested has changed over the years. Refereeing in its current form, as an activity that symbolises objective judgement of research, is a relatively new phenomenon – something that has only taken hold since the 1960s. Indeed, Nature was still publishing some unrefereed articles until 1973.

(*The other three norms are ‘Universalism’ – that anyone can participate, ‘Communism’ – that there is common ownership of research findings, and ‘Disinterestedness’ – that research is done for the common good, not private benefit. These norms form an interesting framework through which to look at the Open Access debate, but that is another discussion.)

Crediting hidden work

The authorship blog in this series looked at credit for contribution to a research project, but the academic community contributes to the scholarly ecosystem in many other ways. One of the criticisms of peer review is that it is ‘hidden’ work that researchers do. Most peer review is ‘double blind’, where the reviewer does not know the name of the author and the author does not know who is reviewing the work. This makes it very difficult to quantify who is doing this work. Peer review and journal editing represent a huge tranche of unpaid work that academics contribute to research.

One of the issues with peer review is the sheer volume of articles being submitted for publication each year. A 2008 study, ‘Activities, costs and funding flows in the scholarly communications system‘, estimated the global unpaid non-cash cost of peer review at £1.9 billion annually.

There have been calls to recognise peer review in some way as part of the academic workflow. In January 2015 a group of over 40 Australian Wiley editors sent an open letter, Recognition for peer review and editing in Australia – and beyond?, to their universities, funders, and other research institutions and organisations in Australia, calling for a way to reward the work. In September that year Mark Robertson, publishing director for Wiley Research Asia-Pacific, said “there was a bit of a crisis” with peer reviewing, with new approaches needed to give peer reviewers appropriate recognition and to encourage institutions to allow staff to put time aside to review.

There are some attempts to address this problem. A service called Publons gives researchers a way to ‘register’ the peer review they are undertaking. There have also been calls for an ‘R index’ which would give citable recognition to reviewers. The idea is to improve the system by encouraging both more participation and higher quality, constructive input, without the need for a loss of anonymity.

Peer review fails

The secret nature of peer review means it is also potentially open to manipulation, and one example of the resulting problematic practice is peer review fraud. A recurrent theme throughout discussions on peer review at this year’s Researcher 2 Reader conference (see the blog summary here) was that finding and retaining peer reviewers is a challenge that is getting worse. As obtaining willing peer reviewers becomes more difficult, it is not uncommon for a journal to ask the author to nominate possible reviewers. However, this can lead to peer review ‘fraud’, where the nominated reviewer is not who they are claimed to be, which means articles make their way into the literature without genuine review.

In August 2015 Springer was forced to retract 64 articles from 10 journals, ‘after editorial checks spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports’.  They concluded the peer review process had been ‘compromised’.

In November 2014 BioMed Central uncovered a scam that forced it to retract close to 50 papers because of fake peer reviews. This prompted BioMed Central to publish the blog ‘Who reviews the reviewers?’ and Nature to write a story on Publishing: the peer review scam.

In May 2015 Science retracted a paper because the supporting data was entirely fabricated. The paper got through peer review because it had a big-name researcher on it. There is a lengthy (but worthwhile) discussion of the scandal here. The final clue came from getting hold of a closed dataset that ‘wasn’t a publicly accessible dataset, but Kalla had figured out a way to download a copy’. This is why we need open data, by the way…

But is peer review itself the problem here? Or is this all simply the result of the pressure on the research community to publish in high impact journals for the sake of their careers?

Conclusion

So at the end of all of this, is peer review ‘broken’? Yes, according to a study of 270 scientists worldwide published last week. But a considerably larger study published last year by Taylor and Francis showed an enthusiasm for peer review. The white paper, Peer review in 2015: a global view, gathered “opinions from those who author research articles, those who review them, and the journal editors who oversee the process” and found that researchers value the peer review process. Most respondents agreed that peer review greatly helps scholarly communication by testing the academic rigour of outputs. The majority also reported that they felt the peer review process had improved the quality of their own most recent published article.

Peer review is the ‘least worst’ process we have for ensuring that work is sound. Generally the research community requires some sort of review of research, but there are plenty of examples showing that our current peer review process is not delivering the consistent verification it should. The system is relatively new and it is perhaps time to look at shifting the nature of peer review once more. One option is to open up peer review, and this can take many forms: identifying reviewers, publishing reviews with a DOI so they can be cited, publishing the original submitted article together with all the reviews and the final version, and allowing previous reviews to be attached to a resubmitted article are all possibilities.

Adopting one or all of these practices benefits reviewers because it exposes the hidden work involved in reviewing. It can also reduce the burden on reviewers by minimising the number of times a paper is re-reviewed (remember that the rejection rate of some journals is up to 95%, meaning papers can be cascaded and re-reviewed multiple times).
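
To give a sense of scale, here is a rough back-of-the-envelope sketch of that cascading burden. The rejection rates and reviewer numbers below are assumptions chosen for illustration, not figures from this post; the point is simply how quickly fresh reviewer reports accumulate when every resubmission is reviewed from scratch.

```python
# Hypothetical illustration (numbers are assumptions, not from the post):
# expected number of fresh reviewer reports consumed by one manuscript as it
# cascades down a chain of journals after each rejection.
rejection_rates = [0.95, 0.80, 0.50, 0.20]  # assumed chain, most to least selective
reviews_per_submission = 2                   # assumed reports commissioned per submission

expected_fresh_reviews = 0.0
probability_still_circulating = 1.0
for rate in rejection_rates:
    expected_fresh_reviews += probability_still_circulating * reviews_per_submission
    probability_still_circulating *= rate  # the manuscript cascades on only if rejected

print(f"Expected fresh reports per manuscript (cascade): {expected_fresh_reviews:.1f}")
print(f"If reviews travelled with the paper, roughly {reviews_per_submission} reports might suffice.")
```

Under these assumed numbers a single manuscript consumes around six fresh reports before acceptance, which is the waste that portable or open reviews aim to reduce.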

This is the last of the ‘issues’ blogs in the case for Open Research series. The series will turn its attention to some of the solutions now available.

Published 19 July 2016
Written by Dr Danny Kingsley
Creative Commons License

Lifting the lid on peer review

This blog describes some of the insights that emerged from two sets of discussions with academics at Cambridge University organised by Cambridge University Press last year. The topic was peer review: one session was with a group of editors in the Humanities and Social Sciences, the other with a group of editors in the Science, Technical, Medical and Engineering areas.

The themes that emerged echoed many of the issues that were raised in the associated blog ‘The case for Open Research: does peer review work?‘. If anything, the discussion paints a darker picture of the peer review landscape.

Themes included the challenge of finding and retaining reviewers, the heavy reviewing demand placed on some people, the reality that many reviews are done by inexperienced researchers, the observation that peer reviewing can lead to collaboration, and the tension that blinding reviews can lead to terrible behaviour while opening them up may lead to an exodus of reviewers. There were no real solutions decided at these discussions, but the conversation was rich and full of insights.

Very uneven workload

It is generally known that finding and retaining reviewers is a challenge for editors. One of the first discussion points for the group was the issue of being asked to review work. Some people in the room said that they get asked about twice a week, but the requests are so numerous that they are only able to take on about one in ten of them. At any given time these researchers can be working on at least one review.

Researchers working in different fields get asked by different journals; however, some colleagues never get asked and complain about this. In reality, most people are never asked to undertake reviewing, but people in top research universities are asked all the time.

CUP suggested that there could be a shared database of reviewers that lots of editors look at; however, this idea was met with concern from at least one person: “You don’t want to reveal your good reviewers in case they get stolen”. (Note that some journals publish the list of their reviewers.)

When the option of payment and credit for reviewing was raised, the general consensus was that reviewers decline not because they don’t get paid but because they don’t have time.

Who is actually doing the reviewing?

It was freely admitted around the table that peer reviews are mostly done by PhD students and postdocs. One of the reasons there are bad reviews is simply that they are being done by very inexperienced people. Many reviewers have not seen many reviews before they review papers themselves. There is no formal training or assessment in peer review, and there is no incentive for editors to do something about the quality of reviews.

The question that then arises is: how do we get people into the reviewing pool and how do we give them some training? One solution offered in the STEM discussion was reviewer training. Encouraging scientists to recommend their postdocs as reviewers under their supervision would allow a new generation of reviewers to gain supervised experience.

Another problem with junior researchers reviewing is that people early in their careers may not feel they can speak frankly or submit negative reviews. The problem is not scandal; it is the hierarchy of power.

An observation in the STEM discussion was that the assumption that ‘senior = good’ sometimes does not stand up, as early-career scientists are often excellent reviewers. Senior researchers may best recognise how a paper fits into the field, but more junior scientists may be more adept with the technical details of a paper.

Discussions in the STEM group moved to the role of the Editor, where an observation was made that authors must understand that the final decision rests with the Editor, who is guided by the referees.

In STEM there is a practice of sharing reviews among all reviewers of a paper. Several of those present gave examples where reviews are shared mid-stream (e.g. after a ‘revise’ decision), at the end of the process, and even prior to a first decision – which gives reviewers a chance to cross-comment on each other’s reviews.

There was the comment that in STEM, editors must act proactively in cases of conflicting reviews; it is the Editor’s responsibility to focus on the important points and give an informed decision and guidance to authors.

What works

The main reason peer review is essential is that you have to filter out the ‘bad stuff’. It is already very difficult to keep up with the literature; without that filter it would be impossible. When peer review happens, the end result is high quality. It is not just that articles are being rejected; the work that comes out is better. A STEM editor noted that authors have written in praise of reviewing even when their papers have been rejected, “So it does add quality”.

The thing you value most in a journal is the quality of reviewing and the editorial steer, observed a STEM participant. They said this was noticeable in Biology “where the editorial guidance is getting better”.

An observation in the Humanities discussion was that many of the models in the sciences don’t work for the Humanities. In History, most journal articles are published by early-career people, so peer review in this instance does an educational job, teaching historians how to write journal articles.

A STEM observation was that peer reviewing sometimes leads to collaboration. One editor noted that in their journal, over the last 10-15 years, there have been quite a number of papers where the reviewer has provided such a helpful and detailed review that the authors have asked whether the reviewer could be added as an author of the paper.

What doesn’t work

The discussion of what doesn’t work in peer review ranged widely, starting with the comment that peer review for monographs is ‘broken irretrievably’. One attendee noted that peer review for edited books has never really happened.

One STEM participant said the thing they liked least about peer review was that, from an author’s perspective, it is pretty random – the outcome rests on the two or three people who happen to be picked. “If you get one or two bad reviews it won’t get published – this is up to luck”. They made the comment that peer review is not really reproducible. Another issue is that, because the process is so closed, there is no incentive for people to improve the quality of their peer reviewers – there are a small number of good reviewers and lots of average ones.

One humanities person noted that reviewers put the work they are reviewing “through an idea about what a journal article should look like”, so while there “used to be all kinds of writing in the 1970s, now they are all similar”. This reduces work to the lowest common denominator: not just a minimal positive impact on the work but a negative impact. Another person agreed on the homogenisation issue but thought this was an editorial problem: “A good editor should be prepared to go out on a limb”.

Long delays over review

For some journals the average time for review is 6-7 months. One participant noted “I review book manuscripts shorter than that. The main problem is it is too slow”.

A postdoc noted that the delay in peer review is a serious problem at that stage of an academic career. It is necessary to have publications on a CV: “It is not good enough to say it is being considered by a journal (for the past year)”.

The cursory nature of many reviews arose a few times. One person asked whether, as an editor, you accept such a review or go to another reviewer and slow the whole process down. Some journals ask for up to six reviews, which drags the whole thing out. Another said the problem meant ‘you endlessly go through the ABC of the topic’.

Blaming peer review for something else?

One participant raised the question of whether we are blaming peer review for things it is not responsible for. There probably is a problem, but it has more to do with the changing nature of the academic endeavour: there are more academics out there and everyone is being pressured to publish in top-tier journals. These are issues in the profession.

The group noted academia has too many people chasing too few positions. The ‘cascade’ [of publications being sent to lower-tier journals after rejection] is connected to this – you have a hierarchy of quality.

The conversation moved to the pressure to publish in high-impact journals. One STEM participant noted the problem has become substantially worse than it was 30 years ago. It is to do with the amount of expectation placed upon everyone in the STM system: people now need to publish material that 20-30 years ago no-one would have bothered with – the data that would once have sat at the bottom of a drawer until retirement. Now they are digging it out, so the rejection rate is going up because more rubbish is going in.

The free labour/payment debate

A social anthropologist noted that a major problem with peer review is that we are asking people to do a whole load of free labour: “It is not just credit but we should find a way to pay people for what they do”. Some journals have a large editorial board who do a lot of the reviewing. One person noted this was not completely free labour, as board members get a subscription to the journal.

The idea of paying for peer review is an economic question: does paying for things alter the relationship between the person who is paying and the person doing the work? The participants in this discussion were concerned that paying people turns authors into consumers – does it change the system by introducing an economic transaction?

There was some debate over the payment question. One researcher said they would be ‘happy to receive’ payment, but noted that when they are offered payment for reviewing manuscripts they always take books: there is ‘something exciting about which book I should go for’. Others suggested that it did not necessarily have to be a cash payment but some sort of quid pro quo, “it would be nice if there was an offer of that”.

There was some resistance to the idea of offering cash payment, with the suggestion that for people on a single salary this would be such a strong incentive to review that they could become burnt out and produce poor reviews. However, payment for timely reviews was considered a great idea by some.

A STEM participant noted that reviewers usually do so out of a sense of moral obligation, as a part of the academic world, and that it is difficult to feel morally obliged to do anything for which you are offered money, thus care must be exercised when thinking of bringing in payment or reward.

Portable reviews?

The idea of portable reviews was discussed by both groups. In principle it sounds good because a lot of work is being done twice; second reviews could happen much more quickly if the original reviews were attached. In addition, with a small pool of reviewers it is possible and even likely that a paper rejected after review by one journal will be sent to the same reviewer when re-submitted to another journal.

However the humanities group noted there was “danger in importing the model from the hard sciences into humanities”. The STEM group noted this would require a re-programming of the culture of reviewing.

There would be some issues with implementation – for example, a journal has to admit it is a second-tier journal because it takes the ‘slops’, given that top journals only take 4% of the papers submitted. And there are some potential problems with re-using reviews. One participant said “I write different kinds of reviews for the top journals compared to the lower ones – so the reviews are not transferable – they could disadvantage the authors.”

There are some examples of this type of thing happening now. Antarctic Science asks authors to provide details of the journals a paper was previously submitted to, along with the reviews. But the practice is not universally accepted: the STEM group gave examples of authors deciding to send prior reviews when submitting to a new journal, only for the publishers to refuse them because they did not commission them.

Overall the STEM group broadly agreed that sharing reviews in this way would save a significant amount of time and work, although the logistics of sharing reviews, especially between publishers, are obviously very difficult. They also noted that such procedures would greatly reduce wasted effort, and presumably also increase the sample of reviews and opinions used when making a decision on a paper.

Open peer review

The opinions in the discussion around open peer review ranged widely. The arguments against included: “Open peer review sounds like recipe for academia becoming diffused with hostility even more than already”. And: “The publication of reviews idea is absolutely terrible, you need the person to feel they can be open.” There was also some concern that people could be ingratiating if they were reviewing a researcher ‘higher up’.

A STEM participant noted that some reviewers had said that ‘if you publish all of the reviews at the end of the year we won’t review any more’. They noted that when you have a small pool of reviewers that is a problem. The reviewers’ concerns include the fear that they won’t get another job.

In one case a participant said they had been involved with a journal that was doing the “absolute opposite” with triple-blind review – dealing with issues of implicit bias, particularly gender bias, by ensuring the editors don’t know who the author is. The conversation then noted that even in double-blind review it is possible to tell who the reviewer is. Most people also don’t know how to properly de-identify a document.

However, on the positive side, there was support for a dialogue between the author and the reviewer as part of a three-way discussion, although this can become very prolonged. A STEM participant noted that sometimes the reviewer debate surrounding an article is more interesting or useful than the original paper itself.

One STEM participant observed they had been involved in open review and “was sceptical at first”. However they noted it makes people behave better. “In anonymous reviews I have seen really shocking things said“.

Conclusion

This was an interesting exercise – providing an opportunity for editors to talk amongst themselves and with a publisher about issues relating to peer review. It will be instructive to see what happens.

Published 19 July 2016
Written by Dr Danny Kingsley
Creative Commons License

The case for Open Research: reproducibility, retractions & retrospective hypotheses

This is the third instalment of ‘The case for Open Research’ series of blogs exploring the problems with Scholarly Communication caused by having a single value point in research – publication in a high impact journal. The first post explored the mis-measurement of researchers and the second looked at issues with authorship.

This blog will explore the accuracy of the research record, including the ability (or otherwise) to reproduce research that has been published, what happens if research is retracted, and a concerning trend towards altering hypotheses in light of the data that is produced.

Science is thought to progress by building knowledge through questioning, testing and checking work. The idea of ‘standing on the shoulders of giants’ summarises this – we discover truth by building on previous discoveries. But scientists are very rarely rewarded for being right; they are rewarded for publishing in certain journals and for getting grants. This can result in distortion of the science.

How does this manifest? The Nine Circles of Scientific Hell describes questionable research practices that occur, ranging from Overselling, Post-Hoc Storytelling, p-value Fishing and Creative Use of Outliers to Non- or Partial Publication of Data. We will explore some of these below. (Note this article appears in a special issue of Perspectives on Psychological Science on Replicability in Psychological Science, which contains many other interesting articles.)

Much as we like to think of science as an objective activity, it is not. Scientists are supposed to be impartial observers, but in reality they need to get grants and publish papers to get promoted to more ‘glamorous institutions’. This was the observation of Professor Marcus Munafo in his presentation ‘Scientific Ecosystems and Research Reproducibility’ at the Research Libraries UK conference held earlier this year (the link will take you to videos of the presentations). Munafo observed that scientists are rarely rewarded for being right, so the scientific record is being distorted by the scientific ecosystem.

Munafo, a biological psychologist at Bristol University, noted that research, particularly in the biomedical sciences, ‘might not be as robust as we might have hoped‘.

The reproducibility crisis

A recent survey of over 1500 scientists by Nature tried to answer the question “Is there a reproducibility crisis?” The answer is yes, but whether that matters appears to be debatable: “Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.”

There are certainly plenty of examples of the inability to reproduce findings. Pharmaceutical research can be particularly fraught: one study of potential drug targets found that in almost two-thirds of the projects examined there were inconsistencies between published data and the data resulting from attempts to reproduce the findings.

There are implications for medical research as well. A study published last month looked at functional MRI (fMRI), noting that when null data are analysed under different experimental designs the false-positive rate should in theory be 5% (corresponding to a p-value of less than 0.05, which is conventionally described as statistically significant). However the authors found that “the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.”
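
As a point of reference, the sketch below (a hypothetical illustration using a simple t-test, not the fMRI cluster analysis itself) simulates many ‘null’ experiments in which no true effect exists. A well-calibrated test should declare significance in roughly 5% of them, which is the nominal rate the 70% figure should be compared against.

```python
# Minimal sketch of what a 'nominal 5% false-positive rate' means:
# run many null experiments (no real effect) and count how often p < 0.05.
# A well-calibrated test should come out near 5%; the fMRI paper's point is
# that common cluster-based methods can exceed this nominal rate by a wide margin.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    group_a = rng.normal(0, 1, 20)   # both groups drawn from the same distribution,
    group_b = rng.normal(0, 1, 20)   # so any 'significant' difference is a false positive
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

print(f"Observed false-positive rate: {false_positives / n_experiments:.1%}  (nominal: 5%)")
```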

A 2013 survey of cancer researchers found that approximately half of respondents had experienced at least one episode of being unable to reproduce published data. Of those who followed this up with the original authors, most were unable to determine why the work was not reproducible. Some of those original authors were (politely) described as ‘less than “collegial”’.

So what factors are at play here? Partly it is due to the personal investment in a particular field. A 2012 study of authors of significant medical studies concluded that: “Researchers are influenced by their own investment in the field, when interpreting a meta-analysis that includes their own study. Authors who published significant results are more likely to believe that a strong association exists compared with methodologists.”

This was also a factor in the study Why Most Published Research Findings Are False, which considered the way research studies are constructed. This work found that “for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”

Psychology is a discipline where there is a strong emphasis on novelty, discovery and finding something with a p-value of less than 0.05. Reproducibility is such an issue in psychology that there are large collaborative efforts to reproduce psychological studies and estimate the reproducibility of the field. The Association for Psychological Science has launched a new article type, Registered Replication Reports, which consists of “multi-lab, high-quality replications of important experiments in psychological science along with comments by the authors of the original studies”.

This is a good initiative, although there might be some resistance to this type of scrutiny. Something that was interesting from the Nature survey on reproducibility was the question of what happened when researchers attempted to publish a replication study. Note that only a few respondents had done this, possibly because incentives to publish positive replications are low and journals can be reluctant to publish negative findings. The survey found that “several respondents who had published a failed replication said that editors and reviewers demanded that they play down comparisons with the original study”.

What is causing this distortion of the research? It is the emphasis on publication of novel results in high impact journals. There is no reward for publishing null results or negative findings.

HARKing problem

The p-value came up again in a discussion about HARKing at this year’s FORCE2016 conference (HARK stands for Hypothesising After the Results are Known – a term coined in 1998).

In his presentation at FORCE2016, Eric Turner, Associate Professor at OHSU, spoke about HARKing (see this video from 37 minutes onward). The process is that the researcher conceives the study and writes the protocol up for their eyes only, with a hypothesis, and then collects lots of other data – ‘the more the merrier’ according to Turner. Then the researcher runs the study and analyses the data. If there is enough data, the researcher can try alternative methods and play with the statistics. ‘You can torture the data and it will confess to anything,’ noted Turner. At some point the p-value will come out below 0.05. Only then does the research get written up.
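
To make the ‘torture the data’ point concrete, here is a minimal simulation (a hypothetical sketch, not Turner’s analysis): a study with no real effect that records many outcome measures. The chance that at least one outcome comes out ‘significant’ at p < 0.05 grows rapidly with the number of outcomes measured.

```python
# Hypothetical sketch of HARKing / outcome fishing: simulate null studies that
# measure several outcomes and report how often at least one outcome crosses
# p < 0.05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def min_p_value(n_outcomes, n_per_group=30):
    """Run one null 'study' with two identical groups and many outcomes; return the smallest p."""
    p_values = []
    for _ in range(n_outcomes):
        treatment = rng.normal(0, 1, n_per_group)  # no true difference between groups
        control = rng.normal(0, 1, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        p_values.append(p)
    return min(p_values)

for n_outcomes in (1, 5, 10, 20):
    hits = sum(min_p_value(n_outcomes) < 0.05 for _ in range(2000))
    print(f"{n_outcomes:>2} outcomes measured -> a 'significant' finding in {hits / 2000:.0%} of null studies")
```

With 20 outcomes to fish among, well over half of these null studies yield something ‘publishable’, which is exactly why a hypothesis written down in advance matters.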

Turner noted that he was talking about the kind of research where the work is trying to confirm a hypothesis (like clinical trials). This is different to hypothesis-generating research.

In the US, clinical trials with human participants must be registered with the Food and Drug Administration (FDA), so it is possible to see the results of all trials. Turner talked about his 2008 study looking at antidepressant trials, where the journal versions of the results supported the general view that antidepressants always beat placebo. However, when they looked at the FDA versions of all of the studies of the same drugs, roughly half of the studies were positive and half were not. The published record does not reflect the reality.

The majority of the negative studies were simply not published, but 11 of the papers had been ‘spun’ from negative to positive. These papers had a median impact factor of 5 and median citations of 68 – these were highly influential articles. As Turner noted ‘HARKing is deceptively easy’.

This perspective is supported by the finding that a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. Indeed, Munafo noted that over 90% of the psychological literature finds what it set out to find. Either the research being undertaken is extraordinarily mundane, or something is wrong.

Increase in retractions

So what happens when it is discovered that something that has been published is incorrect? Journals do have a system which allows for the retraction of papers, and this is a practice which has been increasing over the past few years. Research looking at why the number of retractions has increased found that it was partly due to lower barriers to the publication of flawed articles. In addition, papers are now being retracted for issues like plagiarism, and retractions are happening more quickly.

Retraction Watch is a service which tracks retractions ‘as a window into the scientific process’. It is enlightening reading with several stories published every day.

An analysis of correction rates in the chemical literature found that the correction rate averaged about 1.4 percent for the journals examined. While there were numerous types of corrections, chemical structures, omission of relevant references, and data errors were some of the most frequent types of published corrections. Corrections are not the same as retractions, but they are significant.

There is some evidence to show that the higher the impact factor of the journal a work is published in, the higher the chance it will be retracted. A 2011 study showed a direct correlation between impact factor and the number of retractions, with the New England Journal of Medicine topping the list. This situation has led to claims that the top-ranking journals publish the least reliable science.

A study conducted earlier this year demonstrated that there are no commonly agreed definitions of academic integrity and malpractice. (I should note that amongst other findings the study found 17.9% (± 6.1%) of respondents reported having fabricated research data. This is almost 1 in 5 researchers. However there have been some strong criticisms of the methodology.)

There are questions about how retractions should be managed. In the print era it was not unheard of for library staff to put stickers into printed journals notifying readers of a retraction. But in the ‘electronic age’, when the record can simply be erased, one author asked in 2002 whether this is the right thing to do, since erasing an article entirely amends history. The Committee on Publication Ethics (COPE) does have guidelines for managing retractions, which suggest the retraction be linked to the retracted article wherever possible.

However, from a reader’s perspective, even if an article is retracted this might not be obvious. In 2003* a survey of 43 online journals found 17 had no links between the original articles and later corrections. Where hyperlinks between articles and errata were present, they showed patterns in presentation style but lacked consistency. There are some good examples, such as the Science Citation Index, but there was a lack of indexing in INSPEC and a lack of retrieval with SciFinder Scholar.

[*Note this originally said 2013, amended 2 September 2016]

Conclusion

All of this paints a pretty bleak picture. In some disciplines the pressure to publish novel results in high impact journals results in the academic record being ‘selectively curated’ at best. At worst it results in deliberate manipulation of results. And if mistakes are picked up there is no guarantee that this will be made obvious to the reader.

This all stems from the need to publish novel results in high impact journals for career progression. And when those high impact journals can be shown to be publishing a significant amount of subsequently debunked work, then the value of them as a goal for publication comes into serious question.

The next instalment in this series will look at gatekeeping in research – peer review.

Published 14 July 2016
Written by Dr Danny Kingsley
Creative Commons License