
‘Be nice to each other’ – the second Researcher to Reader conference

‘Aaaaaaaaaaargh!’ was Mark Carden’s summary of the second annual Researcher to Reader conference, along with a plea that the different players show respect to one another. My take-home messages were slightly different:

  • Publishers should embrace values of researchers & librarians and become more open, collaborative, experimental and disinterested.
  • Academic leaders and institutions should do their bit in combating the metrics focus.
  • Big Deals don’t save libraries money; what helps libraries is the ability to cancel journals.
  • The ‘green OA = subscription cancellations’ scenario is only viable in a utopian, almost fully green world.
  • There are serious issues in the supply chain of getting books to readers.
  • And copyright arrangements in academia do not help scholarship or protect authors*.

The programme for the conference included a mix of presentations, debates and workshops. The Twitter hashtag is #r2rconf.

As is inevitable in the current climate, particularly at a conference where there were quite a few Americans, the shadow of Trump was cast over the proceedings. There was much mention of the political upheaval and the place research and science has in this.

[*please see Kent Anderson’s comment at the bottom of this blog]

In the publishing corner

Time for publishers to rise to the challenge

The conference opened with an impassioned speech by Mark Allin, the President and CEO of John Wiley & Sons, who started with the statement this was “not a time for retreat, but a time for outreach and collaboration and to be bold”.

The talk was not what was expected from a large commercial publisher. Allin asked: “How can publishers act as advocates for truth and knowledge in the current political climate?” He mentioned that ProQuest has launched a displaced researchers programme in reaction to world events, saying, “it’s a start but we can play a bigger role”.

Allin asked what publishers can do to ensure research is being accessed. Referencing “The Content Trap” by Bharat Anand, Allin said: “We won’t survive as a media industry by taking content, putting it in a bottle and controlling its distribution. We will only succeed if we connect the users.” So we need to re-engineer the workflows, making them seamless and frictionless. “We should be making sure that … we are offering access to all those who want it.”

Allin raised the issue of access, noting that ResearchGate has more usage than any single publisher. He made the point that “customers don’t care if it is the version of record, and don’t care about our arcane copyright laws”. This is why people use Sci-Hub: ease of access. He said publishers should not give up protecting copyright, but must recognise its limitations and provide easy access.

Researchers are the centre of gravity – we need to help them spend more time researching and less time publishing, he said. There is a lesson here, he noted: suppliers should use “the divine discontent of the customer as their north star”. He used the example of Amazon to suggest that people working in scholarly communication need to use technology much better to connect up. “We need to experiment more, do more, fail more, be more interconnected”, he said, adding that “publishing needs open source and open standards”, which are required for transformational impact on scholarly publishing – “the Uber equivalent”.

His suggestion for addressing the challenges of these sharing platforms is to “try and make your experience better than downloading from a pirate site”, and that this would be a better response than taking the legal route and issuing takedown notices.  He asked: “Should we give up? No, but we need to recognise there are limits. We need to do more to enable access.”

Allin summed up the situation: publishing may have gone online, but how much has the internet really changed scholarly communication practices? The page is still the unit of publishing, even in digital workflows. It shouldn’t be; we should have a ‘digital first’ workflow. The question isn’t ‘what should the workflow look like?’ but ‘why hasn’t it improved?’, he said, noting that innovation is always slowed by social norms, not technology. Publishers should embrace the values of researchers and librarians and become more open, collaborative, experimental and disinterested.

So what do publishers do?

Publishers “provide quality and stability”, according to Kent Anderson, speaking on the second day (no relation to Rick Anderson) in his presentation about ‘how to cook up better results in communicating research’. Anderson is the CEO of Redlink, a company that provides publishers and libraries with analytic and usage information. He is also the founder of the blog The Scholarly Kitchen.

Anderson made the argument that “publishing is more than pushing a button”, by expanding on his blog on ‘96 things publishers do’. This talk differed from Allin’s because it focused on the contribution of publishers.

Anderson talked about the peer review process, noting that rejections help academics because they are usually about a mismatch between article and journal. He said that articles do better in the second journal they’re submitted to.

During a discussion about submission fees, Anderson noted that these “can cover the costs of peer review of rejected papers but authors hate them because they see peer review as free”. His comment that a $250 journal submission charge with one journal is justified by the fact that the target market (orthopaedic surgeons) ‘are rich’ received (rather unsurprisingly) some response from the audience via Twitter.

Anderson also made the accusation that open access publishers take lower quality articles when money gets tight. This did cause something of a backlash in the Twitter discussion, with requests for a citation for this statement and for examples of publishers (other than scam publishers) lowering standards to bring in more APC income. [ADDENDUM: Kent Anderson below says that this was not an ‘accusation’ but an ‘observation’. The Twitter challenge for ‘citation please?’ holds.]

Anderson made a couple of good points. He argued that one of the value-adds publishers provide is the training of editors. This is supported by a small survey we undertook with the research community at Cambridge last year, which revealed that 30% of the editors who responded felt they needed more training.

The library corner

The green threat

There is good reason to expect that green OA will make people and libraries cancel their subscriptions, at least in the utopian future described by Rick Anderson (no relation to Kent Anderson), Associate Dean at the University of Utah, in his talk “The Forbidden Forecast: Thinking about open access and library subscriptions”.

Anderson started by asking why, if we’re in a library funding crisis, we aren’t seeing sustained levels of unsubscription. He then explained that Big Deals don’t save libraries money. They lower the cost per article, but this is a value measure, not a cost measure. What the Big Deal did was make cancellations more difficult. Most libraries have cancelled every journal that they can without Faculty ‘burning down the library’, in order to preserve the Big Deal. This explains the persistence of subscriptions over time. The library is forced to redirect money away from other resources (books) and into the serials budget. The reason libraries can get away with this is that books are not used much.
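Anderson’s distinction between cost per article (a value measure) and total cost (what the library actually pays) can be made concrete with a toy calculation. All figures below are invented for illustration; they are not from his talk.

```python
# Toy numbers showing how a Big Deal can lower cost per article
# while increasing what the library actually spends.
individual_titles = {"spend": 500_000, "articles": 50_000}   # pre-Big-Deal, invented
big_deal = {"spend": 750_000, "articles": 150_000}           # Big Deal bundle, invented

cpa_before = individual_titles["spend"] / individual_titles["articles"]
cpa_after = big_deal["spend"] / big_deal["articles"]

print(cpa_before, cpa_after)  # 10.0 5.0 -> cost per article halves...
print(big_deal["spend"] - individual_titles["spend"])  # 250000 -> ...while spend rises
```

Cost per article halves, yet the library’s bill rises by half: better “value”, worse cost.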

The wolf seems to be well and truly upon us. There have been lots of cancellations and reduction of library budgets in the USA (a claim supported by a long list of examples). The number of cancellations grows as the money being siphoned off book budgets runs out.

Anderson noted that the emergence of new gold OA journals doesn’t help libraries; it does nothing to relieve the journals emergency. They just add to the list of costs, because each is a unique set of content. What does help libraries is the ability to cancel journals. Professor Syun Tutiya, Librarian Emeritus at Chiba University, noted in a separate session that if Japan were to flip from a fully subscription model to APCs it would cost about the same, which would solve the problem.

Anderson said that there is an argument that “there is no evidence that green OA cancels journals” (I should note that I am well and truly in this camp, see my argument). Anderson’s counter is that this amounts to saying the future hasn’t happened yet: the implicit argument is that because green OA has not caused cancellations so far, it won’t do so in the future.

Library money is taxpayers’ money – it is not always going to flow. There is much greater scrutiny of journal big deals as budgets shrink.

Anderson argued that green open access provides inconsistent and delayed access to copies which aren’t always the version of record, and this has protected subscriptions. He noted that Green OA is dependent on subscription journals, which is “ironic given that it also undermines them”. You can’t make something completely & freely available without undermining the commercial model for that thing, Anderson argued.

So, Anderson said, given that green OA exists and has for years, and has not had any impact on subscriptions, what would need to happen for this to occur? Anderson then described two subscription scenarios. In the low cancellation scenario (the current situation), green open access is provided sporadically and unreliably: access is delayed by a year or so, and the versions available for free are somewhat inferior.

The high cancellation scenario is where there is high uptake of green OA because there are funder requirements and the version is close to the final one. Anderson argued that the “OA advocates” prefer this scenario and they “have not thought through the process”. If the cost is low enough of finding which journals have OA versions and the free versions are good enough, he said, subscriptions will be cancelled. The black and white version of Anderson’s future is: “If green OA works then subscriptions fail, and the reverse is true”.

Not surprisingly, I disagreed with Anderson’s argument, on several points. To start, a certain percentage of the work would need to be available before a subscription could be cancelled. Professor Syun Tutiya, Librarian Emeritus at Chiba University, noted in a different discussion that in Japan only 6.9% of material is available green OA in repositories, and argued that institutional repositories are good for lots of things but not OA. Certainly in the UK, with the strongest open access policies in the world, we are not capturing anything like the full output. And the UK itself produces only 6% of the world’s research output, so we are certainly a very long way from this scenario.

In addition, according to work undertaken by Michael Jubb in 2015, most of the green open access material is available in places other than institutional repositories, such as ResearchGate and Sci-Hub. Do librarians really feel comfortable cancelling subscriptions on the basis of something being available from a proprietary or illegal source?

The researcher perspective

Stephen Curry, Professor of Structural Biology, Imperial College London, spoke about “Zen and the Art of Research Assessment”. He started by asking why people become researchers and gave several reasons: to understand the world, change the world, earn a living and be remembered. He then asked how they do it. The answer is to publish in high impact journals and bring in grant money. But this means it is easy to lose sight of the original motivations, which are easier to achieve if we are in an open world.

In discussing “The Metric Tide”, the report published in 2015 which looked into the assessment of research, Curry noted that metrics and league tables aren’t without value. They do help to rank football teams, for example. But university league tables are less useful: they aggregate many things and so are too crude, even though they incorporate valuable information.

Are we as smart as we think we are, he asked, if we subject ourselves to such crude metrics of achievement? The limitations of research metrics have been talked about a lot, but they need to be better known. Often they are too precise: was Caltech really better than the University of Oxford last year but worse this year?

But numbers can be seductive. Researchers want to focus on research without pressure from metrics, yet many early career researchers and PhD students increasingly fret about the publications hierarchy. Curry asked: “On your death bed will you be worrying about your H-index?”

There is a greater pressure to publish rather than pressure to do good science. We should all take responsibility to change this culture. Assessing research based on outputs is creating perverse incentives. It’s the content of each paper that matters, not the name of the journal.

In terms of solutions, Curry suggested it would be better to put higher education institutions in 5% brackets rather than ranking them 1 to n in league tables. He called on academic leaders and institutions to do their bit in combating the metrics focus, and for much wider adoption of the Declaration on Research Assessment (DORA). Curry’s own institution, Imperial College London, has signed it recently.
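As a sketch of what 5% brackets might look like in practice (the bracketing rule below is my own assumption, not something Curry specified):

```python
import math

def bracket(rank: int, total: int) -> int:
    """Map a 1-based league-table rank to a 5% bracket (1 = top 5%, 20 = bottom 5%)."""
    return math.ceil(20 * rank / total)

# With 130 institutions, ranks 1-6 all land in bracket 1, so small and
# probably meaningless differences in rank no longer separate them.
print(bracket(1, 130), bracket(6, 130), bracket(130, 130))  # 1 1 20
```

The point of the coarser scale is exactly the loss of spurious precision: a move from rank 3 to rank 5 disappears, while a genuine slide down the table still shows.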

Curry argued that ‘indicators’ would be a more appropriate term than ‘metrics’ in research assessment, because we’re looking at proxies. The term ‘metrics’ implies you know what you are measuring. Certainly metrics can inform, but they cannot replace judgement. Users and providers must be transparent.

Another solution is preprints, which shift attention from container to content, because readers use the abstract rather than the journal name to decide which papers to read. Note that this idea is starting to become more mainstream, with the NIH’s move towards the end of last year, “Including Preprints and Interim Research Products in NIH Applications and Reports”.

Copyright discussion

I sat on a panel to discuss copyright with a funder – Mark Thorley, Head of Science Information, Natural Environment Research Council; a lawyer – Alexander Ross, Partner, Wiggin LLP; and a publisher – Dr Robert Harington, Associate Executive Director, American Mathematical Society.

My argument** was that selling or giving copyright to a third party that has a purely commercial interest and did not contribute to the creation of the work does not protect originators. That was the case in the Kookaburra song example. It is also the case in academic publishing. The copyright transfer form/publisher agreement that authors sign usually means that the authors retain their moral right to be named as the authors of the work, but they sign away the rights to make any money out of it.

I argued that publishers don’t need to hold the copyright to ensure commercial viability. They just need first exclusive publishing rights. We really need to sit down and look at how copyright is being used in the academic sphere – who does it protect? Not the originators of the work.

Judging by the mood in the room, the debate could have gone on for considerably longer. There is still a lot of meat on that bone. (**See the end of this blog for details of my argument).

The intermediary corner

The problem of getting books to readers

There are serious issues in the supply chain of getting books to readers, according to Dr Michael Jubb, Independent Consultant and Richard Fisher from Something Understood Scholarly Communication.

The problems are multi-pronged. For a start, discoverability of books is “disastrous” due to completely different metadata standards in the supply chain: ONIX is used for the retail trade and MARC is the standard for libraries. Neither has detailed information about authors, the contents of chapters and sections, or reviews and comments.

There are also a multitude of channels for getting books to libraries. The past few years have seen the involvement of several different kinds of intermediaries – metadata suppliers, sales agents, wholesalers, aggregators, distributors etc – who hold digital versions of books that can be supplied through different types of book platform. Libraries have some titles on multiple platforms but others available on only one platform.

There are also huge challenges around discoverability and e-commerce systems, which are “too bitty”. The most important change in books has been Amazon; publisher e-commerce, however, “has a long way to go before it is anything like as good as Amazon”.

Fisher also reminded the group that there are far more books published each year than there are journals – it’s a more complex world. He noted that about 215 [NOTE: amended from original 250 in response to Richard Fisher’s comment below] different imprints were used by British historians in the last REF. Many of these publishers are very small with very small margins.

Jubb and Fisher both emphasised readers’ strong preference for print, which implies that much more work is needed on the ebook user experience. There are ‘huge tensions’ between reader preference (print) and the drive towards e-book acquisition models at libraries.

The situation is probably best summed up in the statement that “no-one in the industry has a good handle on what works best”.

Providing efficient access management

Current access control is not functional in the world we live in today. If you ask users to jump through hoops to get access off campus then your whole system defeats its purpose. That was the central argument of Tasha Mellins-Cohen, the Director of Product Development, HighWire Press when she spoke about the need to improve access control.

Mellins-Cohen started with the comment “You have one identity but lots of identifiers”, noting that multiple institutional affiliations cause problems. She described the process needed to give access to an article from a library in terms of authentication – which, as an aside, clearly shows why researchers often prefer to use Sci-Hub.

She described an initiative called CASA (Campus Activated Subscriber Access), which records devices that have accessed content on campus through authenticated IP ranges and then allows access off campus on the same device without using a proxy. This is designed to use more modern authentication. There will be “more information coming out about CASA in the next few months”.

Mellins-Cohen noted that tagging something as ‘free’ in the metadata improves Google indexing – publishers need to do more of this at article level. This comment prompted a call for publishers to make information about sharing more accessible to authors through How Can I Share It?

Mellins-Cohen expressed some concern that some of the ideas coming out of RA21 (Resource Access for the 21st Century), an STM project to explore alternatives to IP authentication, will raise barriers to access for researchers.

Summary

It is always interesting to have the mix of publishers, intermediaries, librarians and others in the scholarly communication supply chain together at a conference such as this. It is rare to have the conversations between different stakeholders across the divide. In his summary of the event, Mark Carden noted the tension in the scholarly communication world, saying that we do need a lively debate but also need to show respect for one another.

So while the keynote started promisingly, and said all the things we would like to hear from the publishing industry, the reality is that we are not there yet. And this underlines the whole problem. This interweb thingy didn’t happen last week. What has actually happened to update the publishing industry in the last 20 years? Very little, it seems. However, it is not all bad news. Things to watch out for in the near future include plans for micro-payments for individual access to articles, according to Mark Allin, and the highly promising Campus Activated Subscriber Access system.

Danny Kingsley attended the Researcher to Reader conference thanks to the support of the Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.

Published 27 February 2017
Written by Dr Danny Kingsley
Creative Commons License

Copyright case study

In my presentation, I spoke about the children’s campfire song “Kookaburra sits in the old gum tree”, which was written by Melbourne schoolteacher Marion Sinclair in 1932 and first aired in public two years later at a Girl Guides jamboree in Frankston. Sinclair had to be prompted to register the song with APRA (Australasian Performing Right Association). That was in 1975; the song had already been around for more than 40 years, but she had never expressed any great proprietary interest in it.

In 1981 the Men at Work song “Down Under” made No. 1 in Australia. The song then topped the UK, Canada, Ireland, Denmark and New Zealand charts in 1982 and hit No.1 in the US in January 1983. It sold two million copies in the US alone.  When Australia won the America’s Cup in 1983 Down Under was played constantly. It seems extremely unlikely that Marion Sinclair did not hear this song. (At the conference, three people self-identified as never having heard the song when a sample of the song was played.)

Marion Sinclair died in 1988 and the song went to her estate. Norman Lurie, managing director of Larrikin Music Publishing, bought the publishing rights from her estate in 1990 for just $6100. He started tracking down all the sheet music that had been printed all over the world, because Kookaburra had been used in books for people learning flute and recorder.

In 2007 TV show Spicks and Specks had a children’s music themed episode where the group were played “Down Under” and asked which Australian nursery rhyme the flute riff was based on. Eventually they picked Kookaburra, all apparently genuinely surprised when the link between the songs was pointed out. There is a comparison between the music pieces.

Two years later Larrikin Music filed a lawsuit, initially wanting 60% of Down Under’s profits. In February 2010 the court found against Men at Work, who appealed and eventually lost. The judge ordered Men at Work’s recording company, EMI Songs Australia, and songwriters Colin Hay and Ron Strykert to pay 5% of royalties earned from the song since 2002, and from its future earnings.

In the end, Larrikin won around $100,000, although legal fees on both sides have been estimated at upwards of $4.5 million, with royalties for the song frozen during the case.

Gregory Ham was the flautist in the band who played the riff. He did not write Down Under, and was devastated by the high profile court case and his role in proceedings. He reportedly fell back into alcohol abuse and was quoted as saying: “I’m terribly disappointed that’s the way I’m going to be remembered — for copying something.” Ham died of a heart attack in April 2012 in his Carlton North home, aged 58, with friends saying the lawsuit was haunting him.

This case, I argued, exemplifies everything that is wrong with copyright.

Could the HEFCE policy be a Trojan Horse for gold OA?

The HEFCE Policy for open access in the post-2014 Research Excellence Framework kicks in 9 weeks from now.

The policy states that, to be eligible for submission to the post-2014 REF, authors’ final peer-reviewed manuscripts of journal articles and conference proceedings with an ISSN must have been deposited in an institutional or subject repository on acceptance for publication. Deposited material should be discoverable, and free to read and download, for anyone with an internet connection.

The goal of the policy is to ensure that publicly funded (by HEFCE) research is publicly available. The means HEFCE have chosen to favour is the green route – by putting the AAM into a repository. This does not involve any payment to the publishers. The timing of the policy – at acceptance – is to give us the best chance of obtaining the author’s accepted manuscript (AAM) before it is deleted, forgotten or lost by the author.

Universities across the UK have been preparing. Cambridge has had the ‘Accepted for publication? Send us your manuscript’ campaign running since May 2014, with a very simple and well-liked interface allowing researchers to submit their work. The Open Access team then deposits the item, checks funding and publisher policies, and organises payment for open access publication if required.

To give an idea of the numbers we are dealing with at Cambridge, during 2015 the Open Access team deposited 2553 articles into our repository Apollo.

Compliance levels

We have been reporting to Wellcome Trust and the RCUK over the past few years to indicate compliance levels with their policies. However the ‘compliance level’ for the HEFCE policy is a slippery concept. For a start, the policy has not yet come into force. Another complicating factor is the long term nature of the ‘reporting’. We will not truly know how compliant we have been until the time comes to submit to REF – whenever that will be (currently it seems 2021).

At Cambridge we have been working on the assumption that, because we do not know which outputs we will claim, we should collect all eligible articles. However, the number of deposited articles the Open Access team received over the past year represents approximately 30% of the full eligible output of the University. This might seem concerning in some ways, but it must be remembered that each researcher in the University will only be reporting four research outputs for the REF.

There are some articles that are obvious contenders for REF. By concentrating on researchers who are publishing in very high impact journals we have been trying to catch those articles we are extremely likely to claim.

During the course of 2015 we discovered 93 papers published in Nature, Science, Cell, The Lancet and PNAS. 33% of these papers were already HEFCE compliant. Of the remaining non-compliant papers, we contacted 47 authors, made them aware of the HEFCE open access policy, and invited them to submit their accepted manuscript to the Open Access Service. Less than 40% of the authors contacted responded with their accepted manuscript. Therefore, even after direct intervention, only 49% of these papers were HEFCE compliant, which means that more than half of all eligible papers published in Nature, Science, Cell, The Lancet and PNAS during this period would not have been compliant had the policy been in place.
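The arithmetic behind these figures can be reconstructed roughly as follows. The exact number of authors who responded is not given above (only “less than 40%” of the 47 contacted), so the 15 responses used here is an assumption chosen to be consistent with the reported 49%.

```python
papers = 93                               # papers in Nature, Science, Cell, The Lancet, PNAS
already_compliant = round(0.33 * papers)  # 33% already HEFCE compliant -> 31
contacted = 47
responses = 15                            # assumed: ~32% of contacted, i.e. "less than 40%"

final_rate = round(100 * (already_compliant + responses) / papers)
print(final_rate)  # 49
```

On these assumptions 46 of 93 papers end up compliant, matching the 49% quoted above.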

The lack of engagement by members of the academic community with this process is a serious concern – and potentially due to four reasons:

  • Lack of awareness of the policy
  • Putting it off until the policy is in place
  • Deliberately choosing not to submit a work because it is not considered important enough or they do not consider their contribution to be significant enough
  • Some form of conscientious objection to the policy

We should note that the third reason is a matter of some concern to the University as it is not the researcher who decides which articles are put forward for REF. In addition, the University is interested in having a high overall level of compliance for REF as it considers making the research output of the institution available to be important.

Temporary reprieve

Cambridge is no island when it comes to facing significant challenges in capturing all outputs in preparation for HEFCE’s policy. While the highly devolved nature of the institution and the sheer volume of publications may be a problem unique to Cambridge and Oxford, other institutions are still developing the technology they intend to use or are facing staffing issues.

In a concession to serious concern across the sector about the ability to meet the deadline, on 24 July 2015 HEFCE announced that there was a temporary modification to the policy. They now allow research outputs to be made open access up to three months after publication until at least April 2017 (and until such time that the systems to support deposit at acceptance are in place).

This means for the first year of the policy we have a small window after publication to locate articles, determine if they are in our repositories, and if not chase the authors for the Author’s Accepted Manuscript.
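Per article, the temporary policy reduces to a simple date calculation. The sketch below approximates ‘three months’ as 91 days, which is my assumption rather than HEFCE’s wording:

```python
from datetime import date, timedelta

def deposit_deadline(published: date) -> date:
    """Latest date to deposit the AAM under the temporary three-month window."""
    return published + timedelta(days=91)  # 'three months' approximated as 91 days

# A hypothetical article published on 1 April 2016 must be deposited by 1 July 2016.
print(deposit_deadline(date(2016, 4, 1)))  # 2016-07-01
```

The hard part, as the next paragraph explains, is not the calculation but learning the publication date in time.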

The trick is knowing that an article has been published. At Cambridge our ‘best bet’ is to use Symplectic, which scrapes various aggregating sources such as Scopus. However, Symplectic is only as timely as its sources: there is no guarantee that a given article will appear in Symplectic within three months of publication. And even if it does, we have already discussed the low engagement of the research community with approaches from the Open Access team for AAMs.

Subject based repositories

So far this blog has been talking about using institutional repositories for compliance. But the policy specifically states: “The output must have been deposited in an institutional repository, a repository service shared between multiple institutions, or a subject repository”.

The oldest, most established subject repository is arXiv.org, and it makes sense for us to consider using arXiv as part of Cambridge’s compliance strategy. After all, some areas of high energy physics, most of computer science and much of mathematics use arXiv as a means to share their research papers. In 2014, the number of articles deposited into arXiv.org and subsequently picked up in Symplectic and approved by researchers was 582 – approximately 6.5% of Cambridge’s total eligible articles.

If we are able to claim these articles for HEFCE compliance without any behaviour change requirement from our academic staff then this is an ideal situation. But how do we actually do this? There is a footnote to the HEFCE statement above which says that: “Individuals depositing their outputs in a subject repository are advised to ensure that their chosen repository meets the requirements set out in this policy.” And this is the crunch point. arXiv does not currently identify which version of the work has been deposited, nor does it record the acceptance date of the work. Because of this we are currently not able to simply use the work being uploaded to arXiv.

There is work underway to look at this possibility and at what would be required to allow us to use subject-based repositories as a means of compliance. HEFCE themselves have identified, under ‘Further areas of work’, that “measures to support compliance in subject repositories” is an area of uncertainty, and they will work with the community to address this.

Alternative approach?

It is possibly a good moment to take a step back from the minutiae of the means and the timing of the HEFCE policy and focus on the goal that publicly funded research is publicly available. We are in a complex policy environment. HEFCE affects all researchers but many researchers are also funded through COAF or the RCUK with their respective (gold leaning) Open Access policies.

Of the HEFCE-eligible articles submitted to the Open Access team in 2015, after working through all the different funder requirements, the split was 44% gold open access and 56% green open access. Of the gold payments, the split is approximately 74% for hybrid journals and 26% for fully open access journals. That said, the three journals in which we published the most – PLOS ONE, Nature Communications and Scientific Reports – are fully open access journals, with APCs of $1495, $5200 and $1495 respectively.
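To see what those percentages mean in combination, here is a quick calculation over a hypothetical batch of 1,000 submitted articles (the actual 2015 total is not stated in this paragraph):

```python
total = 1000                   # hypothetical number of submitted articles
gold = round(0.44 * total)     # 44% gold OA -> 440
green = total - gold           # 56% green OA -> 560
hybrid = round(0.74 * gold)    # 74% of gold payments go to hybrid journals -> 326
fully_oa = gold - hybrid       # 26% go to fully open access journals -> 114

print(hybrid / total)  # 0.326 -> roughly a third of all articles involve a hybrid APC
```

So although green is the larger route overall, hybrid journals still account for about a third of everything the team handles.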

A highly relevant question is – outside of the efforts by our Open Access compliance teams, how much Cambridge research is being made open access anyway?

Open access articles

The Web of Science (WoS) allows a filter on ‘Open Access’. It does not appear to list articles made open access on a hybrid basis, only picking up fully open access journals. While these are not definitive numbers, they give us some idea of the scale we are looking at. For 2014, WoS gives a figure of 981 articles published open access by a University of Cambridge author in a fully open access journal.

Under the Springer Compact, to which many institutions (including Cambridge) have signed up, all articles published by that research community will now be made open access. In 2014 the Open Access Service paid for 21 articles to be made open access; in the same period the institution as a whole published 695 articles with Springer. (Note that in 2015 we paid 51 Springer APCs.) This means that, for the cost of the Springer subscription plus our APC payments for the previous year, a good proportion of Cambridge articles will be published open access.

These two sets of numbers cover only articles published either in fully open access journals or with Springer. They do not account for articles where the University (or a Department or individual) pays an APC to make an article available in a hybrid (non-Springer) journal. The upshot: a significant proportion of Cambridge research is published open access.

Skip the AAM on acceptance part?

So what does this published open access research mean for compliance with the HEFCE policy? The updated HEFCE policy has addressed this:

“… we have decided to introduce an exception to the deposit requirements for outputs published via the gold route. This may be used in cases where depositing the output on acceptance is not felt to deliver significant additional benefit. We would strongly encourage these outputs to be deposited as soon as possible after publication, ideally via automated arrangements, but this will not be a requirement of the policy.”

This makes sense from an administrative perspective where the article appears in a journal that places an embargo on making the AAM available, forcing the University to pay an APC to make the work Open Access to meet RCUK requirements. It avoids the palaver of:

  • obtaining the AAM from the author
  • depositing it into the repository
  • checking to see when the article has been published
  • updating the details and
  • either setting the embargo on the AAM or changing the attachment in the record to the Open Access final published version

However, a journal with an embargo period on making the AAM available that thereby forces an APC payment is in fact almost the definition of a hybrid journal. We know there are issues with hybrid: the extra expense, double dipping, and the higher APC charges for hybrid over fully Open Access journals. Putting these aside, what this HEFCE policy change means is that publishers have effectively shifted the HEFCE policy from a green open access policy to a gold one for a significant proportion of UK research. This is a deliberate tactic, alongside the unsubstantiated campaign claiming that green Open Access poses a major threat to scholarly publishing and that embargoes should therefore be even longer.

We already face the problem that hybrid journals are pushing green open access towards being ‘code’ for a 12-month delay. This is the beginning of a very slippery slope. We have been outplayed. It really is time for the RCUK and Wellcome Trust to stop paying for hybrid Open Access.

But I digress.

The cons

The message is confusing enough: three sets of policies, with three different requirements for the timing and the means of making work compliant and available. We are trying to make it as simple as possible for researchers – with limited success.

The move to widespread Open Access in the UK is a huge shift for the research community and those who support them. It would be very difficult to argue against the statement that publicly funded research should be publicly available, but the devil is very much in the detail.

It would be an incredible shame if the HEFCE policy were hijacked into a partial gold OA policy, but as administrators we are drowning in compliance. There needs to be a broad discussion across the funders to address the conflicting compliance requirements and the potentially negative effect these policies are having on the future of open scholarly publishing.

We welcome the opportunity to discuss these issues with HEFCE, Wellcome Trust and the RCUK. There’s plenty to talk about.

Published 25 January 2016
Written by Dr Danny Kingsley
Creative Commons License