
10 years on and where are we at? COASP 2018

Last week, the 10th Conference on Open Access Scholarly Publishing (COASP), run by the Open Access Scholarly Publishers Association (OASPA), was held in Vienna. Much was covered over the two and a half days. A decade in, this conference considered the state of the open access (OA) movement, discussed different approaches to OA, examined inequity and the infrastructure required to address it, and argued about language. Apologies – this is a long blog.

Fracturing of the ‘OA movement’?

In an early discussion, Paul Peters, OASPA President and CEO of Hindawi, noted that, much like the organic food or vegan movements, the OA ‘movement’ is not united in purpose. When what appear to be ‘fringe’ groups begin, it is easy to assume that all involved take a similar perspective. But the reasons for people’s involvement and the end points they are aiming for can be vastly different. Paul noted that this can be an issue for OASPA because there is not necessarily one goal shared by all the members, and he posed the question of what this might mean for the organisation.

It also raises questions about approaches to ‘solving’ OA issues. Many different approaches were discussed at the event.

Unbundling

The concept of ‘unbundling’ the costs associated with publishing, and offering the components for people to engage with on an as-needed basis, was raised several times. This points to the concept put forward last year by Toby Green of the OECD. It also triggered a Twitter conversation about the analogy with the airline industry (and how poorly airlines treat their customers).

If the scholarly journal were unbundled, different players could deliver its functions. Kathleen Shearer, Executive Director of COAR, noted that not all functions of scholarly publishing need to take place on the same platform. She suggested next-generation repositories as one of the options.

Jean-Claude Guedon provided several memorable quotes from the event, the most pertinent being “We don’t need a ‘Version of Record’. We need a ‘record of versions’”. Kirsten Ratan of the Coko Foundation agreed in her talk on infrastructure, stating “we publish like it’s 1999”. The Version of Record is treated as the one that matters, and it is static in time. But it is not 1999, she noted, and we need to consider the full body of work in its entirety.

After all, it was observed elsewhere at the conference, nothing radical has changed in the format of publications over the past 25 years. We are simply not using the potential the internet offers. Kathleen quoted Einstein: “You cannot solve a problem from the same consciousness that created it. You must learn to see the world anew”.

New subscription models

Wilma van Wezenbeek, of TU Delft and Programme Manager for Open Access at VSNU, discussed the approach to negotiations taken in The Netherlands. Their argument is that, comparing how much is spent per article under the toll-access system with what it would cost to have everything published OA, enough money already exists in the system. VSNU are being pragmatic, focusing on big publishers and going for gold OA (to avoid the duplication of journals). She also noted how important it is for libraries to have university presidents at the negotiating table. Her parting advice on negotiations was to hold your nerve, stay true to the principles and don’t waver.

This approach does not include smaller publishers and completely ignores fully gold publishers, an observation made a few times at the conference. An alternative approach, argued Kamran Naim, Director of Partnerships & Initiatives at Annual Reviews, is collective action. In his talk ‘Transitioning Subscriptions to OA Funding: How libraries can Subscribe to Open’ he asked what would be required to flip subscription spending into funding for OA publication (instead of APCs). The challenge with this idea is that it requires libraries to continue subscribing even when the material is OA and they no longer have to. Another problem is that ‘subscribing’ to OA material can become a procurement challenge: the cost can be classified as a ‘donation’, which some library budgets do not allow. The suggestion, therefore, is that libraries will be offered the chance to subscribe to select journals and receive 5% off the subscription cost. The plan is to roll the project out to libraries in 2019 for 2020 models.

Study – downloading habits when material is OA

A very interesting study was presented by Salvatore Mele and Alexander Kohls from CERN and SCOAP3. Entitled ‘Preprints vs traditional journals vs Open Access journals – What do scientists download?’, the study compared downloads of the same scientific artefact as a preprint on arXiv and as a published article on a (flipped) journal platform.

Their findings, drawn from arXiv, Elsevier and SpringerNature statistics, showed significant use of the arXiv version during the first six months (when arXiv holds the only available version of the work), which drops off dramatically after the work is published (a point identified as when the DOI is minted).

They also compared downloads from 2013 – before the journals flipped to gold under the SCOAP3 arrangement – with those from 2016, when the journals were open access. The pattern over time was similar, but accesses in 2016 were higher overall, and dramatically higher in the first three months after the DOI was minted.

 

The final slide demonstrated that having recent open access content was also driving up downloads of older works in the non-open access backfiles from the publisher platforms.

This work is not published “because we have day jobs”. I have included my poor images of the slides in this blog and will link to the slides when they are made available.

Nostalgia

Being the 10th OASPA conference, there was some reminiscing throughout the presentations. In a keynote reflection on the Open Access movement, Rebecca Kennison from K|N Consultants noted several myths about OA publishing that existed 10 years ago and still persist. Rebecca discussed “library wishful thinking” when it came to OA: thinking OA would solve the serials crisis, that practice would change ‘if only the academic community were aware’, and that institutional repositories and mandates would deliver OA. (Certainly one of my own observations over the 16 or so years I have been involved in OA is that there is always a palpable sense of glee at OA events when ‘real’ researchers bother to turn up.)

David Prosser, Executive Director of Research Libraries UK, was outed as the architect of the ‘hybrid’ option, which he articulated in his 2003 paper “From here to there: a proposed mechanism for transforming journals from closed to open access“. David defended himself by noting that the whole concept did not work because it was proposed with an assumption about the “sincerity of the industry to engage”.

This made me consider the presentation I gave to another 10th anniversary conference this year – Repository Fringe at Edinburgh. In 1990 Stevan Harnad wrote about ‘Scholarly Skywriting’ and described the obstacles to the ‘revolution’ as including ‘old ways of thinking about scientific communication and publication’, ‘the current intellectual level of discussion on electronic networks is anything but inspiring’, ‘plagiarism’, ‘copyright’ and ‘academic credit and advancement’, amongst others. Little appears to have changed in the past 28 years.

The more perceptive readers will note how long ago these dates are. This OA palaver has been going on for decades. And it seems even longer because, as Guido Blechl from the University of Vienna noted, “open access time is shorter than normal time because it moves so fast”.

But none of this wishful thinking has come to fruition. Rebecca asked “what shift do we need in our thinking?” Well, in many ways that shift has landed in the form of Plan S. See the related blog for the discussions about Plan S that happened at the conference.

Language matters

Rebecca also mentioned “our own special language”, which is, she observed, a barrier to entry to the discussion. Indeed, language issues came up often over the course of the conference.

There were a few references to the problems with the terms ‘green’ and ‘gold’, and specifically gold. This has long been a personal bugbear of mine because of the nonsensical nature of the labels, and the associations of ‘the best’ and ‘expensive’ with gold. The commercial publishing sector has co-opted the term ‘gold’ to mean ‘pay to publish’. Of course all *hybrid* journals charge an APC, and more articles are published where an APC has been paid than not, which is possibly why the campaign has been successful – see the Twitter discussion here. But it is inaccurate. In truth, ‘gold’ simply means the work is open access from the point of publication, and more fully gold open access journals charge no APC than charge one.

There was also concern raised about the term ‘Open Science’ which, while in Europe it is an inclusive term covering all types of research, is not perceived this way in other parts of the world. There was strong support amongst the group for using the term ‘Open Scholarship’ as an alternative. This also brought up a discussion about using the term ‘publication’ rather than the more inclusive research ‘outputs’ or ‘works’, which encompass publishing options beyond the concept of a book or a journal.

Inequity

“Inclusivity is not optional! We need a global (information/publishing) system!” was the rallying cry of Kathleen Shearer in her talk.

For many in the OA space, equity of access to research outputs lies at the centre of the end goal. It is clear that knowledge published in academic journals is inaccessible to the majority of researchers in low- and middle-income countries. But if we move to a fully gold environment, with the potential to increase the cost of author participation in publishing, then we may simply have reversed the problem: instead of being unable to read research, academics in the Global South will be excluded from participating in the academic discussion.

There was a discussion about the change in global publishing output since 2007, which reflects a big increase in output from China and Brazil, but otherwise shows that output is uneven and not inclusive.

One possible solution to this issue would be for open access publishers to make it clearer that they offer waivers for authors who are unable to pay the APC. There was discussion about the question ‘what form should OA publishing take in Eastern and Southern Europe?’. The answer was that it should be inexpensive and use infrastructure that is publicly owned and cannot be sold.

Infrastructure

Ahhhh, infrastructure. We are working within a fast-consolidating environment. Elsevier continues to buy up companies to ensure it has representation across all aspects of the scholarly ecosystem, and Digital Science is developing and acquiring new services to a similar end. See ‘Virtual Suites of tools/platforms supported by the same funder’ and ‘Vertical integration resulting from Elsevier’s acquisitions’. These are obvious examples, but Clarivate Analytics has recently acquired Publons, and ProQuest has absorbed Ex Libris, which has in turn bought Research Research and plans to create Esploro, a cloud-based research services platform. Consolidation is prevalent across the sector.

This raises some serious concerns for the concept of ‘openness’. In his excellent round-up, Geoff Bilder, Director of Strategic Initiatives at Crossref, commented that we are looking in the rear-view mirror at things that have already happened and not noticing what is in front of us. While we might end up in a situation where publications are open access, publications do not capture the discussions through which their authors reached those conclusions. The REAL communication happens in coffee shops and online discussions, and if those conversations take place on proprietary systems (such as Slack, for example), they are hidden from us.

Who owns the information about what is being researched, and the data behind it, when the scholarly infrastructure is held within a commercial ecosystem? “Is there an opportunity to reimagine?” asked Kirsten Ratan, referencing SPARC’s action plan on ‘Securing community controlled infrastructure’. “In scholarly communication,” she summarised, “we have accepted the limitations of the infrastructure with a learned helplessness. It’s time that these days are over.”

There are multiple projects currently in place around the world to collectively manage and support infrastructure. Kathleen Shearer described several projects:

  • Consortia negotiations such as OA2020 and SCOAP3
  • The Global Sustainability Coalition for Open Science Services (SCOSS) is an international group of leading academic and advocacy organisations that came together in 2016 to help secure the vital infrastructure underpinning Open Access and Open Science. SPARC Europe is a founding member.
  • The 2.5% commitment is a call that “Every academic library should commit to contribute 2.5% of its total budget to support the common infrastructure needed to create the open scholarly commons”. This is primarily a US and Canadian discussion.
  • OA membership models
  • APC funds

There are actually a couple of other projects that were not mentioned at COASP 2018. In 2017, several major funding organisations met and came to a strong consensus that core data resources for the life sciences should be supported through a coordinated international effort, both to ensure long-term sustainability and to align funding appropriately with scientific impact. The ELIXIR Core Data Resources project is identifying a set of European data resources that are of fundamental importance to the wider life-science community and the long-term preservation of biological data.

OA Monographs

The final day of the event looked at OA monographs. Having come from a British Academy event on OA monographs the week before (see the Twitter discussion), I had this debate top of mind.

Sven Fund, who is both the Managing Director of Knowledge Unlatched and of fullstopp, which is running a consultation on OA monographs for Universities UK, spoke about the OA monograph market. He noted that books are important, and not just because “people like to decorate their living rooms with them”. But he suggested that rather than just adding a few hyperlinks, we should be using the technology available to us with books. It has been the smaller publishers who have been innovating; large publishers have not been involved, which has limited the level of interest.

The OA book market is still small, with only 12,794 books and chapters listed in the Directory of OA Books (DOAB) compared to over 3 million articles listed in the Directory of OA Journals (DOAJ). But growth in OA books remains strong even as the OA journal market matures. Libraries are the bottleneck, Sven argued, because they need to change the funding model significantly. There has been 10-15 years of discussion and now is the time to act: libraries need to make a significant commitment that X% of their budget goes into open access.

There are also problems with demonstrating proof of impact of the OA book. Sven argued we need transparency and simplicity in the market, and said that no-one is doing an analysis of which books should be OA or not based on impact and usage. This needs to happen.

Sven said that royalties are important to authors – not because of the money but because they show how much the work has been read. For this reason he argued we need publishers to share their usage data for specific OA titles with the author. As an aside, it seems extraordinary that publishers are not already doing this, and I asked Sven why they don’t. He replied that it seems that ‘data is the new gold’, and therefore they do not share the information: download information about open books is often protected because of the risk of providing information that gives competitors a commercial advantage.

But Sven also noted there needs to be context in the numbers. Libraries in the past have done a simple analysis of cost per download without taking into consideration the differences between topics. Of course some areas cost more per download than others, he said. There is also the risk that if you share this data you might have a situation where a £10,000 book has only a few downloads, which ‘looks bad’.

The profit imperative

There were some tensions at the meeting about profits. A question that arose early in the first panel discussion was: “Should we be ashamed as commercial publishers for making money?”. One response was that if you don’t make money you are not a commercial publisher. But the same person noted that the ‘anti-commercial sentiment’ in these discussions indicates that something is wrong.

A secondary observation was that open access publishers are doing a good job “while the current incentive systems are in place”. This of course points to the academic reward system controlling the behaviour of all players in this game.

As is always the case at OA meetings, the Journal Impact Factor was never far away, although Paul Peters noted that the JIF was partly responsible for the success of OA journals: PLOS ONE took off when it received an impact factor. It was noted in that discussion that OA journals obtaining and increasing their JIF is ‘not proof of success, it’s proof of adaptation’.

The final talk was from Geoff Bilder. One participant described his talk on Twitter as “the best part of the publishing conference, where Geoff Bilder tells us everything that we’re doing that’s wrong”. Geoff noted that throughout the conference people had used some terms fairly loosely, including ‘commercial’ and ‘for profit’. He noted that profit doesn’t necessarily mean taking money out of the system; often profit is channelled back into the business.

In the end

In all it was an interesting and thought-provoking conference. Possibly the most memorable part was the 12 flights of stairs between the lecture rooms and the breakout coffee and lunch space. This was the first OA conference I have attended where participants improved their cardiovascular fitness as a side benefit of the event.

The Twitter hashtag is #COASP10

Published 24 September 2018
Written by Dr Danny Kingsley

Most Plan S principles are not contentious

This is a sister blog to “Relax everyone, Plan S is just the beginning of the discussion” and provides the ‘supplementary material’ to that blog. It discusses the points in the Plan S principles that are not particularly contentious.

At the end of this blog is a list of links and commentary to date on Plan S.

Not much new here

The Funders will ensure the establishment of robust criteria and requirements for the services that compliant high quality Open Access platforms and journals must provide.

This is perfectly reasonable. The amount of money being invested is huge and quite rightly, the funders want to articulate what they are prepared to pay for. It is also helpful from an institutional perspective to have guidelines that clearly identify which journals are compliant and which are not.

Indeed, there is a precedent. In 2017 the Wellcome Trust introduced a publisher requirements list stating that compliant publishers needed to deposit in Europe PMC, apply the correct licence and provide invoices containing complete and understandable information. Publishers were asked to sign up to these principles in order to be listed on Wellcome’s ‘white list’.

Where applicable, Open Access publication fees are covered by Funding Agencies or universities…

This point reflects the status quo, in the UK at least. Universities across the UK are currently managing open access payments through various funding models. In some instances, such as Cambridge, payments are made only from funds provided by funding bodies, with no extra funds provided by the institution. Other institutions, such as UCL, provide central university funds in addition to those provided by funders. A small number of institutions receive no funds from funders but do provide central funds for specific publications.

Of course, if journals were to flip to fully open access then funds currently being used to pay for subscriptions could be freed up to divert to expenditure on APCs for fully gold publications.

Funders will ask universities and libraries to align their policies and strategies, notably to ensure transparency.

While this might be a little tricky simply because of the individual governance arrangements at each institution, it is a sensible thing to aim for.

The above principles shall apply to all research outputs, but it is understood that the timeline to achieve Open Access for monographs and books may be longer than 1st January 2020.

Open Access monographs ARE contentious, don’t get me wrong. But in the context of this statement of principle, there is a concession that there is some work to be done in this space. And we already knew that UKRI intends to include monographs in the post-REF2021 policy (that is, anything published from 1 January 2021). The Wellcome Trust has had OA monographs in its policy for years.

The importance of open archives and repositories for hosting research outputs is acknowledged because of their long-term archiving function and their potential for editorial innovation.

Now I know this is contentious for us Open Access nerds because there is a sense that repositories are once again being pushed into the shadows, which is what happened with the Finch report. But as noted in the main blog, under Plan S, deposit of an Author’s Accepted Manuscript into a repository is compliant if it is there under a CC-BY licence and with a zero embargo.

Some issues are operational

In a few instances, the queries or concerns raised about Plan S are actually operational ones.

When APCs are applied, their funding is standardised and capped (across Europe)

Currently RCUK (now UKRI) does cap funding to universities, using a complex algorithm to determine allocations in a given year to support institutions in meeting the open access policy. This has led some institutions (including Cambridge) to identify a preference for publishers exhibiting actions towards an open access future.

Manchester University has introduced new criteria for the payment of APCs. They support “Publishers who are taking a sustainable and affordable approach to the transition to OA, e.g. by reducing the cost of publishing Gold OA in hybrid (subscription) journals via offsetting deals or membership schemes…”, and include a list of journals for which APCs will not be paid.

The alternative interpretation of this statement is that individual APCs will be capped. This would have implications for all administrators of APCs, and particular implications for Cambridge University because of the relatively high proportion of papers published in expensive open access journals such as Nature Communications. The University would have both to find funds to supplement the cost and to provide the administrative support for the process. This is where discussions need to happen about redirecting subscription budgets towards open access activities. While Plan S adds some urgency, there is time to have these discussions.

The Funders will monitor compliance and sanction non-compliance.

This is the statement that has some administrative staff highly concerned. In the end it will fall upon them to ensure their research community is up to speed and doing the required activities. But we have had sanctions for non-compliance with Wellcome Trust policies since 2014, so this in itself is not new.

Commentary, news stories & press releases

There has been considerable discussion about Plan S – here are just a few links that might be interesting. NOTE this list has been moved and is now being maintained on a separate blog: ‘Plan S – links, commentary and news items‘.

Published 12 September 2018
Written by Dr Danny Kingsley

What I wish I’d known at the start – setting up an RDM service

In August, Dr Marta Teperek began her new role at Delft University of Technology in the Netherlands. In her usual style of doing things properly and thoroughly, she has contributed this blog reflecting on the lessons learned in the process of setting up Cambridge University’s highly successful Research Data Facility.

On 27-28 June 2017 I attended Jisc’s Research Data Network meeting at the University of York. I was one of several people invited to talk about experiences of setting up RDM services, in a workshop organised by Stephen Grace from London South Bank University and Sarah Jones from the Digital Curation Centre. The purpose of the workshop was to share lessons learned and help those who were just starting to set up research data services within their institutions. Each of the presenters prepared three slides: 1. What went well; 2. What didn’t go so well; 3. What they would do differently. All slides from the session are now publicly available.

For me the session was extremely useful not only because of the exchange of practices and learning opportunity, but also because the whole exercise prompted me to critically reflect on Cambridge Research Data Management (RDM) services. This blog post is a recollection of my thoughts on what went well, what didn’t go so well and what could have been done differently, as inspired by the original workshop’s questions.

What went well

RDM services at Cambridge started in January 2015 – quite late compared to other UK institutions. The late start meant, however, that we were able to learn from others and avoid some common mistakes when developing our RDM support. Jisc’s Research Data Management mailing list was particularly helpful, as it is a place used by professionals working with research data to look for help, ask questions, and share reflections and advice. In addition, the Research Data Management Fora organised by the Digital Curation Centre proved to be an excellent vehicle not only for knowledge and good-practice exchange, but also for building networks with colleagues in similar roles. Cambridge also joined the Jisc Research Data Shared Service (RDSS) pilot, which aimed to create a joint research repository and related infrastructure. Being part of the RDSS pilot not only helped us to engage further with the community, but also allowed us to better understand the RDM needs at the University of Cambridge by undertaking the Data Asset Framework exercise.

In exchange for all the useful advice received from others, we aimed to be transparent about our own work. We therefore regularly published blog posts about research data management at Cambridge on the Unlocking Research blog. The transparent approach had several additional advantages: it allowed us to reflect on our activities, it provided an archival record of what was done and the rationale for it, and it facilitated further networking and exchanges of comments with the wider RDM community.

Engaging the Cambridge community with RDM

Our initial attempts to engage the research community at Cambridge with RDM were compliance-based: we told our researchers that they must manage and share their research data because their funders required it. Unsurprisingly, however, this approach was rather unsuccessful – researchers were not prepared to devote time to RDM if they did not see the benefits of doing so. We therefore quickly revised the approach and changed the focus of our outreach to the (selfish) benefits of good data management and effective data sharing. This allowed us to build an engaged RDM community, in particular among early career researchers. As a result, we were able to launch two dedicated programmes, further strengthening our community’s involvement in RDM: the Data Champions programme and the Open Research Pilot Project. Data Champions are (mostly) researchers who volunteer their time to act as local experts on research data management and sharing, providing advice and specialised training within their departments. The Open Research Pilot Project is looking at the benefits of, and barriers to, conducting Open Research.

In addition, ensuring that a wide range of stakeholders from across the University were part of the RDM Project Group, with oversight of the development and delivery of RDM services, allowed us to develop our services quite quickly. As a result, the services were endorsed by a wide range of stakeholders at Cambridge and were developed in a relatively coherent fashion. As an example, effective collaboration between the Office of Scholarly Communication, the Library, the Research Office and the University Information Services allowed integration between the Cambridge research repository, Apollo, and the research information system, Symplectic Elements.

What didn’t go so well

One aspect of our RDM service development that did not go so well was the business case. We started developing the RDM business case in early 2015. It has gone through numerous iterations, and at the time of writing this blog post (August 2017), financial sustainability for the RDM services has not yet been achieved.

One of the strongest factors contributing to this lack of success was insufficient engagement of senior leadership with RDM. We invested a substantial amount of time and effort in engaging researchers with RDM by moving away from compliance arguments, to the extent that we seem to have forgotten that compliance- and research-integrity-based advocacy is necessary to ensure the buy-in of senior leadership.

In addition, while trying to move quickly with service development, and at the same time trying to gain trust in and engagement with RDM service development from the various stakeholder groups at Cambridge, we ended up taking part in various projects and undertakings that were sometimes only loosely connected to RDM. As a result, some of the activities lacked strategic focus, and a lot of time was needed to re-define what the RDM service is and what it is not, in order to ensure that the expectations of the various stakeholder groups could be properly managed.

What could have been done differently

There are a number of things that could have been done differently and more effectively. Firstly, to address the main problem of insufficient engagement with senior leadership, we could have introduced dedicated, short sessions for principal investigators on ensuring effective research data management and research reproducibility across their research teams. Senior researchers are ultimately the ones who make decisions at research-intensive institutions, and therefore their buy-in, and their awareness of the value of good RDM practice, is necessary for achieving financial sustainability of RDM services.

In addition, it would have been valuable to set aside time for strategic thinking and for defining (and re-defining, as necessary) the scope of RDM services. This is also related to the overall branding of the service. In Cambridge a lot of initial harm was done by the negative association between Open Access to publications and RDM. Because of overarching funder and government requirements for Open Access to publications, many researchers had come to perceive Open Access merely as a necessary compliance condition. The advocacy for RDM at Cambridge started with ‘Open Data’ requirements, which led many researchers to believe that RDM was yet another requirement to comply with, and that it was only about open sharing of research data. It took us a long time to change the messages and to rebrand the service as one that supports researchers in their day-to-day research practice, with the message that proper management of research data leads to efficiency savings. Finally, only research data which are managed properly from the very start of the research process can then be easily shared at the end of the project.

Finally, and also related to focusing and defining the service, it would have been useful to decide on a benchmarking strategy from the very beginning of the service’s creation. What are the goals of the service? Is it to increase the number of shared datasets? Is it to improve day-to-day data management practice? Is it to ensure that researchers know how to use novel tools for data analysis? Once the goals are decided, a strategy can be designed to benchmark progress towards achieving them. Otherwise it can be challenging to decide which projects and undertakings are worth continuing and which are less successful and should be revised or discontinued. To address one aspect of benchmarking, Cambridge led the creation of an international group developing a benchmarking strategy for RDM training programmes, with the aim of creating tools for improving RDM training provision.

Final reflections

My final reflection is to re-iterate that the questions asked of me by the workshop leaders at the Jisc RDN meeting really inspired me to think more holistically about the work done towards the development of RDM services at Cambridge. Looking forward, I think asking oneself the very same three questions – what went well, what did not go so well, and what would you do differently – might be a useful regular exercise for ensuring that RDM service development is well balanced and on track towards its intended goals.


Published 24 August 2017
Written by Dr Marta Teperek
