
‘Be nice to each other’ – the second Researcher to Reader conference

“Aaaaaaaaaaargh!” was Mark Carden’s summary of the second annual Researcher to Reader conference, along with a plea that the different players show respect for one another. My take-home messages were slightly different:

  • Publishers should embrace values of researchers & librarians and become more open, collaborative, experimental and disinterested.
  • Academic leaders and institutions should do their bit in combating the metrics focus.
  • Big Deals don’t save libraries money; what helps libraries is the ability to cancel journals.
  • The ‘green OA leads to subscription cancellations’ argument is only viable in a utopian, almost fully green world.
  • There are serious issues in the supply chain of getting books to readers.
  • And copyright arrangements in academia do not help scholarship or protect authors*.

The programme for the conference included a mix of presentations, debates and workshops. The Twitter hashtag is #r2rconf.

As is inevitable in the current climate, particularly at a conference where there were quite a few Americans, the shadow of Trump was cast over the proceedings. There was much mention of the political upheaval and the place research and science has in this.

[*please see Kent Anderson’s comment at the bottom of this blog]

In the publishing corner

Time for publishers to rise to the challenge

The conference opened with an impassioned speech by Mark Allin, President and CEO of John Wiley & Sons, who started with the statement that this was “not a time for retreat, but a time for outreach and collaboration and to be bold”.

The talk was not what was expected from a large commercial publisher. Allin asked: “How can publishers act as advocates for truth and knowledge in the current political climate?” He mentioned that Proquest has launched a displaced researchers programme in reaction to world events, saying, “it’s a start but we can play a bigger role”.

Allin asked what publishers can do to ensure research is being accessed. Referencing “The Content Trap” by Bharat Anand, Allin said that we won’t survive as a media industry by taking content, putting it in a bottle and controlling its distribution; we will only succeed if we connect the users. So we need to re-engineer workflows, making them seamless and frictionless. “We should be making sure that … we are offering access to all those who want it.”

Allin raised the issue of access, noting that ResearchGate has more usage than any single publisher. He made the point that “customers don’t care if it is the version of record, and don’t care about our arcane copyright laws”. This is why people use Sci-Hub: ease of access. He said publishers should not give up protecting copyright, but must realise its limitations and provide easy access.

Researchers are the centre of gravity – we need to help them spend more time researching and less time publishing, he said. There is a lesson here, he noted: suppliers should use “the divine discontent of the customer as their north star”. He used the example of Amazon to suggest that people working in scholarly communication need to use technology much better to connect up. “We need to experiment more, do more, fail more, be more interconnected”, he said; “publishing needs open source and open standards”, which are required for transformational impact on scholarly publishing – “the Uber equivalent”.

His suggestion for addressing the challenges of these sharing platforms is to “try and make your experience better than downloading from a pirate site”, and that this would be a better response than taking the legal route and issuing takedown notices.  He asked: “Should we give up? No, but we need to recognise there are limits. We need to do more to enable access.”

Allin summed up the situation: publishing may have gone online, but how much has the internet really changed scholarly communication practices? The page is still a unit of publishing, even in digital workflows. It shouldn’t be; we should have a ‘digital first’ workflow. The question isn’t ‘what should the workflow look like?’ but ‘why hasn’t it improved?’, he said, noting that innovation is always slowed by social norms, not technology. Publishers should embrace the values of researchers and librarians and become more open, collaborative, experimental and disinterested.

So what do publishers do?

Publishers “provide quality and stability”, according to Kent Anderson, speaking on the second day (no relation to Rick Anderson) in his presentation about ‘how to cook up better results in communicating research’. Anderson is the CEO of Redlink, a company that provides publishers and libraries with analytic and usage information. He is also the founder of the blog The Scholarly Kitchen.

Anderson made the argument that “publishing is more than pushing a button”, by expanding on his blog on ‘96 things publishers do’. This talk differed from Allin’s because it focused on the contribution of publishers.

Anderson talked about the peer review process, noting that rejections help academics because usually they are about mismatch. He said that articles do better in the second journal they’re submitted to.

During a discussion about submission fees, Anderson noted that these “can cover the costs of peer review of rejected papers but authors hate them because they see peer review as free”. His comment that a $250 journal submission charge with one journal is justified by the fact that the target market (orthopaedic surgeons) ‘are rich’ received (rather unsurprisingly) some response from the audience via Twitter.

Anderson also made the accusation that open access publishers take lower quality articles when money gets tight. This caused something of a backlash in the Twitter discussion, with requests for a citation for this statement and for examples of publishers (scam publishers aside) lowering standards to bring in more APC income. [ADDENDUM: Kent Anderson below says that this was not an ‘accusation’ but an ‘observation’. The Twitter challenge of ‘citation please?’ still holds.]

Anderson made a couple of good points. He argued that one of the ways publishers add value is by training editors. This is supported by a small survey we undertook with the research community at Cambridge last year, which revealed that 30% of the editors who responded felt they needed more training.

The library corner

The green threat

There is good reason to expect that green OA will make people and libraries cancel their subscriptions – at least in the utopian future described by Rick Anderson (no relation to Kent Anderson), Associate Dean at the University of Utah, in his talk “The Forbidden Forecast: Thinking about open access and library subscriptions”.

Anderson started by asking why, if we’re in a library funding crisis, we aren’t seeing sustained levels of unsubscription. He then explained that Big Deals don’t save libraries money. They lower the cost per article, but this is a value measure, not a cost measure. What the Big Deal did was make cancellations more difficult: most libraries have cancelled every journal they can without Faculty ‘burning down the library’, in order to preserve the Big Deal. This explains the persistence of subscriptions over time. The library is forced to redirect money away from other resources (such as books) and into the serials budget. The reason libraries can get away with this is that books are not used much.

The wolf seems to be well and truly upon us. There have been lots of cancellations and reduction of library budgets in the USA (a claim supported by a long list of examples). The number of cancellations grows as the money being siphoned off book budgets runs out.

Anderson noted that the emergence of new gold OA journals doesn’t help libraries; it does nothing to relieve the journal emergency. They just add to the list of costs, because each is a unique set of content. What does help libraries is the ability to cancel journals. Professor Syun Tutiya, Librarian Emeritus at Chiba University, noted in a separate session that if Japan were to flip from a fully subscription model to APCs it would cost about the same, so that would solve the problem.

Anderson said that there is an argument that “there is no evidence that green OA cancels journals” (I should note that I am well and truly in this camp – see my argument below). Anderson’s counter was that this amounts to saying the future hasn’t happened yet: the implicit argument is that because green OA has not caused cancellations so far, it won’t do so in the future.

Library money is taxpayers’ money – it is not always going to flow. There is much greater scrutiny of journal big deals as budgets shrink.

Anderson argued that green open access provides inconsistent and delayed access to copies which aren’t always the version of record, and this has protected subscriptions. He noted that Green OA is dependent on subscription journals, which is “ironic given that it also undermines them”. You can’t make something completely & freely available without undermining the commercial model for that thing, Anderson argued.

So, Anderson said, given that green OA exists, has done for years, and has not had any impact on subscriptions, what would need to happen for cancellations to occur? Anderson then described two subscription scenarios. In the low-cancellation scenario (the current situation), green open access is provided sporadically and unreliably, access is delayed by a year or so, and the versions available for free are somewhat inferior.

The high-cancellation scenario is one in which there is high uptake of green OA, because funder requirements are in place and the available version is close to the final one. Anderson argued that “OA advocates” prefer this scenario but “have not thought through the process”. If the cost of finding which journals have OA versions is low enough and the free versions are good enough, he said, subscriptions will be cancelled. The black and white version of Anderson’s future is: “If green OA works then subscriptions fail, and the reverse is true”.

Not surprisingly, I disagreed with Anderson’s argument, on several points. To start, a certain percentage of the work would need to be available before a subscription could be cancelled. Professor Syun Tutiya noted in a different discussion that in Japan only 6.9% of material is available green OA in repositories, and argued that institutional repositories are good for lots of things but not for OA. Certainly in the UK, with the strongest open access policies in the world, we are not capturing anything like the full output. And the UK itself produces only 6% of the world’s research output, so we are a very long way away from this scenario.

In addition, according to work undertaken by Michael Jubb in 2015, most green open access material is available in places other than institutional repositories, such as ResearchGate and Sci-Hub. Do librarians really feel comfortable cancelling subscriptions on the basis of something being available on a proprietary platform, or an illegal one?

The researcher perspective

Stephen Curry, Professor of Structural Biology, Imperial College London, spoke about “Zen and the Art of Research Assessment”. He started by asking why people become researchers and gave several reasons: to understand the world, change the world, earn a living and be remembered. He then asked how they do it. The answer is to publish in high impact journals and bring in grant money. But this means it is easy to lose sight of the original motivations, which are easier to achieve if we are in an open world.

In discussing “The Metric Tide”, the report published in 2015 which looked into the assessment of research, Curry noted that metrics and league tables aren’t without value. They do help to rank football teams, for example. But university league tables are less useful: because they aggregate many things they are too crude, even though they incorporate valuable information.

Are we as smart as we think we are, he asked, if we subject ourselves to such crude metrics of achievement? The limitations of research metrics have been talked about a lot, but they need to be better known. Often the metrics are spuriously precise. For example, was Caltech really better than the University of Oxford last year but worse this year?

But numbers can be seductive. Researchers want to focus on research without pressure from metrics, yet many Early Career Researchers and PhD students are increasingly fretting about the publications hierarchy. Curry asked: “On your death bed, will you be worrying about your H-index?”

There is a greater pressure to publish rather than pressure to do good science. We should all take responsibility to change this culture. Assessing research based on outputs is creating perverse incentives. It’s the content of each paper that matters, not the name of the journal.

In terms of solutions, Curry suggested it would be better to put higher education institutions in 5% brackets rather than ranking them 1 to n in the league tables. He called for academic leaders and institutions to do their bit in combating the metrics focus, and for much wider adoption of the Declaration on Research Assessment (DORA); Curry’s own institution, Imperial College London, has recently done so.
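The 5% brackets idea can be sketched concretely. The snippet below is my own illustration, not from the talk: institutions whose ranks differ by less than the noise in the underlying indicators land in the same bracket, instead of being spuriously ordered.

```python
import math

def five_percent_bracket(rank: int, total: int) -> int:
    """Return the 5% bracket (1-20) for a given rank out of `total`."""
    percentile = rank / total            # e.g. rank 10 of 200 -> 0.05
    return math.ceil(percentile * 20)    # 20 brackets of 5% each

# With 200 institutions, ranks 1-10 all fall into bracket 1,
# so a small year-on-year rank wobble no longer changes the result:
brackets = [five_percent_bracket(r, 200) for r in (1, 10, 11, 100, 200)]
print(brackets)  # [1, 1, 2, 10, 20]
```

Presented this way, a move from rank 3 to rank 7 is (correctly) invisible, while a move across brackets still registers.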

Curry argued that ‘indicators’ would be a more appropriate term than ‘metrics’ in research assessment, because we are looking at proxies; the term ‘metrics’ implies you know what you are measuring. Metrics can inform, but they cannot replace judgement, and both users and providers must be transparent.

Another solution is preprints, which shift attention from container to content, because readers use the abstract, not the journal name, to decide which papers to read. This idea is starting to become more mainstream, as shown by the NIH’s announcement towards the end of last year, “Including Preprints and Interim Research Products in NIH Applications and Reports”.

Copyright discussion

I sat on a panel to discuss copyright with a funder (Mark Thorley, Head of Science Information, Natural Environment Research Council), a lawyer (Alexander Ross, Partner, Wiggin LLP) and a publisher (Dr Robert Harington, Associate Executive Director, American Mathematical Society).

My argument** was that selling or giving copyright to a third party that has a purely commercial interest and did not contribute to the creation of the work does not protect originators. That was the case in the Kookaburra song example, and it is also the case in academic publishing. The copyright transfer form or publisher agreement that authors sign usually means that the authors retain their moral right to be named as authors of the work, but they sign away the rights to make any money out of it.

I argued that publishers don’t need to hold the copyright to ensure commercial viability. They just need first exclusive publishing rights. We really need to sit down and look at how copyright is being used in the academic sphere – who does it protect? Not the originators of the work.

Judging by the mood in the room, the debate could have gone on for considerably longer. There is still a lot of meat on that bone. (**See the end of this blog for details of my argument).

The intermediary corner

The problem of getting books to readers

There are serious issues in the supply chain of getting books to readers, according to Dr Michael Jubb, Independent Consultant and Richard Fisher from Something Understood Scholarly Communication.

The problems are multi-pronged. For a start, discoverability of books is “disastrous”, due to completely different metadata standards in the supply chain: ONIX is used in the retail trade and MARC is the standard for libraries. Neither carries detailed information about authors, about the contents of chapters and sections, or about reviews and comments.
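To make that gap concrete, here is a deliberately simplified sketch (the field names are illustrative stand-ins, not real ONIX or MARC tags): the same book described once for the retail chain and once for a library catalogue, with neither record carrying the chapter-level or review information that would aid discovery.

```python
# Hypothetical, simplified records for one and the same monograph.
onix_like = {                 # retail supply chain (ONIX-style)
    "title": "A History of Scholarly Books",
    "contributor": "J. Smith",
    "price": "45.00 GBP",
}
marc_like = {                 # library catalogue (MARC-style)
    "245_title": "A History of Scholarly Books",
    "100_author": "Smith, J.",
    "020_isbn": "9780000000000",
}

# Fields a reader (or a search engine) would actually want:
wanted = {"chapter_contents", "reviews", "author_affiliation"}
missing = {f for f in wanted if f not in onix_like and f not in marc_like}
print(sorted(missing))  # absent from both records
```

The two standards also name the overlapping fields differently, so even the information both records do hold cannot be matched without a crosswalk.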

There is also a multitude of channels for getting books to libraries. In the past few years several different kinds of intermediaries have become involved – metadata suppliers, sales agents, wholesalers, aggregators, distributors and so on – holding digital versions of books that can be supplied through different types of book platform. Libraries have some titles on multiple platforms, but others are available on only one.

There are also huge challenges around discoverability and e-commerce systems, which are “too bitty”. The most important change in books has been Amazon; publisher e-commerce, however, “has a long way to go before it is anything like as good as Amazon”.

Fisher also reminded the group that there are far more books published each year than there are journals – it’s a more complex world. He noted that about 215 [NOTE: amended from original 250 in response to Richard Fisher’s comment below] different imprints were used by British historians in the last REF. Many of these publishers are very small with very small margins.

Jubb and Fisher both emphasised readers’ strong preference for print, which implies that much more work is needed on the ebook user experience. There are ‘huge tensions’ between reader preference (print) and the drive towards e-book acquisition models at libraries.

The situation is probably best summed up in the statement that “no-one in the industry has a good handle on what works best”.

Providing efficient access management

Current access control is not functional in the world we live in today. If you ask users to jump through hoops to get access off campus then your whole system defeats its purpose. That was the central argument of Tasha Mellins-Cohen, the Director of Product Development, HighWire Press when she spoke about the need to improve access control.

Mellins-Cohen started with the comment “You have one identity but lots of identifiers”, noting that multiple institutional affiliations cause problems. She described the authentication process needed to give access to an article from a library – which, as an aside, clearly shows why researchers often prefer to use Sci-Hub.

She described an initiative called CASA (Campus Activated Subscriber Access), which records devices that have accessed content on campus through authenticated IP ranges and then allows access off campus on the same device without using a proxy. This is designed to use more modern authentication; there will be “more information coming out about CASA in the next few months”.
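The mechanism described can be sketched roughly as follows (names and details are hypothetical, not HighWire’s actual implementation): a device first seen inside an authenticated campus IP range is remembered, and subsequently granted access from off campus without a proxy.

```python
import ipaddress

# Example authenticated campus range (192.0.2.0/24 is a documentation prefix).
CAMPUS_RANGES = [ipaddress.ip_network("192.0.2.0/24")]
_known_devices = set()  # device tokens previously seen on campus

def on_campus(ip: str) -> bool:
    """True if the request comes from an authenticated campus IP range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CAMPUS_RANGES)

def request_access(device_token: str, ip: str) -> bool:
    if on_campus(ip):
        _known_devices.add(device_token)   # remember this device
        return True
    return device_token in _known_devices  # off campus: allow known devices

assert request_access("dev-1", "192.0.2.10")        # on campus: granted, remembered
assert request_access("dev-1", "198.51.100.7")      # off campus: still granted
assert not request_access("dev-2", "198.51.100.7")  # unknown device: denied
```

A real deployment would of course expire tokens and protect them against copying; the point here is only that the user never has to log in through a proxy once the device has been seen on campus.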

Mellins-Cohen noted that tagging something as ‘free’ in the metadata improves Google indexing – publishers need to do more of this at article level. The comment prompted a call for publishers to make information about sharing more accessible to authors through How Can I Share It?

Mellins-Cohen expressed some concern that some of the ideas coming out of RA21 (Resource Access for the 21st Century, an STM project exploring alternatives to IP authentication) will raise barriers to access for researchers.

Summary

It is always interesting to have the mix of publishers, intermediaries, librarians and others in the scholarly communication supply chain together at a conference such as this. It is rare to have the conversations between different stakeholders across the divide. In his summary of the event, Mark Carden noted the tension in the scholarly communication world, saying that we do need a lively debate but also need to show respect for one another.

So while the keynote started promisingly, and said all the things we would like to hear from the publishing industry, the reality is that we are not there yet. And this underlines the whole problem. This interweb thingy didn’t happen last week; what has actually happened to update the publishing industry in the last 20 years? Very little, it seems. However, it is not all bad news. Things to watch out for in the near future include plans for micro-payments for individual access to articles, according to Mark Allin, and the highly promising Campus Activated Subscriber Access system.

Danny Kingsley attended the Researcher to Reader conference thanks to the support of the Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.

Published 27 February 2017
Written by Dr Danny Kingsley

Copyright case study

In my presentation I spoke about the children’s campfire song “Kookaburra sits in the old gum tree”, which was written by Melbourne schoolteacher Marion Sinclair in 1932 and first aired in public two years later as part of a Girl Guides jamboree in Frankston. Sinclair had to be prompted to register the song with APRA (Australasian Performing Right Association). That was in 1975; the song had already been around for some 40 years, and she had never expressed any great proprietary interest in it.

In 1981 the Men at Work song “Down Under” made No. 1 in Australia. The song then topped the UK, Canadian, Irish, Danish and New Zealand charts in 1982, and hit No. 1 in the US in January 1983. It sold two million copies in the US alone. When Australia won the America’s Cup in 1983, Down Under was played constantly. It seems extremely unlikely that Marion Sinclair did not hear it. (At the conference, when a sample of the song was played, three people self-identified as never having heard it.)

Marion Sinclair died in 1988 and the song went to her estate. Norman Lurie, managing director of Larrikin Music Publishing, bought the publishing rights from the estate in 1990 for just $6,100. He started tracking down all the sheet music that had been printed all over the world, because Kookaburra had been used in books for people learning the flute and recorder.

In 2007 the TV show Spicks and Specks had a children’s-music-themed episode in which the panel were played “Down Under” and asked which Australian nursery rhyme the flute riff was based on. Eventually they picked Kookaburra, all apparently genuinely surprised when the link between the songs was pointed out. A comparison of the two pieces is available online.

Two years later Larrikin Music filed a lawsuit, initially wanting 60% of Down Under’s profits. In February 2010 the court found in Larrikin’s favour; Men at Work appealed, and eventually lost. The judge ordered Men at Work’s recording company, EMI Songs Australia, and songwriters Colin Hay and Ron Strykert to pay 5% of royalties earned from the song since 2002, and from its future earnings.

In the end Larrikin won around $100,000, although legal fees on both sides have been estimated at upwards of $4.5 million, with royalties for the song frozen during the case.

Gregory Ham was the flautist in the band who played the riff. He did not write Down Under, and was devastated by the high profile court case and his role in proceedings. He reportedly fell back into alcohol abuse and was quoted as saying: “I’m terribly disappointed that’s the way I’m going to be remembered — for copying something.” Ham died of a heart attack in April 2012 in his Carlton North home, aged 58, with friends saying the lawsuit was haunting him.

This case, I argued, exemplifies everything that is wrong with copyright.

2016 – that was the year that was

In January last year we published a blog post, ‘2015 – that was the year that was’, which not only helped us take stock of what we had achieved but was also very well received. So we have decided to do it again. For those who are more visually oriented, the slides ‘The OSC: a lightning tour’ might be useful.

Now starting its third year of operation, the Office of Scholarly Communication (OSC) has expanded to a team of 15, managing a wide variety of projects. The OSC has developed a set of strategic goals  to support its mission: “The OSC works in a transparent and rigorous manner to provide recognised leadership and innovation in the open conduct and dissemination of research at Cambridge University through collaborative engagement with the research community and relevant stakeholders.”

1. Working transparently

The OSC maintains an active outreach programme, in keeping with the transparent manner in which it works; this includes active documentation of workflows.

One of the ways we work transparently is to share many of our experiences and ideas through this blog, which receives over 2,000 visits a month. During 2016 the OSC published 41 blog posts – eight each on Scholarly Communication and Open Research, 14 on Open Access, nine on Research Data Management and two on library and training matters. The posts we published in Open Access Week were accessed 1,630 times that week alone.

In addition to our websites for Scholarly Communication and Open Access, our Research Data Management website has been identified internationally as best practice and receives nearly 3,000 visitors a month.

We also run Twitter feeds for both Open Access (1,100 followers) and Open Data (close to 1,200 followers). Many OSC staff also run their own Twitter feeds sharing professional observations.

We also publish monthly newsletters, including one on scholarly communication matters. Our research data management newsletter has close to 2,000 recipients. Our shining achievement for the year however has to be the hugely successful scholarly communication Advent Calendar (which people are still accessing…)

We practise what we preach and share information about our work practices, such as our reports to funders on APC spend, through our repository Apollo and by blogging about it – see Cambridge University spend on Open Access 2009-2016. We also share our presentations through Apollo and on Slideshare.

2. Disseminating research

The OSC has a strong focus on research support in all aspects of the scholarly communication ecosystem, from concept, through study design, preparation of research data management plans, decisions about publishing options and support with the dissemination of research outputs beyond the formal literature. The OSC runs an intense programme of advocacy relating to Open Access and Research Data Management, and has spoken to nearly 3,000 researchers and administrators since January 2015.

2.1 Open Access compliance

In April 2016, the HEFCE policy requiring that all research outputs intended to be claimed for the REF be made open access came into force. As a result, there has been an increased uptake of the Open Access Service with the 10,000th article submitted to the system in October. Our infographics on Repository use and Open Access demonstrate the level of engagement with our services clearly.

Currently half of the entire research output of the University is being deposited to the Open Access Service each month (see the blog: How open is Cambridge?). While this is good from a compliance perspective, it has caused some processing issues, owing to the manual nature of the workflows and insufficient staff numbers. At the time of writing there is a backlog of over 600 items to deposit into the repository, and over 2,300 items to be checked to see whether they have been published so that we can update the records.

The OA team sent over 15,000 ticket replies in 2016 – nearly 60 per working day!

2.2 Managing theses

Work on theses continues, with the OSC driving a collaboration with Student Services to pilot the deposit of digital theses in addition to printed bound ones with a select group of departments from January 2017. The Unlocking Theses project in 2015-2016 has seen an increase in the number of historic theses in the repository from 700 to over 2,200 with half openly available. An upcoming digitisation project will add a further 1,400 theses. The upgrade of the repository and associated policies means all theses (not just PhDs) can be deposited and the OSC is in negotiation with several departments to bulk upload their MPhils and other sets of theses which are currently held in closed collections and are undiscoverable. This is an example of the work we are doing to unearth and disseminate research held all over the institution.

As a result of these activities it has become obvious that the disjointed nature of thesis management across the Library is inefficient. There is considerable effort being placed on developing workflows for managing theses centrally within the Library which the OSC will be overseeing into the future.

3. Research Support

3.1  Research Data Support

The number of data submissions received by the University repository is continuously growing, with Cambridge hosting more datasets in the institutional repository than any other UK university. Our ‘Data Sharing at Cambridge’ infographic summarises our work in this area.

A recent Primary Research Group report recognised Cambridge as having ‘particularly admirable data curation services’.

3.2 Policy development

The OSC is heavily involved in policy development in the scholarly communication space and participates in several activities external to the University. In July 2016 the UK Concordat on Open Research Data was published, with considerable input from the university sector, coordinated by the OSC.

We have representatives on the RCUK Open Access Practitioners Group, the UK Scholarly Communication License and Model Policy Steering Committee and the CASRAI Open Access Glossary Working Group, plus several other committees external to Cambridge. The OSC has contributed to discussions at the Wellcome Trust about ensuring better publisher compliance with their Open Access policy.

We are also updating and writing policies for aspects of research management across the University.

3.3 Collaborations with the research community

The OSC collaborates directly with the research community to ensure that the funding policy landscape reflects their needs and concerns. To that end we have held several town-hall meetings with researchers to discuss issues such as the mandating of CC-BY licensing, peer review and options relating to moving towards an Open Research landscape. We have also provided opportunities for researchers to meet directly with funders to discuss concerns and articulate amendments to the policies. The OSC has led discussions with the sector and arXiv.org, including visiting Cornell University, to ensure that researchers using this service to make their work openly available can be compliant under the HEFCE policy.

A new Research Data Management Project Group brings researchers and administrators together to work on specific issues relating to the retention and preservation of data and the management of sensitive data. We have also recruited over 40 Data Champions from across the University. Data Champions are researchers, PhD students or support staff who have agreed to advocate for data within their department: providing local training, briefing staff members at departmental meetings, and raising awareness of the need for data sharing and management.

The initiative began as an attempt to meet the growing need for RDM training, provide more subject-specific RDM support and begin more conversations about the benefits of RDM beyond meeting funders’ mandates. There has been a lot of interest in our Data Champions from other universities in the UK and abroad, with applications for the scheme coming from around the world. In response, we have proposed a Birds of a Feather session at the 9th RDA plenary meeting in April to discuss similar initiatives elsewhere and the creation of RDM advocacy communities.

3.4 Professional development for the research community

The OSC provides the research community with a variety of advocacy, training and workshops relating to research data management, sharing research effectively, bibliometrics and other aspects of scholarly communication. The OSC held over 80 sessions for researchers in 2016, including the extremely successful ‘Helping researchers publish’ event which we are repeating in February.

Our work with the Early Career Researcher (ECR) community has resulted in the development of a series of sessions about the publishing process for the PhD community. These have been enthusiastically embraced, and there are negotiations with departments about making some courses compulsory. While this underlines the value of these offerings, it does raise issues about staffing and how this will be financed.

The OSC is increasingly managing and hosting conferences at the University. Cambridge is participating in the Jisc Shared Repositories pilot and the OSC hosted an associated Research Data Network conference in September. In July 2016, the OSC organised a conference on research data sharing in collaboration with the Science and Engineering South Consortium, which was extremely well received and attracted over 80 attendees from all over the UK.

In November, the OpenCon Cambridge group – with which the OSC is heavily involved – held an OpenConCam satellite event which was well attended and received very positive feedback. A storify of tweets and a blog post about the event are available. The OSC was happy both to sponsor the event and to support the travel of a Cambridge researcher to the main OpenCon event in Washington so that she could bring back her experiences.

Increasingly we are livestreaming our events and then making the recordings available online as a resource.

3.4 Developing Library capacity for support

We have published a related post which details the training programmes run for library staff members in 2016. In total 500 people attended sessions offered in the Supporting Researchers in the 21st century programme, and we successfully ‘graduated’ the second tranche of the Research Support Ambassador Programme.

Conference session proposals on both the Supporting Researchers and the Research Support Ambassador programmes have been submitted to various national and international conferences. Dr Danny Kingsley and Claire Sewell have also had an abstract accepted for an article to appear in the 2017 themed issue of The New Review of Academic Librarianship.

4. Updating and integrating systems

The University repository, Apollo, has been upgraded and was launched during Open Access Week. The upgrade has incorporated new services, including the ability to mint DOIs, which has been enthusiastically adopted. A new Request a Copy service for users wishing to obtain access to embargoed material is being heavily used without any promotion, with around 300 requests a month flowing through. This has been particularly important because we are depositing works prior to publication, so we have to put them under an indefinite embargo until we know the publication date (at which time we can set the embargo lift date). With over 2,000 items awaiting a publication-date check, a large percentage of the contents of the repository is discoverable but closed under embargo.
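The embargo logic described above can be sketched in a few lines. This is an illustrative Python sketch only – the class and field names are hypothetical and do not reflect Apollo's actual schema or code:

```python
from datetime import date, timedelta

# Hypothetical item record; names are illustrative, not Apollo's actual schema.
class DepositedItem:
    def __init__(self, title):
        self.title = title
        self.publication_date = None   # unknown at deposit time
        self.embargo_lift_date = None  # None = indefinite embargo

    def is_under_embargo(self, today):
        # While no lift date is set, the item stays closed indefinitely.
        if self.embargo_lift_date is None:
            return True
        return today < self.embargo_lift_date

    def set_publication_date(self, pub_date, embargo_months=6):
        # Once the publication date is known, the lift date can be set
        # (e.g. a publisher embargo of N months, approximated here as
        # 30-day months).
        self.publication_date = pub_date
        self.embargo_lift_date = pub_date + timedelta(days=embargo_months * 30)

item = DepositedItem("Example article")
assert item.is_under_embargo(date(2016, 1, 1))      # closed: date unknown
item.set_publication_date(date(2016, 6, 1))
assert item.is_under_embargo(date(2016, 7, 1))      # still within embargo
assert not item.is_under_embargo(date(2017, 6, 1))  # embargo has lifted
```

The key design point is that "indefinite embargo" is simply the absence of a lift date, so items can be deposited (and remain discoverable) before publication details exist.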

In order to reduce the heavy manual workload associated with the deposit and processing of over 4,000 papers annually, the OSC is working with the Research Information Office on a systems integration programme between the University’s CRIS, Symplectic, and the repository, Apollo, while retaining our integrated helpdesk system, ZenDesk. This should allow better compliance reporting for the research community and reduce the manual uploading of articles.
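At its simplest, the matching step in such an integration pairs records from the CRIS with records already in the repository. The sketch below is purely illustrative – the field names and matching rules are hypothetical and are not Symplectic's or Apollo's actual APIs:

```python
# Illustrative sketch of the matching step in a CRIS-to-repository
# integration; field names and rules are hypothetical.

def normalise_title(title):
    # Lowercase and collapse whitespace so minor variations still match.
    return " ".join(title.lower().split())

def match_records(cris_records, repo_records):
    """Pair CRIS records with repository records by DOI, falling back
    to a normalised-title comparison when no DOI is present."""
    repo_by_doi = {r["doi"]: r for r in repo_records if r.get("doi")}
    repo_by_title = {normalise_title(r["title"]): r for r in repo_records}

    matched, unmatched = [], []
    for rec in cris_records:
        hit = repo_by_doi.get(rec["doi"]) if rec.get("doi") else None
        if hit is None:
            hit = repo_by_title.get(normalise_title(rec["title"]))
        (matched if hit else unmatched).append(rec)
    return matched, unmatched

cris = [{"doi": "10.1000/xyz", "title": "A Study"},
        {"doi": None, "title": "Open  Data Report"}]
repo = [{"doi": "10.1000/xyz", "title": "A study"},
        {"doi": None, "title": "open data report"}]
matched, unmatched = match_records(cris, repo)
assert len(matched) == 2 and not unmatched
```

In practice this step is the easy part; as the next paragraph notes, the harder problems are organisational rather than technical.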

But this process involves a great deal more than just metadata matching and coding, and touches on the extremely siloed nature of the support services being offered to our researchers across the institution. We are trying to work through these issues by instigating and participating in several initiatives with multiple administrative areas of the University. The OSC is taking the lead with a ‘Getting it Together’ project to align the communication sent to researchers through the research lifecycle and across a range of administrative departments including Communication, Research Operations, Research Strategy and University Information Systems; this group has been termed the ‘Joined up Communications’ group. In addition we are heavily involved in the Coordinated and Functional Research Systems Group (CoFRS), the University Research Administration Systems Committee and the Cambridge Big Data Steering Group.

5. Pursuing a research agenda

Many staff members of the OSC come from the research community, and the team has a strong conference presence. The OSC team attended over 80 events in 2016, both within the UK and at major conferences worldwide, including the Open Scholarship Initiative, FORCE2016, Open Repositories, the International Digital Curation Conference, Electronic Theses & Dissertations, the Special Libraries Association, RLUK2016, IFLA, CILIP and the Scientific Data Conference.

Increasingly the OSC team is being asked to share their knowledge and experience. In 2016 the team gave four keynote speeches, presented 18 sessions and ran one Master Class. The team has also acted as session chair for two conferences and convened two sessions.

5.1 Research projects

The OSC is undertaking several research projects. In relation to the changing nature of scholarly communication services within libraries, we are in the process of analysing job advertisements in the area of scholarly communication. We have also conducted a survey, which received over 500 responses, on the educational and training background of people working in the area. The findings of these studies will be shared and published during 2017.

Dr Lauren Cadwallader was the first recipient of the Altmetrics Research Grant which she used to explore the types and timings of online attention that journal articles received before they were incorporated into a policy document, to see if there was some way to help research administrators make an educated guess rather than a best guess at which papers will have high impact for the next REF exercise in the UK. Her findings were widely shared internationally, and there is interest in taking this work further.

The team is currently actively pursuing several research grant proposals. Other research includes an analysis of the data needs of the research community, undertaken in conjunction with Jisc.

5.2 Engaging with the research literature

Members of the OSC hold several editorial board positions, including two on the Data Science Journal and one on the Journal of Librarianship and Scholarly Communication. We also hold a position on the Advisory Board for PeerJ Preprints, and one staff member is the Associate Editor of the New Review of Academic Librarianship. The OSC team also act as peer reviewers for scholarly communication papers.

The OSC is working towards developing a culture of research and publishing amongst the library community at Cambridge, and is one of the founding members of the Centre for Evidence Based Librarianship and Information Practice (C-EBLIP) Research Network.

6. Staffing

Although the organisational structure remained relatively stable between 2015 and 2016, this apparent stability belies the perilous nature of the funding of the Office of Scholarly Communication. Of the 15 staff members, fewer than half are funded from ‘Chest’ (central University) funding. The remainder are paid from a combination of non-recurrent grants, RCUK funding and endowment funds.

The process of applying for funding, creating reports, meeting with key members of the University administration, working out budgets and, frankly, lobbying just to keep the team employed has taken a huge toll on the team. One result of the financial situation is that many staff – including some in crucial roles – are on short-term contracts, and several positions have turned over during the year. This means that a disproportionate amount of time is spent on recruitment. The systems for recruiting staff in the University are, shall we say, reflective of the age of the institution.

In 2016 alone, as the Head of the OSC, I personally wrote five job descriptions and progressed them through the (convoluted) HR review process. I conducted 32 interviews for OSC staff and participated in 10 interviews for staff elsewhere in the University where I assisted with recruitment. This has involved the assessment of 143 applications. Because each new contract has a probation period, I have also undertaken 27 probationary interviews. Given that each of these activities involves one (or usually more) other staff members, the impact of this issue in terms of staff time becomes apparent.

We also experimented with staffing arrangements last year. We had a volunteer working with us on a research project and ran a ‘hotdesk’ arrangement with colleagues from the Research Information Office, the Research Operations Office and Cambridge University Press. We also conducted a successful ‘work from home’ pilot (a first for the University Library).

7. Plans for 2017

This year will herald some significant changes for the University – with a new Librarian starting in April and a new Vice Chancellor in September. These appointments may shape the future direction of the OSC, but plans are already underway for a big year in 2017.

As always, the OSC is pursuing both a practical and a political agenda. On the ‘political’ side of the fence we are pursuing an Open Research agenda for the University. We are about to kick off the two-year Open Research Pilot Project, a collaboration between the Office of Scholarly Communication and the Wellcome Trust Open Research team. The Project will look at what is needed for researchers to share, and get credit for, all outputs of the research process, including negative and null results, protocols, source code, presentations and other research outputs beyond the remit of traditional publications. It aims to understand the barriers preventing researchers from sharing (including resource and time implications), as well as what incentivises the process.

We are also now at a stage where we need to look holistically at the way we access literature across the institution. This will be a big project incorporating many facets of the University community. It will also require substantial analysis of existing library data and the presentation of this information in a clear, graphical manner.

In terms of practical activities, our headline task is to completely integrate our open access workflows into University systems. In addition we are actively investigating how we can support our researchers with text and data mining (TDM). We are beginning to develop and roll out a ‘continuum’ of publishing options for the significant amount of grey literature produced within Cambridge. We are also expanding our range of teaching programmes – videos, online tools, and new types of workshops. On a technical level we are likely to be looking at the potential implementation of options offered by the Shared Repository Pilot, and developing solutions for managed access to data. We are also hoping to explore a data visualisation service for researchers.

Published 17 January 2017
Written by Dr Danny Kingsley
Creative Commons License
Open Data – moving science forward or a waste of money & time?

On 4 November 2015 the Research Data Facility at the University of Cambridge invited some inspirational leaders in the area of research data management and asked them to address the question: “Is open data moving science forward or a waste of money & time?”. Below are Dr Marta Teperek’s impressions from the event.

Great discussion

Want to initiate a thought-provoking discussion on a controversial subject? The recipe is simple: invite inspirational leaders, bright people with curious minds and have an excellent chair. The outcome is guaranteed.

We asked some truly inspirational leaders in data management and sharing to come to Cambridge to talk to the community about the pros and cons of data sharing. We were honoured to have with us:

  • Rafael Carazo-Salas, Group Leader, Department of Genetics, University of Cambridge; @RafaCarazoSalas
  • Sarah Jones, Senior Institutional Support Officer from the Digital Curation Centre; @sjDCC
  • Frances Rawle, Head of Corporate Governance and Policy, Medical Research Council; @The_MRC
  • Tim Smith, Group Leader, Collaboration and Information Services, CERN/Zenodo; @TimSmithCH
  • Peter Murray-Rust, Molecular Informatics, Dept. of Chemistry, University of Cambridge, ContentMine; @petermurrayrust

The discussion was chaired by Dr Danny Kingsley, the Head of Scholarly Communication at the University of Cambridge (@dannykay68).

What is the definition of Open Data?

The discussion started off with a request for a definition of what “open” meant. Both Peter and Sarah explained that ‘open’ in science was not simply a piece of paper saying ‘this is open’. Peter said that ‘open’ meant free to use, free to re-use, and free to re-distribute without permission. Open data needs to be usable, it needs to be described, and to be interpretable. Finally, if data is not discoverable, it is of no use to anyone. Sarah added that sharing is about making data useful. Making it useful also involves the use of open formats, and implies describing the data. Context is necessary for the data to be of any value to others.

What are the benefits of Open Data?

Next came a quick question from Danny: “What are the benefits of Open Data”? followed by an immediate riposte from Rafael: “What aren’t the benefits of Open Data?”. Rafael explained that open data led to transparency in research, re-usability of data, benchmarking, integration, new discoveries and, most importantly, sharing data kept it alive. If data was not shared and instead simply kept on the computer’s hard drive, no one would remember it months after the initial publication. Sharing is the only way in which data can be used, cited, and built upon years after the publication. Frances added that research data originating from publicly funded research was funded by tax payers. Therefore, the value of research data should be maximised. Data sharing is important for research integrity and reproducibility and for ensuring better quality of science. Sarah said that the biggest benefit of sharing data was the wealth of re-uses of research data, which often could not be imagined at the time of creation.

Finally, Tim concluded that sharing of research is what made the wheels of science turn. He inspired further discussion with strong statements: “Sharing is not an if, it is a must – science is about sharing, science is about collectively coming to truths that you can then build on. If you don’t share enough information so that people can validate and build up on your findings, then it basically isn’t science – it’s just beliefs and opinions.”

Tim also stressed that if open science became institutionalised, and mandated through policies and rules, it would take a very long time before individual researchers would fully embrace it and start sharing their research as the default position.

I personally strongly agree with Tim’s statement. Mandating sharing without providing the support for it will lead to a perception that sharing is yet another administrative burden, and researchers will adopt the ‘minimal compliance’ approach towards sharing. We often observe this attitude amongst EPSRC-funded researchers (EPSRC is one of the UK funders with the strictest policy for sharing of research data). Instead, institutions should provide infrastructure, services, support and encouragement for sharing.

Big data

Data sharing is not without problems. One of the biggest issues nowadays is the sharing of big data. Rafael stressed that with big data, it was extremely expensive not only to share, but even to store the data long-term. He stated that the biggest bottleneck in progress was bridging the gap between the capacity to generate the data and the capacity to make it useful. Tim admitted that sharing of big data was indeed difficult at the moment, but that the need would certainly drive innovation. He recalled that in the past people did not think that one day it would be possible to stream videos instead of buying DVDs. Nowadays technologies exist which allow millions of people to watch the webcast of a live match at the same time – the need developed the tools. More and more people are looking at new ways of chunking and parallelising data downloads. Additionally, there is a change in the way in which analysis is done – more and more of it is done remotely on central servers, and this eliminates the technical barriers of access to data.

Personal/sensitive data

Frances mentioned that in the case of personal and sensitive data, sharing was not as simple as in basic science disciplines. Especially in medical research, it often required the provision of controlled access to data. It was not only important who would get the data, but also what they would do with it. Frances agreed with Tim that perhaps what was needed was a paradigm shift – that questions should be sent to the data, and not the data sent to the questions.

Shades of grey: in-between “open” and “closed”

Both the audience and the panellists agreed that almost no data was completely “open” and almost no data was completely “shut”. Tim explained that anything that gets research data off the laptop to a shared environment, even if it was shared only with a certain group, was already a massive step forward. Tim said: “Open Data does not mean immediately open to the entire world – anything that makes it off from where it is now is an important step forward and people should not be discouraged from doing so, just because it does not tick all the other checkboxes.” And this is yet another point where I personally agreed with Tim that institutionalising data sharing and policing the process is not the way forward. To the contrary, researchers should be encouraged to make small steps at a time, with the hope that the collective move forward will help achieve a cultural change embraced by the community.

Open Data and the future of publishing

Another interesting topic of the discussion was the future of publishing. Rafael started by explaining that the way traditional publishing works had to change, as data was no longer two-dimensional and in the digital era could no longer be shared on a piece of paper. Ideally, researchers should be allowed to continue re-analysing data underpinning figures in publications. Research data underpinning figures should be clickable, re-formattable and interoperable – alive.

Danny mentioned that the traditional way of rewarding researchers was based on publishing and on journal impact factors. She asked whether publishing data could help to start rewarding the process of generating data and making it available. Sarah suggested that rather than having the formal peer review of data, it would be better to have an evaluation structure based on the re-use of data – for example, valuing data which was downloadable, well-labelled, re-usable.

Incentives for sharing research data

The final discussion was around incentives for data sharing. Sarah was the first to suggest that the most persuasive incentive for data sharing is seeing the data being re-used and getting credit for it. She also stated that there was an important role for funders and institutions in incentivising data sharing. If funders and institutions wished to mandate sharing, they also needed to reward it. Funders could do so when assessing grant proposals; institutions could do it when looking at academic promotions.

Conclusions and outlooks on the future

This was an extremely thought-provoking and well-coordinated discussion. Perhaps because many of the questions asked remained unanswered, both the panellists and the attendees enjoyed a long networking session with wine and nibbles after the discussion.

From my personal perspective, as an ex-researcher in life sciences, the greatest benefit of open data is the potential to drive a cultural change in academia. The current academic career progression is almost solely based on the impact factor of publications. The ‘prestige’ of your publications determines whether you will get funding, whether you will get a position, whether you will be able to continue your career as a researcher. This, combined with a frequently broken peer-review process, leads to a lot of frustration among researchers. What if you are not from the world’s top university or from a famous research group? Will you still be able to publish your work in a high impact factor journal? What if somebody scooped you when you were about to publish the results of your five-year-long study? Will you be able to find a new position?

As Danny suggested during the discussion, if researchers start publishing their data in the ‘open’, there is a chance that the whole process of doing valuable research and making it useful and available to others will be rewarded and recognised. This fits well with Sarah’s ideas about an evaluation structure based on the re-use of research data. In fact, more and more researchers go to the ‘open’ and use blog posts and social media to talk about their research and to discuss the work of their peers. With the use of persistent links research data can now be easily cited, and impact can be built directly on data citation and re-use, but one could also imagine some sort of badges for sharing good research data, awarded directly by the users. Perhaps in 10 or 20 years’ time the whole evaluation process will be done online, directly by peers, and researchers will be valued for their true contributions to science.

And perhaps the most important message for me, this time as a person who supports research data management services at the University of Cambridge, is to help researchers to really embrace the open data agenda. At the moment, open data is too frequently perceived as a burden, which, as Tim suggested, is most likely due to imposed policies and institutionalisation of the agenda. Instead of a stick, which results in the minimal compliance attitude, researchers need to see the opportunities and benefits of open data to sign up for the agenda. Therefore, the Institution needs to provide support services to make data sharing easy, but it is the community itself that needs to drive the change to “open”. And the community needs to be willing and convinced to do so.

Further resources

  • A full recording of the Open Data Panel Discussion is available.
  • A storified version of the event, prepared by Kennedy Ikpe from the Open Data Team, is also available.

Thank you

We would also like to express a special ‘thank you’ to Dan Crane from the Library at the Department of Engineering, who helped us with all the logistics for the event and made it happen.

Published 27 November 2015
Written by Dr Marta Teperek
Creative Commons License