
A Day in the Life of an Open Access Research Adviser

As part of the Office of Scholarly Communication Open Access Week celebrations, we are publishing a blog post a day written by members of the team. Monday's post is by Dr Philip Boyes, reflecting on the variety of challenges of working in the Open Access team.

As anyone working in it knows all too well, Open Access can be a complicated field, with multiple policies from funders, institutions and publishers which can be complex, sometimes obscure and sometimes mutually contradictory. While we’re keen to raise awareness of and engagement with Open Access issues, the University of Cambridge’s view is that expecting academics to get to grips with all this themselves would represent an unreasonable demand on their time and likely lead to errors and resentment.

Instead, Cambridge’s policy is that authors should simply send us their Accepted Manuscript at acceptance through our simple upload system and our team of Research Advisers will check out exactly what they need to do to comply with all the relevant funder and journal policies and get back to them with individually-tailored advice. The same system also allows us to take care of deposit into the repository for HEFCE and to manage payments from the block grants we’ve received from the UK Research Councils (RCUK) and the Charities Open Access Fund (COAF – seven biomedical charities, including the Wellcome Trust).

The idea is that from the academic’s point of view the process feels smooth and seamless. But the reality is that very little of the process is automated. Behind the scenes there’s a lot of (thankfully metaphorical) running around by our team of three Open Access Research Advisers to provide this service, as well as working on broader issues of communication, processing APCs and improving our systems.

So what does a Cambridge Open Access Research Adviser do all day? Here’s a typical day in the life…

8.45am – Getting started

Arriving in the office, I check my emails and look at the Open Access Helpdesk. Overnight we've received around 15 new tickets, as well as some further correspondence on existing ones. Fairly typical. It's split between manuscript uploads that need advice, general queries and invoicing correspondence from publishers. I start working through these on a first-come, first-served basis.

They’re a real mixed bag. If a submitted article is straightforward we can deal with it in a few minutes – we check the journal site for their green and gold options and then advise the author on which is appropriate in each case. We also flag the manuscript for deposit into our repository – at the moment that’s a manual process and is mostly handled by temps.

Today things aren't straightforward. A lot of the submissions are conference proceedings and there's very little information on the conference websites. It's not even clear whether some of these are being formally published (does private distribution on a memory stick count? Do they have ISBNs or ISSNs?). It's going to be a slow morning of chasing up authors and conference organisers for any information they have.

10.00am – Complexity

I’m more or less through the conference proceedings, but we’re not through with complex cases. One of the invoices we’ve received is for an article we’ve not heard about before. It’s from a senior professor but he’s never submitted it to the open access service so we weren’t able to advise him on policy or eligibility for block grant funds. He selected the gold option for a Wellcome-funded correspondence article and now wants us to pay the $5000 + VAT bill. The trouble is, letters aren’t covered by the Wellcome policy so technically it isn’t eligible. I contact the author and break the news that he might have to pay this large bill himself and that this is why we like people to contact us first.

11.00am – Clarity

The professor has got back to us. Although the journal’s classed it as a letter, the paper’s actually a very short research article, he says. I decide to contact Wellcome for guidance and let them decide whether they want this to be paid for from the COAF block grant.

11.30am – Déjà vu

For the moment the backlog on the helpdesk has been cleared and our temps are busy adding manuscripts to the repository and updating previously-added articles with citation details and embargo end-dates. I have a bit of free time to move on to something else so begin to tackle the stack of publisher APC invoices that need processing.

They’re mostly correct, but some publishers and invoicing companies are better than others. Inevitably there are a few errors that need chasing up or publishers who have invoiced us repeatedly for the same thing. Among the stack is an overdue notice from a major publisher for a familiar article. It’s one we’ve repeatedly confirmed was paid fully almost two years ago but every few months ever since the publisher has told us it’s outstanding. I send them back the payment reference and details yet again and ask them to mark the issue as resolved. I somehow suspect we’ll be seeing it again.

2.00pm – Presentation

Today offers a welcome opportunity to get out of the office. We're holding a joint Open Access/Open Data presentation to researchers in one of the University's departments to try and increase awareness of the policies. Our stats show that this department has particularly low engagement with the Open Access service, so we're keen to work out why. It's a fractious crowd. One or two people are keen Open Access advocates and speak up to say how simple the system is, but some others are vocal about their view that it's an unwarranted burden and tell us they don't see why they should bother.

We try to explain the benefits and funder mandates, as well as how we’ve tried to make the system as simple as possible. When we get back to the office we find that one of those present has sent us their back-catalogue of thirty articles stretching back to 2007 to put into the repository.

4.00pm – Compliance

While my colleagues work on the helpdesk I need to turn my attention to compliance and reporting. All too often when we've paid an APC the publisher hasn't delivered Open Access with the correct licence, or in some cases at all. I generally try to do a weekly check of the articles for which we've paid APCs to see whether they've been published correctly, but it's time-consuming and things have been busy lately. It's been around three weeks since the last check so it really needs doing.

But the deadline is also fast approaching for annual reports to RCUK and COAF. These are both large and complex, and cover slightly different periods (and different again from the Jisc report a couple of months ago). It’s proving a major challenge to get the information together from our various systems and to match it to the relevant figures from the University Finance System. I decide to let the compliance checking wait a bit longer and work on trying to move things along on the reports. I make a bit of progress, but there’s still a huge amount left to do – information on thousands of articles that needs to be manually collated. With luck in the future we’ll have integrated systems that can do much of this automatically, but for now each report represents weeks of work.

Wrap up

There is, then, a huge variety and amount of work that goes into the Open Access service. The Helpdesk and the reporting alone would be more than enough to keep us busy, but we also have to make time for outreach and communications, managing the finances, improving our systems and more. We’re finding that as our team grows, we’re starting to specialise more into particular areas, but we’re still basically all generalists, working on all areas of the job. This balance between specialisation for the purposes of efficiency and the need for individuals to be able to move effectively from one task to another – not least to keep our jobs interesting and varied – is one that’s likely to become ever more challenging as the volume of articles we handle increases.

Published 19 October 2015
Written by Dr Philip Boyes
Creative Commons License

In conversation with Michael Ball from BBSRC

The Biotechnology and Biological Sciences Research Council (BBSRC) Data Sharing Policy states that research data supporting publications must be stored for 10 years, and that adherence to data management plans will be monitored and built into the Final Report score, which may be taken into account for future proposals.

Recently Michael Ball, Strategy and Policy Manager at BBSRC, accepted an invitation to Cambridge University to discuss the BBSRC policy on opening up access to data. Senior members of the University, the School of Biological Sciences, the Research Office and the Office of Scholarly Communication attended. These notes have been verified by Michael as an accurate reflection of the discussion.

The take home messages from the meeting were the importance of:

  • Disciplines themselves establishing ways of dealing with data
  • Thinking about how to deal with data from the beginning of a research project

The meeting began with a discussion about the support we provide Cambridge University researchers through the Research Data Service, the resources provided on the data website and the enthusiastic uptake of the service since the beginning of the year.

The conversation then moved into issues around the policy, focusing on several aspects – clarification of what needs to be shared, how this will be supported financially, questions about auditing, a discussion about the best place to keep the data and issues with data sharing in the biological sciences.

What data are we expected to share?

What is ‘supporting data’ in the biological sciences?

One of the biggest concerns biological researchers have about data sharing is what is meant by 'data'. Biology produces the most diverse range of data of any field, which makes it hard to generalise: the issues are project- and problem-specific.

Michael confirmed the policy broadly refers to all data, 'but the devil is in the detail, there are lots of caveats'. He echoed Ben Ryan's answer to a similar question about the EPSRC policy by saying the key points are:

  • What would you expect to see?
  • What do you think is important?

The interpretation of the BBSRC policy depends heavily on the types of data being produced.  Much is dependent on the expected norms, what a researcher would expect to see if they were trying to interpret the paper. What are the underlying supporting data for the paper?

The biological sciences pose a particular challenge because of the range and disparity of disciplinary norms. For example, a great deal of data arises from genomics, and that community decided some time ago to share, including making decisions about what to share and what not to share. However, there are vast areas of experimental science where the paper itself is the data.

The policy is going one step further back from the published paper towards the lab. In the future these data policies might go further back, if there was greater automation of the research process.

Michael confirmed that if the BBSRC has funded a PhD student they would expect them to make supporting data available.

What do we need to share in the Biological Sciences?

There is no expectation to share lab books unless they are the only place the data exists. Michael noted that when the BBSRC wrote the policy it excluded lab books and organisms.

However there is an expectation to share instrumental output. This is with the caveat that if it is output from an instrument that goes through some sort of amendment then you don’t need to share the original.

An example: a researcher is counting bacteria on a plate and scrupulously making notes in lab books before entering this information into a computer spreadsheet to crunch the numbers. The expectation would be to share the spreadsheet, not the lab book.

Some research requires the construction of a piece of technology where there might not be a great deal of associated data around it. In these instances it is the process of construction or the protocol or the methodology that is important to share.

Michael noted that in some disciplines, given the same materials, input parameters and instruments, the output data will be the same each time. In these circumstances it is most sensible to share or describe the inputs so that the experiments can be repeated. The question is about what would be most useful to share.

Show me the money

A stitch in time

Michael confirmed that researchers can ask for the money they need (and can justify) for research data management in grant applications. He did say, however, that the BBSRC does not 'generally see a lot of these requests'. He noted that this is because often people haven't thought about the data they will generate at the start of the project. One of the researchers pointed out it was difficult to know how to fund it because 'we are not sure what we need'. However, this should not be a reason to ask for nothing.

It may be that some of the discipline specific repositories will have to change their business models in the future to cope with larger data sets.

Michael said that it is worth thinking about data sharing at the project planning stage because different types of data have different requirements. Researchers might need to allow for the cost of getting the data in the right format and metadata. It is advisable to think about where the data will be published so the research team can prepare the data in the first instance.

Michael said that the data management plan should prompt researchers to consider how much data a project will produce; it is advisable to allow for the maximum amount the project may generate. The ideal is a data management plan that is kept up to date throughout the project, since it is also useful at the end.

Longer term financial support

Raised in the meeting was the option of charging a flat fee up front, regardless of the data being generated. The question arose as to whether there was any danger in auditing with this approach. The problem with an up-front fee is that it becomes more difficult to track the output from a specific grant against what was put into the database. There is a directly incurred and a directly allocated component to the cost.

Michael confirmed that any money allocated to data management won’t survive past the end of the grant. He noted this was something that he was ‘not sure how to unpick’. This raises the issue of the cost of longer term data sharing. The BBSRC provides funding to a certain point in time. There can be a secondary experiment funded by someone else and the works are published together. But the researcher can only share the data from the funded part. The BBSRC does not ask researchers to share data that they haven’t funded.

Auditing questions

Who is in charge here?

The academics raised the concern that there could be ‘mission creep’ where the funders expect people to do things that are a waste of time. They mentioned that an ideal situation would be where the research community decide what they want to share and what they don’t wish to share.

Michael noted that the BBSRC has to be guided by the community on its own norms for data sharing, which is why aspects of the data sharing policy are quite open. He noted that this meeting represented the first part of that process – where the funder comes together with communities to decide what is essential.

In addition, many journals are now requiring open data. It is the funders, the researchers and the journals who are asking for it. To some extent the BBSRC policy is guided by what the journals are asking for.

The policing process

The group expressed interest in how the BBSRC policy is policed and what the focus of that policing would be. Michael stated that BBSRC is investigating options for monitoring compliance, but that it does not currently appear feasible to check all of the submissions. BBSRC will monitor compliance, but will probably start with dipstick testing. They will look at historical projects and see where the process goes from there. In practice, this is likely initially to involve examining the degree of adherence to the submitted data management plans. If a researcher has acted reasonably and justified their mechanisms of data sharing, then it is unlikely that there would be any action beyond noting where difficulties had occurred.

Note, however, that if a researcher has submitted a grant application with a data sharing statement there is a reasonable expectation that they will share the data.

Ultimately the data release will be policed. In areas where data sharing is prevalent, communities police themselves because researchers ask and expect the data to be available. In some cases you can’t publish without an accession number.

Michael noted there are places researchers can put information about published data into ResearchFish. ResearchFish is currently the only mechanism to capture information regarding post-award activities.

Where do we put the data?

The question arose about how other universities are managing the policy. Michael responded that many have started institutional repositories. The institutional response depends on where the majority of their research sits.

A possible solution for ensuring the data is discoverable would be a catalogue of what is stored in an institutional repository, with metadata about the data. That metadata would itself need to be discoverable. If the data is being held in a centralised repository it is possible to pay the cost upfront before the end of the grant.

The group noted there was a publishing preference for discipline specific repositories over institutional repositories because the community knows how to look after the work. These repositories are hosted by ‘people who know what they are doing’. They are discoverable, where the community can decide on the metadata and the required standards.

Michael agreed that the ideal was open discoverability. The question is what will be practically possible.

One way of considering the question is to ask: how would another researcher find the information? If the data is available from the researcher on request, this should be noted in the paper. If it is available in a repository, the paper should state that. If the journal has told readers where the data is, then it should be self-evident.

Issues with obsolescence

Michael noted that there is an ongoing issue of obsolete data formats and disks. Given the gap between ideals and reality, it becomes a question of how best to store and handle the information.

When data exists in a proprietary format, the researcher needs to think about how to access it in the longer term. What if the organisation goes out of business? Or the technology upgrades so you can’t get hold of the data in an earlier format? If data exists in a physical format then it is possible to go back and read it. However, if not then it is quite important to think about issues relating to long-term access. Lots of data will be obsolete.

There are some solutions to this issue. The Open Microscopy Environment is a joint project between universities, research establishments, industry and the software development community. It develops open-source software and data format standards for the storage and manipulation of biological microscopy data. This is a community-generated solution to a recognised problem, and it includes a database to which you can upload any file format.

Issues with data sharing in the biological sciences

The BBSRC allows a reasonable embargo until the researcher has exploited the data for publication. If the researcher is planning further publications then they should consider carefully when to release the data. Michael noted this is 'not a forever thing'; the BBSRC does say there are reasonable limits, and some journals will expect data to be released alongside publications.

Commercial partners

Data emerging from BBSRC funded research needs to be shared unless there is a reason why not – and commercial partners who need to protect their intellectual property can be a good reason to delay data sharing. However once the Intellectual Property is protected, it is protected. The BBSRC allows researchers to embargo the data.

Michael also noted there are things that can be done with data, for example releasing it under licence. If a researcher is working with a commercial partner who is concerned about competitors, for instance, it would be possible to require people to sign non-disclosure agreements. There are ways to deal with commercial data, as with other intellectual products.

It was noted by the researchers in the meeting that this type of arrangement is likely to mean the company doesn’t want to go through the process and won’t collaborate.

Exceptions

If data was generated before the policy was in place then the researcher will not have submitted a grant application requiring them to share their data. The BBSRC is not expecting people to go back into history. Researchers who wish to share historical research are not discouraged, but this is not covered by the policy. The policy came into force in April 2007, though realistically it started in 2008.

In addition there are reasonable grounds for not sharing clearly incorrect or poor quality data. Many disciplinary databases will contain an element of quality control. But Michael noted that the policy shouldn't be a way for people to filter out inconvenient data, and he would expect the community to be self-policing.

Future policy direction

Michael noted that this type of policy is becoming more prevalent, not less. Open science is one of the Horizon 2020 themes – see the 2013 Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020. Journals are getting involved as well. In the future, sharing data will be more common – and driven by disciplinary norms. Anything funded by RCUK will be subject to data sharing requirements. It makes sense to governments too – the US National Institutes of Health and National Science Foundation have data sharing statements.

Continuing the dialogue

Michael indicated that he wants to talk to people about what the questions are so the BBSRC can refine issues in the policy.

Researchers who have questions about the policy can send them to the Research Data Service team at info@data.cam.ac.uk. If we are unable to answer them, we can ask BBSRC directly for clarification. We will then add the information to the University Research Data Management FAQ webpage.

Published 19 October 2015
Written by Dr Danny Kingsley, verified by Michael Ball, BBSRC
Creative Commons License

Data sharing – build it and they will come

If a tree falls in the forest and no one was there to hear it, did it happen? You could ask the same philosophical question of research – if no-one can see the research results, what was the point in the first place?

Moving science forward and increasing our knowledge of the world around us implies the exchange of findings. Society cannot benefit from research if there is no awareness of what has been done. Managing and sharing research data is a fundamentally important part of the research process. Yet researchers are often reluctant to share their data, and some are openly hostile to the idea.

This blog describes the research data services provided at Cambridge University, which attempt to encourage and assist researchers in managing and sharing their data.

A tough start

The Data Management Facility project at Cambridge began operations in January 2015. At the time there was very little user support for data management in place.  There was no advocacy, no training and no centralised tools to support researchers in research data management.

There had been a substantial body of work undertaken in 2010-2012 as part of the ‘Incremental’ project into research data management, but once the project money ended, the resources remained available but were not updated.

One of the initial challenges was an out of date institutional repository. Cambridge University was one of the original test-bed institutions for DSpace in 2005. While there had been considerable effort invested in the establishment of the repository, it had in recent years been somewhat neglected. The lack of both awareness of the repository and support for researchers was reflected in the numbers: during the first decade of the repository, only 72 datasets had been deposited.

In addition, the Engineering and Physical Sciences Research Council (EPSRC) had compliance expectations for funded research kicking in from May 2015. This gave us five months to pull the Research Data Facility together. It was a tough start.

Understanding researchers’ needs

Tight deadlines often bring the temptation to create short-term solutions, but we did not want to take this path. Solutions created without first understanding the need carry no guarantee of resolving the actual issues at hand.

So we started talking with researchers. We met and spoke with hundreds of researchers across all disciplines and fields of study – Principal Investigators, postdocs, students, and staff members. These were both group sessions and individual meetings. We told them about the importance of sharing research data, and in return we listened to what researchers told us about their worries and possible problems with data sharing.

To date, we have spoken with over 1000 researchers, and from each meeting we kept detailed notes of all the questions/comments received.

We additionally conducted a questionnaire to better understand researchers' needs for research data management support. Of the researchers surveyed, 83% indicated that it would be 'very useful' for the University to provide information about funders' expectations for research data sharing and management, as well as support.


Solution 1 – Providing information

In March 2015 we launched the Research Data Management website – a single location for solutions to all research data management needs, bringing together information about funders' expectations, guidance on managing and sharing data, and much more.

The key idea behind the website is to provide an easy to navigate place with all necessary information. The website is being constantly updated, and new information is regularly added in response to feedback received from researchers.

Concurrently we have been conducting tailored information sessions about funders’ requirements for sharing data and support available at the University of Cambridge. We run these sessions at multiple locations across the University, and to audiences of various types. The sessions ranged from open sessions in central locations to dedicated sessions hosted at individual departments, and speaking with individual research groups. Slides from information sessions are always made available for attendees to download.

Solution 2 – Assistance with data management plans and supporting data management

In the survey 82% of researchers said it would be very helpful if there were someone at the University available to help with data management plans. To address this, we have:

  • Added tailored information about data management plans to our information sessions.
  • Linked the DMPonline tool from our data website. This allows researchers to prepare funder specific data management plans
  • Organised data management plan clinic sessions (one to one appointments on demand)
  • Prepared guidelines on how to fill in a data management plan.

Additionally, 63% of researchers indicated that it would be 'very useful', and a further 31% that it would be 'useful', to have workshops on research data management. We have therefore prepared a 1.5-hour interactive introductory workshop on research data management, which is now offered in various departments across the University. We are also developing skills among library staff across the institution to deliver research data management training to researchers in their field.

Solution 3 – Providing an institutional repository

Finally, 79% of researchers indicated that it would make data sharing easier if the University maintained its own easy-to-use data repository. We therefore had to do something about our repository, which had not been updated for a long time. We have rolled out a series of updates, taking it to version 4.3, which allows us to mint DOIs for datasets.

Meanwhile we also had to think of a strategy to make data sharing as easy as possible. The existing processes for uploading research data to the repository were complicated and discouraging for researchers. We did not have any web-based facility that would allow researchers to easily get their data to us; in fact, most of the time we asked researchers to bring their data to us on external hard drives. This was not an acceptable solution in the 21st century!

Researchers like simple processes – Dropbox-like solutions where one can easily drag and drop files. We have therefore created a simple webform which asks researchers for the minimal necessary metadata and allows them to drag and drop their data files.
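To give a flavour of what "minimal necessary metadata" means in practice, the check behind such a form might look something like the following sketch. The field names here are purely illustrative assumptions, not the actual fields of the Cambridge form:

```python
# Hypothetical sketch of the minimal-metadata check behind a simple
# data-deposit webform. Field names are illustrative, not Cambridge's.

REQUIRED_FIELDS = {"title", "authors", "description", "contact_email"}


def missing_metadata(submission: dict) -> list:
    """Return the required fields that are absent or empty, sorted."""
    present = {key for key, value in submission.items() if value}
    return sorted(REQUIRED_FIELDS - present)


def is_complete(submission: dict) -> bool:
    """A deposit is accepted only when nothing required is missing."""
    return not missing_metadata(submission)
```

The point of keeping the required set this small is exactly the one made above: the less researchers are asked for up front, the lower the barrier to depositing at all; richer metadata can always be added by repository staff later.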

The outcomes

It turned out in the end that it was really worth the effort of understanding researchers' needs before considering solutions. As of 24 August 2015, the Research Data Management website had been visited 10,992 times. Our training sessions on research data management and data planning have received extremely good feedback – 73% of respondents indicated that our workshops should be 'essential' for all PhD students.

And most importantly, since we launched our easy-to-upload website form for research data, we have received 122 research data submissions – in four months we have received more than 1.5 times more research outputs than in ten years of our repository’s lifetime.

So our advice to anyone wishing to really support researchers is to truly listen to their needs, and address their problems. If you create useful services, there is no need to worry about the uptake.

This infographic demonstrates how successful the Research Data Facility has been. Prepared by Laura Waldoch from the University Library, it is available for download.

To know more about our activities, follow us on Twitter.

 

Published 24 August 2015
Written by Dr Marta Teperek and Dr Danny Kingsley
Creative Commons License