Monthly Archives: January 2017

The art of software maintenance

When it comes to software management there are probably more questions than answers – that was the conclusion of a recent workshop hosted by the Office of Scholarly Communication (OSC) as part of a national series on software sustainability, sharing and management, funded by Jisc. The presentations and notes from the day are available, as is a Storify of the tweets.

The goal of these workshops was to flesh out the current problems in software management and sharing and try to identify possible solutions. The researcher-led nature of this event provided researchers, software engineers and support staff with a great opportunity to discuss the issues around creating and maintaining software collaboratively and to exchange good practice among peers.

Whilst this might seem like a niche issue, an increasing number of researchers are reliant on software to complete their research, and for them the paper at the end is merely an advert for the research it describes. Stephen Eglen described this in his talk as an ‘inverse problem’ – papers are published and widely shared but it is very hard to get to the raw data and code from this end product, and the data and code are what is required to ensure reproducibility.

These workshops were inspired by our previous event in 2015, where Neil Chue Hong and Shoaib Sufi spoke with researchers at Cambridge about software licensing and Open Access. Since then the OSC has had several conversations with Daniela Duca at Jisc and together we came up with an idea of organising researcher-led workshops across several institutions in the UK.

Opening up software in a ‘post-expert world’

We began the day with a keynote from Neil Chue Hong from the Software Sustainability Institute, who outlined the difficulties and opportunities of being an open researcher in a ‘post-expert world’ (the slides are available here). Reputation is crucial to a researcher’s role and therefore researchers seek to establish themselves as experts. On the other hand, this expert reputation can be tricky to maintain, since making mistakes is an inevitable part of research and discovery – something poorly understood outside of academia. Neil introduced Croucher’s Law to help us understand this: everyone will make mistakes, even an expert, but an expert will be aware of this and so will automate and share their work as much as possible.

Accepting that mistakes are inevitable in many ways makes sharing less intimidating. Papers are retracted regularly due to errors and Neil gave examples from a variety of disciplines and career stages where people were open about their errors so their communities were accepting of the mistakes. In fact, once you accept that we will all make mistakes then sharing becomes a good way to get feedback on your code and to help you fix bugs and errors.

This feeds into another major theme of the workshop which Neil introduced: that researchers need to stop aiming for perfect and adopt ‘good enough’ software practices for achievable reproducibility. This recognises that one of the biggest barriers to sharing is the time it takes to learn software skills and prepare data to the ‘best’ standards. Good enough practices mean accepting that your work may not be reproducible forever, but that it is more important to share your code now so that it is at least partially reproducible. Stephen Eglen built on this with his paper ‘Towards standard practices for sharing computer code and programs in neuroscience’, whose recommendations include providing data, code and tests for your code, and using licences and DOIs.
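To make the ‘good enough’ idea concrete, here is a minimal sketch of what a small shared analysis function with an accompanying test might look like. This is our own illustration rather than an example from Eglen’s paper, and the function and file names are hypothetical:

```python
# analysis.py -- a small, documented function that collaborators can reuse
def normalise(values):
    """Scale a list of numbers so that they sum to 1.

    Raising an error on bad input reflects Croucher's Law: mistakes will
    happen, so make them surface early rather than propagate silently.
    """
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalise values that sum to zero")
    return [v / total for v in values]


# test_analysis.py -- a 'good enough' test, runnable with pytest
def test_normalise_sums_to_one():
    result = normalise([2, 3, 5])
    assert abs(sum(result) - 1.0) < 1e-9
```

Alongside code like this, the paper’s other recommendations – sharing the underlying data, adding a licence and minting a DOI for the released version – are what make the work citable and at least partially reproducible by others.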

Both speakers and the focus groups in the afternoon highlighted that political work is needed, as well as cultural change, to normalise code sharing. Many journals now ask for evidence of the data which supports articles and the same standards should apply to software code. Similarly, if researchers ask for access to data when reviewing articles then it makes sense to ask for the code as well.

Automating your research: Managing software

Whilst sharing code can be seen as the end of the lifecycle of research software, writing code with the intention of sharing it was repeatedly highlighted as a good way to make sure it is well-written and documented. This was one of several ‘selfish’ reasons to share, where sharing also helps the management of software, through better collaboration, the ability to track your work and being able to use students’ work after they leave.

Croucher’s Law demonstrates one of the main benefits of automating research through software: the ability to track mistakes, improving reproducibility and making them easier to fix. Lots of tools to assist with managing software were mentioned throughout the day, from the well-known version control and collaboration platform GitHub to more dynamic tools such as Jupyter notebooks and Docker. As well as these technical tools there was also discussion of more straightforward methods to maintain software, such as getting a code buddy who can test your code and creating appropriate documentation.

Despite all of these tools and methods to improve software management it was recognised by many participants that automating research through software is not a panacea; the difficulties of working with a mix of technical and non-technical people formed the basis of one of the focus groups.

Sustaining software

Managing software appropriately allows it to be shared, but re-using it in the long (or even medium) term means putting time into sustaining code and making sure it is written in a way that is understandable to others. The main recommendations from our speakers and focus groups to ensure sustainability were to use standards, create thorough documentation and embed extensive comments within your code.

As well as thinking about the technical aspects of sustaining software, there was also discussion of what is required to motivate people to make their code re-usable. Contributing to a community seemed to be a big driver for many participants, so finding appropriate collaborators is important. However, larger incentives are needed: creating and maintaining software is not currently well rewarded as an academic endeavour. Suggestions to rectify this included more software-oriented funding streams, counting software as an output when assessing academics, and creating a community of software champions to mirror the Data Champions scheme we recently started in Cambridge.

Next steps

This workshop was part of a national discussion around research software, so we will be looking at the outcomes of other workshops and wider actions the Office of Scholarly Communication can support to facilitate sharing and sustaining research software. Apart from Cambridge, five other institutions held similar workshops (Bristol, Birmingham, Leicester, Sheffield, and the British Library). As one of the next steps, the organisers of these events want to meet up to discuss the key issues raised by researchers, to see what national steps should be taken to better support the community of researchers and software engineers, and to consider whether there are any remaining problems with software which could require a policy intervention.

However, following the maxim to ‘think global, act local’, Neil’s closing remarks urged everyone to consider the impact they can have by influencing those directly around them to make a huge difference to how software is managed, sustained and shared across the research community.

Published 29 January 2017
Written by Rosie Higman
Creative Commons License

‘Paperless research’ solutions – Electronic Lab Notebooks

The Office of Scholarly Communication started 2017 with a discussion about ‘going digital’ – on 13 January 2017 we organised an event at Cambridge University’s Department of Engineering to flesh out the problems preventing researchers from implementing Electronic Lab Notebook solutions. Chris Brown from Jisc wrote an excellent blog post with his reflections on the event* and agreed for us to re-blog it here.

For researchers working in laboratories, the importance of recording experiments, results, workflows and so on in a notebook is ingrained in you as a student. However, these paper-based solutions are not ideal when it comes to sharing and preservation. They pile up on desks and shelves, vary in quality and often include printed data stuck in. To improve on this situation and resolve many of these issues, e-lab notebooks (ELNs) have been developed. Jisc has been involved in this work through funding projects such as CamELN and LabTrove in the past. Recently, interest in this area has been renewed with the Next Generation Research Environment co-design challenge.

On Friday 13 January I attended the E-Lab Notebooks workshop at the University of Cambridge, organised by the Office of Scholarly Communication. Its purpose was to open up the discussion about how ELNs are being used in different contexts and formats, and the concerns and motivations of people working in labs. A range of perspectives and experience was offered through presentations, group and panel discussions. The audience were mostly from Cambridge, but there was representation from other parts of the UK, as well as Denmark and Germany. A poll at the start showed that the majority of the audience were researchers (57%).

Institutional and researchers’ perspective on ELNs at Cambridge

The first part of the workshop focussed on the practitioners’ perspective, with presentations from the School of Biological Sciences. Alastair Downie (Gurdon Institute) talked about their requirements for an ELN, as well as anxieties and risks of adopting a particular system. Research groups currently use a variety of tools, such as Evernote and Dropbox, and often these are trusted more than ELNs. The importance of trust frequently came up during the day. Alastair conducted a survey to gather more detail on the use and requirements of ELNs and received an impressive 345 responses. Cost and complexity were given as the main reasons not to use ELNs. However, when asked for the most important features, cost mattered less and ease of use came top. Researchers want training, voice recognition and remote access. There is clear interest across the school at all levels, but it requires a push with guidance and direction.

Marko Hyvönen (Dept of Biochemistry) gave the PI perspective and the issues with an ELN for a biochemical lab. He reinforced what Alastair had said about ELNs. He showed how paper log books pile up, deteriorate over time and sometimes include printed information. They are hard to read and easy to destroy, a poor return on effort, often disappear and are not searchable. It was interesting to hear about bad habits such as storing data in non-standardised ways, missing data, and printing out Word documents and sticking them into the lab books.

With 99% of their data electronic, many of the issues in the use of lab books are really about data management rather than ELNs. An ELN solution should be easy to use, cross-platform, have a browser front end, be generic/adaptable, allow sharing of data and experiments, enforce Standard Operating Procedures when needed, have templates for standard work to minimise repetition, and allow input of data from phones and other non-specialist devices. What they don’t want are the “bells and whistles” features they don’t use. Getting buy-in from people is the top issue to overcome in implementing an ELN.

Views on ELNs from outside the UK

Jan Krause from the École Polytechnique Fédérale de Lausanne (EPFL) gave a non-UK perspective on ELNs. He described a study, carried out as part of a national RDM project, in which they separated ELNs (75 proprietary, 12 open source – 91 features) from Lab Info Management Systems (LIMS) (281 proprietary, 9 open source – 95 features) and compared their features. The two tools used most in Switzerland are SLims (a commercial solution) and openBIS (a home-grown tool). To decide which tool to use they undertook a three-phase selection process. The first phase was based on disciplinary and technical requirements. The second involved detailed analysis based on user requirements (interviews and evaluation weighted by feature) and price. The third was tendering and live demos.

Data storage, security and compliance requirements

When using and sharing data you need to make sure your data is safe and secure. Kieren Lovell, from the University Information Services, talked about how researchers should keep their data and accounts safe. Since he started in May 2015, all successful hacks on the university have been due to human error, such as unpatched servers, failures in processes, bad password management, and phishing. Even if you think your data and research aren’t important, the reputational damage of security attacks to the university is huge. He recommended sharing research data through cloud providers rather than email, never trusting public wifi (it is not secure, so use Cambridge’s VPN service instead) and, if using a local machine, encrypting your hard drive.

Providers’ perspective

In the afternoon, presentations were from the providers’ perspective. Jeremy Frey, from the University of Southampton, talked about his experience of developing an open source ELN to support open and interdisciplinary science. He works on getting the people and technology to work together. It’s not just about recording what you have done; you need to include the narrative behind what you do. This is critical for understanding, and ELNs are one part of the digital ecosystem in the lab. The solution they’ve developed is LabTrove, partly funded by Jisc, which is a flexible open source web-based solution. Allowing pictures to be added to the notes has really helped with accessibility and usability, for example for researchers with dyslexia. Sustainability came up, as it often does, along with the need for a community to support such a system, one that expands beyond Southampton. Finally, Jeremy used Amazon Echo to query the temperature within part of his lab. He hopes that this will be used more in the lab in the future when it can recognise each researcher’s voice.

In the next two presentations, it was over to the vendors to show the advantages of adopting RSpace (by Rory Macneil) and Dotmatics (by Dan Ormsby). The functionality on offer in these types of solutions is attractive for scientists and RSpace showed how it links to most common file stores. With any ELN, it should enhance researchers’ workflow and integrate with the tools they use.

Removing the barriers

After lunch there were three parallel focus group discussions. I attended the one on sustainability, something that comes up frequently in discussions, particularly when looking at open source or proprietary solutions. Each group reported back as follows:

Focus group 1: Managing the supplier lock in risk

Stories of use need to be shared. PDF is not a great format for sharing. Vendors tell the truth in much the way estate agents do. We have to accept that no export will preserve 100% of functionality, so the minimum acceptable level needs to be decided. Determine specific users’ requirements.

Focus group 2: Sustainability of ELN solutions

What is the lifetime of an ELN? How long should everything be accessible? Various needs come from group and funder requirements, e.g. 10 years. There is concern about relying on one commercial solution, as companies can fold – so how can you guarantee the data will remain available? Have exit policies, and support standards and interoperability so data can be moved across ELNs. Broken links and expiring file formats are not just an ELN problem; they relate to the archiving of data in general. Should selection and support of an ELN be at group, department, institution or national level? It is difficult at the level of a single group, as adopting any technical solution requires support to be in place; it really requires institutional-level support.

Focus group 3: Human element of ELN implementation

The biggest hurdle is culture change and showing the benefits of using an ELN. Training and technical support cost money and time: an ELN costs more initially but becomes more efficient over time. You can incentivise people by having champions. There are different needs in a large institution, and you may join a lab and find its ELN is not adequate. Legal issues around sensitive data complicate matters. You need to believe it will save time. Long-term options include using cloud-based solutions, even MS Office, but what happens when people leave? Support is needed from a higher level. Functionality should be based on user requirements. A start would be to set up a mailing list of people interested in ELNs.

Remaining barriers to wide ELN adoption

Finally, I chaired a panel session with all the presenters. Marta Teperek had kindly asked me to give a short presentation on what Jisc does, as many researchers don’t know (in fact I was asked “what’s Jisc?” in the focus group), and to promote the Next Generation Research Environment co-design challenge. Following my presentation the discussion was prompted by questions from the audience and remotely via sli.do. Much of the discussion reiterated what had been said in the presentations, such as the importance of an ELN that meets the requirements of researchers. It should allow integration with other tools and export of data for use in other ELNs. Getting ELNs used within a department is often difficult, so it does need institution-level commitment and support. Without this, ELNs are unlikely to be adopted within an institution, never mind nationally. One size does not fit all, and we should not try to build a single ELN that satisfies the different needs of various disciplines. A modular system that integrates with the tools and systems already in use would be a better solution. Much of what was said tallied with the feedback received for the Next Generation Research Environment co-design challenge.

Closing remarks

Ian Bruno closed the workshop, reiterating what was said in the panel discussion. I found the event extremely helpful and it provided lots of useful information to feed into the Next Generation Research Environment work. I’d like to thank Marta Teperek for inviting me to chair the panel and for all her hard work putting the event together with @CamOpenData. Marta has put together the tweets from the day into a Storify. All notes and presentations from the event are now published in Apollo, the University of Cambridge’s research repository.

Follow-up actions at the University of Cambridge – give it a go!

Those of you who are interested in ELNs and who are based at the University of Cambridge might be interested to know that we are planning to trial access to Electronic Lab Notebooks (ELNs). The purpose of this trial will be to test several ELNs in order to decide which solutions might best meet the requirements of the research community. A mailing list has been set up for people who are interested in being part of this pilot or who would like to be involved in these discussions. If you would like to be added to the mailing list, please fill in the form here: https://lists.cam.ac.uk/mailman/listinfo/lib-eln

*Originally published by Jisc on 18 January 2017.

Published on 29 January 2017
Written by Chris Brown
Creative Commons License

2016 – that was the year that was

In January last year we published a blog post ‘2015 that was the year that was‘, which not only helped us take stock of what we had achieved, but was also very well received. So we have decided to do it again. For those who are more visually oriented, the slides ‘The OSC a lightning Tour‘ might be useful.

Now starting its third year of operation, the Office of Scholarly Communication (OSC) has expanded to a team of 15, managing a wide variety of projects. The OSC has developed a set of strategic goals to support its mission: “The OSC works in a transparent and rigorous manner to provide recognised leadership and innovation in the open conduct and dissemination of research at Cambridge University through collaborative engagement with the research community and relevant stakeholders.”

1. Working transparently

The OSC maintains an active outreach programme, which fits with the transparent manner in which it works; this also includes actively documenting our workflows.

One of the ways we work transparently is to share many of our experiences and ideas through this blog, which receives over 2,000 visits a month. During 2016 the OSC published 41 blog posts – eight each on Scholarly Communication and Open Research, 14 on Open Access, nine on Research Data Management and two on Library and training matters. The posts we published in Open Access week were accessed 1,630 times that week alone.

In addition to our websites for Scholarly Communication and Open Access, our Research Data Management website has been identified internationally as best practice and receives nearly 3,000 visitors a month.

We also run Twitter feeds for both Open Access (1,100 followers) and Open Data (close to 1,200 followers). Many of the OSC staff also run their own Twitter feeds sharing professional observations.

We also publish monthly newsletters, including one on scholarly communication matters; our research data management newsletter has close to 2,000 recipients. Our shining achievement for the year, however, has to be the hugely successful scholarly communication Advent Calendar (which people are still accessing…).

We practise what we preach and share information about our work practices, such as our reports to funders on APC spend, through our repository Apollo and also by blogging about it – see Cambridge University spend on Open Access 2009-2016. We also share our presentations through Apollo and on SlideShare.

2. Disseminating research

The OSC has a strong focus on research support in all aspects of the scholarly communication ecosystem, from concept, through study design, preparation of research data management plans, decisions about publishing options and support with the dissemination of research outputs beyond the formal literature. The OSC runs an intense programme of advocacy relating to Open Access and Research Data Management, and has spoken to nearly 3,000 researchers and administrators since January 2015.

2.1 Open Access compliance

In April 2016, the HEFCE policy requiring that all research outputs intended to be claimed for the REF be made open access came into force. As a result, there has been an increased uptake of the Open Access Service with the 10,000th article submitted to the system in October. Our infographics on Repository use and Open Access demonstrate the level of engagement with our services clearly.

Currently half of the entire research output of the University is being deposited to the Open Access Service each month (see the blog: How open is Cambridge?). While this is good from a compliance perspective, it has caused some processing issues due to the manual nature of the workflows and insufficient staff numbers. At the time of writing, there is a backlog of over 600 items waiting to be put into the repository and a backlog of over 2,300 items waiting to be checked for publication so that we can update their records.

The OA team made over 15,000 ticket replies in 2016 – or nearly 60 per working day!

2.2 Managing theses

Work on theses continues, with the OSC driving a collaboration with Student Services to pilot the deposit of digital theses in addition to printed bound ones with a select group of departments from January 2017. The Unlocking Theses project in 2015-2016 has seen an increase in the number of historic theses in the repository from 700 to over 2,200 with half openly available. An upcoming digitisation project will add a further 1,400 theses. The upgrade of the repository and associated policies means all theses (not just PhDs) can be deposited and the OSC is in negotiation with several departments to bulk upload their MPhils and other sets of theses which are currently held in closed collections and are undiscoverable. This is an example of the work we are doing to unearth and disseminate research held all over the institution.

As a result of these activities it has become obvious that the disjointed nature of thesis management across the Library is inefficient. There is considerable effort being placed on developing workflows for managing theses centrally within the Library which the OSC will be overseeing into the future.

3. Research Support

3.1  Research Data Support

The number of data submissions received by the University repository is continuously growing, with Cambridge hosting more datasets in the institutional repository than any other UK university. Our ‘Data Sharing at Cambridge’ infographic summarises our work in this area.

A recent Primary Research Group report recognised Cambridge as having ‘particularly admirable data curation services’.

3.2 Policy development

The OSC is heavily involved in policy development in the scholarly communication space and participates in several activities external to the University. In July 2016 the UK Concordat on Open Research Data was published, with considerable input from the university sector, coordinated by the OSC.

We have representatives on the RCUK Open Access Practitioners Group, the UK Scholarly Communication License and Model Policy Steering Committee and the CASRAI Open Access Glossary Working Group, plus several other committees external to Cambridge. The OSC has contributed to discussions at the Wellcome Trust about ensuring better publisher compliance with their Open Access policy.

We are also updating and writing policies for aspects of research management across the University.

3.3 Collaborations with the research community

The OSC collaborates directly with the research community to ensure that the funding policy landscape reflects their needs and concerns. To that end we have held several town-hall meetings with researchers to discuss issues such as the mandating of CC-BY licensing, peer review and options relating to moving towards an Open Research landscape. We have also provided opportunities for researchers to meet directly with funders to discuss concerns and articulate amendments to the policies. The OSC has led discussions with the sector and arXiv.org, including visiting Cornell University, to ensure that researchers using this service to make their work openly available can be compliant under the HEFCE policy.

A new Research Data Management Project Group brings researchers and administrators together to work on specific issues relating to the retention and preservation of data and the management of sensitive data. We have also recruited over 40 Data Champions from across the University. Data Champions are researchers, PhD students or support staff who have agreed to advocate for data within their department: providing local training, briefing staff members at departmental meetings, and raising awareness of the need for data sharing and management.

The initiative began as an attempt to meet the growing need for RDM training, provide more subject-specific RDM support and begin more conversations about the benefits of RDM beyond meeting funders’ mandates. There has been a lot of interest in our Data Champions from other universities in the UK and abroad, with applications for our scheme coming from around the world. In response to this we have proposed a Birds of a Feather session at the 9th RDA plenary meeting in April to discuss similar initiatives elsewhere and the creation of RDM advocacy communities.

3.4 Professional development for the research community

The OSC provides the research community with a variety of advocacy, training and workshops relating to research data management, sharing research effectively, bibliometrics and other aspects of scholarly communication. The OSC held over 80 sessions for researchers in 2016, including the extremely successful ‘Helping researchers publish’ event which we are repeating in February.

Our work with the Early Career Researcher (ECR) community has resulted in the development of a series of sessions about the publishing process for the PhD community. These have been enthusiastically embraced and there are negotiations with departments about making some courses compulsory. While this underlines the value of these offerings, it does raise issues about staffing and how this will be financed.

The OSC is increasingly managing and hosting conferences at the University. Cambridge is participating in the Jisc Shared Repositories pilot and the OSC hosted an associated Research Data Network conference in September. In July 2016, the OSC organised a conference on research data sharing in collaboration with the Science and Engineering South Consortium, which was extremely well received and attracted over 80 attendees from all over the UK.

In November, the OpenCon Cambridge group – with which the OSC is heavily involved – held an OpenConCam satellite event which was very well attended and received very positive feedback. The Storify of tweets is available, as is this blog post about the event. The OSC was happy both to sponsor the event and to support the travel of a Cambridge researcher to attend the main OpenCon event in Washington and bring back her experiences.

Increasingly we are livestreaming our events and then making them available online as a resource for later.

3.5 Developing Library capacity for support

We have published a related post detailing the training programmes run for library staff in 2016. In total, 500 people attended sessions offered in the Supporting Researchers in the 21st century programme, and we successfully ‘graduated’ the second tranche of the Research Support Ambassador Programme.

Conference session proposals on both the Supporting Researchers and the Research Ambassador programmes have been submitted to various national and international conferences. Dr Danny Kingsley and Claire Sewell have also had an abstract accepted for an article to appear in the 2017 themed issue of The New Review of Academic Librarianship.

4. Updating and integrating systems

The University repository, Apollo, has been upgraded and was relaunched during Open Access Week. The upgrade has incorporated new services, including the ability to mint DOIs, which has been enthusiastically adopted. A new Request a Copy service for users wishing to obtain access to embargoed material is being heavily used without any promotion, with around 300 requests a month flowing through. This has been particularly important given that we deposit works prior to publication, so we have to put them under an indefinite embargo until we know the publication date (at which point we can set the embargo lift date). With over 2,000 items awaiting a publication-date check, a large percentage of the contents of the repository is discoverable but closed under embargo.

In order to reduce the heavy manual workload associated with the deposit and processing of over 4,000 papers annually, the OSC is working with the Research Information Office on a systems integration programme between the University’s CRIS – Symplectic – and Apollo, while retaining our integrated helpdesk system, which uses a programme called ZenDesk. This should allow better compliance reporting for the research community and reduce the manual uploading of articles.

But this process involves a great deal more than just metadata matching and coding, and touches on the extremely siloed nature of the support services being offered to our researchers across the institution. We are trying to work through these issues by instigating and participating in several initiatives with multiple administrative areas of the University. The OSC is taking the lead with a ‘Getting it Together’ project to align the communication sent to researchers through the research lifecycle and across the range of administrative departments, including Communication, Research Operations, Research Strategy and University Information Systems, termed the ‘Joined up Communications’ group. In addition we are heavily involved in the Coordinated and Functional Research Systems Group (CoFRS), the University Research Administration Systems Committee and the Cambridge Big Data Steering Group.

5. Pursuing a research agenda

Many staff members of the OSC originate from the research community and the team has a substantial conference presence. The OSC team attended over 80 events in 2016, both within the UK and at major conferences worldwide, including the Open Scholarship Initiative, FORCE2016, Open Repositories, the International Digital Curation Conference, Electronic Theses & Dissertations, the Special Libraries Association, RLUK2016, IFLA, CILIP and the Scientific Data Conference.

Increasingly the OSC team is being asked to share their knowledge and experience. In 2016 the team gave four keynote speeches, presented 18 sessions and ran one Master Class. The team has also acted as session chair for two conferences and convened two sessions.

5.1 Research projects

The OSC is undertaking several research projects. In relation to the changing nature of scholarly communication services within libraries, we are in the process of analysing job advertisements in the area of scholarly communication. We have also conducted a survey (which received over 500 responses) on the educational and training background of people working in the area. The findings of these studies will be shared and published during 2017.

Dr Lauren Cadwallader was the first recipient of the Altmetrics Research Grant which she used to explore the types and timings of online attention that journal articles received before they were incorporated into a policy document, to see if there was some way to help research administrators make an educated guess rather than a best guess at which papers will have high impact for the next REF exercise in the UK. Her findings were widely shared internationally, and there is interest in taking this work further.

The team is currently actively pursuing several research grant proposals. Other research includes an analysis of the data needs of the research community, undertaken in conjunction with Jisc.

5.2 Engaging with the research literature

Members of the OSC hold several editorial board positions, including two on the Data Science Journal and one on the Journal of Librarianship and Scholarly Communication. We also hold positions on the Advisory Board for PeerJ Preprints, and one staff member is an Associate Editor of the New Review of Academic Librarianship. The OSC team also act as peer reviewers for scholarly communication papers.

The OSC is working towards developing a culture of research and publishing amongst the library community at Cambridge, and is one of the founding members of the Centre for Evidence Based Librarianship and Information Practice (C-EBLIP) Research Network.

6. Staffing

Although the organisational layout remained relatively stable between 2015 and 2016, this stability belies the perilous nature of the funding of the Office of Scholarly Communication. Of the 15 staff members, fewer than half are funded from ‘Chest’ (central University) funding. The remainder are paid from a combination of non-recurrent grants, RCUK funding and endowment funds.

The process of applying for funding, creating reports, meeting with key members of the University administration, working out budgets and, frankly, lobbying just to keep the team employed has taken a huge toll on the team. One result of the financial situation is that many staff – including some in crucial roles – are on short-term contracts, and several positions have turned over during the year. This means that a disproportionate amount of time is spent on recruitment. The systems for recruiting staff in the University are, shall we say, reflective of the age of the institution.

In 2016 alone, as the Head of the OSC, I personally wrote five job descriptions and progressed them through the (convoluted) HR review process. I conducted 32 interviews for OSC staff and participated in 10 interviews for staff elsewhere in the University where I assisted with recruitment. This has involved the assessment of 143 applications. Because each new contract has a probation period, I have undertaken 27 probationary interviews. Given that each of these activities involves one (and usually more) other staff members, the impact of this issue in terms of staff time becomes apparent.

We also conducted some experiments with staffing last year. We had a volunteer working with us on a research project and ran a ‘hotdesk’ arrangement with colleagues from the Research Information Office, the Research Operations Office and Cambridge University Press. We also conducted a successful ‘work from home’ pilot (a first for the University Library).

7. Plans for 2017

This year will herald some significant changes for the University – with a new Librarian starting in April and a new Vice Chancellor in September. This may determine where the OSC goes into the future, but plans are already underway for a big year in 2017.

As always, the OSC is considering both a practical and a political agenda. On the ‘political’ side of the fence we are pursuing an Open Research agenda for the University. We are about to kick off the two-year Open Research Pilot Project, a collaboration between the Office of Scholarly Communication and the Wellcome Trust Open Research team. The Project will look at gaining an understanding of what researchers need in order to share, and get credit for, all outputs of the research process. These include non-positive results, protocols, source code, presentations and other research outputs beyond the remit of traditional publications. The Project aims to understand the barriers preventing researchers from sharing (including resource and time implications), as well as what incentivises the process.

We are also now at a stage where we need to look holistically at the way we access literature across the institution. This will be a big project incorporating many facets of the University community. It will also require substantial analysis of existing library data and the presentation of this information in an understandable graphic manner.

In terms of practical activities, our headline task is to completely integrate our open access workflows into University systems. In addition we are actively investigating how we can support our researchers with text and data mining (TDM). We are beginning to develop and roll out a ‘continuum’ of publishing options for the significant amount of grey literature produced within Cambridge. We are also expanding our range of teaching programmes – videos, online tools, and new types of workshops. On a technical level we are likely to be looking at the potential implementation of options offered by the Shared Repository Pilot, and developing solutions for managed access to data. We are also hoping to explore a data visualisation service for researchers.

Published 17 January 2017
Written by Dr Danny Kingsley
Creative Commons License