Openness, integrity & supporting researchers

Universities need to open up research to ensure academic integrity, adapt to support modern collaboration and scholarship tools, and begin rewarding people who engage in open processes rather than relying solely on traditional assessment schemes. This was the focus of Emeritus Professor Tom Cochrane’s* talk on ‘Open scholarship and links to academic integrity, reward & recognition’ given at Cambridge University on 7 October.

The slides from the presentation are available here: PRE_Cochrane_DisruptingDisincentives_V1_20151007

Benefits of an open access mandate

Tom began with a discussion of access to research and research data, and why it should be as open as possible. Queensland University of Technology (QUT) introduced an open access mandate some 12 years ago, and has since been able to observe a number of effects, such as improved bibliometric citation rates and the way authors show up in Scopus.

Another effect is the way consulting opportunities arise because a researcher’s work is exposed to reading audiences that do not have access to the toll-gated literature. A further benefit is in recruiting higher degree research (HDR) students.

Tom outlined six areas of advantage for institutions with a mandate, including researcher identity and exposure, and advantage to the institution itself. He noted that QUT cannot argue causation, but can argue correlation, between the mandate and the university’s improvement in research performance. Many institutions have been able to gain some advantage from having an institutional repository that reflects the output of the institution.

However, in terms of public policy, the funders have moved the game on anyway. This started with private funders like the Wellcome Trust, but now includes the public research councils. This is the taxpayer-return argument, which is also playing out in the US.

Tom noted that when he began working on open access policy he excluded books, because open access is challenging where there is a financial return to the author, but there has been a long-term problem with publishing in the humanities and social sciences. He said there was an argument that a vicious downward spiral oppresses these disciplines by making quality scholarship susceptible to judgements about a title’s sales appeal in the market, judgements which may be unrelated to scholarly merit. Now a new model called Knowledge Unlatched is attempting to break this cycle and increase the number of quality long-form outputs in the Humanities and Social Sciences.

Nightmare scenarios

Tom then turned to the relationship between academic integrity and research fraud by discussing the disincentives in the system. What are the potential ‘nightmare’ scenarios?

For early career researchers, nightmares include the PhD failing, a job or promotion application being rejected, a grant application failing, industry or consultancy protocols failing, or a paper not being accepted.

However, a worse nightmare is a published or otherwise proclaimed finding being found to be at fault – either through a mistake or because something more deliberate is at play. This is a nightmare for the individual.

It is also very bad news for an institution to end up on the front page, and damage of that kind is very difficult to rectify.

Tom spoke about Jan Hendrik Schön’s deception. Schön was a physicist who qualified in Germany and went to work at Bell Labs in the US, where he claimed ground-breaking results with ‘organic semiconductors’. Reviewers were unable to replicate the results and had no access to the original data, with lab books destroyed and samples damaged beyond recovery. The investigation and eventual withdrawal of the research took 12.5 years, and the effort involved was extraordinary.

Incentives for institutions and researchers

Academics work towards recognition and renown, respect and acclaim. This is based on a system of dissemination and publication, which in turn is based on peer review and co-authorship using understood processes. Financial reward is mostly indirect.

Tom then discussed what structures universities might have in place. Most will have some kind of code of conduct to advise people about research misconduct. There are questions, however, about how well understood and how well implemented this advice actually is.

Universities also often provide teaching about authorship and the attribution of work – there are issues around the extent to which student work gets acknowledged and published. Early career researchers are, or should be, advised against attributing work to people who have not contributed, and given a good understanding of plagiarism and ethical conduct.

How does openness help?

Tom noted that we are familiar with the idea of open data and open access, but another aspect is ‘open process’. Lab notebooks, for example, showing progress in thinking, approaches and experiments, can be made open, though the timing of when this occurs may vary.

The other pressing issue is that the nature of research itself is changing profoundly. This includes an extraordinary dependence on data, and complexity requiring intermediate steps of data visualisation. In Australia this is called eResearch; in the UK it is called eScience. These eResearch techniques have been growing rapidly, and in a way that may not be understood or well led by senior administrators.

Using data

Tom described a couple of talks by early or mid career researchers at different universities. They said that when they started they were given access to the financial system and to IT and Library privileges, but what they really wanted to know was: what data services can I get from the University? This is particularly acute in the Life Sciences. Where is the support for the tools? What is the University doing by way of scaffolding the support services that will make them more effective for me? What sort of help and training will you provide in new ways of disseminating findings and new publishing approaches?

Researchers are notoriously protective of their own time, and consider they should be better supported with these emerging tools. We need more systematic leadership in understanding these tools, with deliberate attention by institutional leadership to overcoming inertia.

The more sustained argument for making things open relates to questions of integrity and trust – where arguments are disputes about evidence. What is true for the academy in terms of more robust approaches to preventing or reducing inaccuracy and fraud is also true for the broader public policy need for evidence-based policy.

Suggestions for improvement

We need concerted action by people at certain levels – Vice-Chancellors, heads of funding councils, senior government bureaucrats. At national and international levels, suggested actions for institutions and research systems include working together to:

  • develop and support open frameworks
  • harmonise supporting IP regimes
  • reframe researcher induction
  • improve data and tools support services
  • reward data science methods and re-use techniques
  • rationalise research quality markers
  • foster impact tracking in diverse tools

Discussion

Friction around University tools

One comment noted that disincentives at Cambridge University manifest as friction around the way researchers use University tools – given that they do not want to waste time.

Tom responded that creating a policy is half the trick. Implementing it in a way that makes sense to someone is the other half. What does a mandate actually mean in a University given they are places where one does not often successfully tell someone else what to do?

However research and support tools are getting more efficient. It is a matter of marshalling the right expertise in the right place. One of the things that is happening is we are getting diverse uptakes of new ideas. This is reliant on the talent of the leadership that might be in place or the team that is in place. It could get held back by a couple of reactionary or unresponsive senior leaders. Conversely the right leadership can make striking progress.

Openness and competition

Another comment asked how openness squares with researchers being worried about others finding out what they are doing in a competitive environment.

Tom noted that depending on the field, there may indeed need to be decision points or “gating” that governs when the information is available. The important point is that it is available for review for the reasons of integrity explored earlier. Exceptions will always apply as in the case of contract research being done for a company by an institution that is essentially “black box”. There would always have to be decisions about openness which would be part of working out the agreement in the first place.

Salami slicing publication

A question arose about the habit of salami slicing research into small publications for the benefit of the Research Excellence Framework, and how this squares with openness.

Tom agreed that research assessment schemes are, in practice, structured in ways that encourage or discourage certain types of scholarly output. The precursor to this practice was the branching of journal titles in the 1970s – the opportunity for advantage at the time lay with research groups and publishers. There has to be a leadership view from institutional management on what practical limits can be placed on that behaviour.

This sparked a question about the complexity of changing the reward system because researchers are judged by the impact factor, regardless of what we say to them about tweets etc. How could the reward system be changed?

Tom said the necessary change is recognising that basing reward only on research outputs is insufficient; other forms of research productivity need reward too. This has to be led. It cannot be a half-baked policy put out by a committee; it needs to be trusted by the research community.

Open access drivers

A question was asked about the extent to which the compliance agenda driven by the funders has run its course, and whether this agenda will now be taken up by the institutions.

Tom said that he has thought about this for a long time. He originally thought OA would be led by the disciplines, following the example of the High Energy Physics community, which built a repository more than 20 years ago. Then there was considerable discussion, e.g. in the UK in the early 2000s, about aligning OA with institutional profile. But institutional take-up was sporadic. In Australia in 2012 we had only six or seven universities with policies (which does not necessarily mean take-up had been completely satisfactory in each of those).

Through that time, the argument for a return on taxpayer investment has become the prevalent government one. Tom does not think governments will move away from that, even though there has been a level of complexity relating to the position that might not have been anticipated, with large publishers keen to be embedded in the process.

This moved to a question of whether this offers an opportunity for the institution beyond the mandate?

Tom replied that he always thought there was an advantage at both institutional and individual levels: you would be better off if you made work open. The main commercial reaction has been for the large publishers to seek to convert the value that exists in the subscription market into the same level of value in input fees, i.e. Article Processing Charges.

It should be understood, finally, that academic publishing and the quality certification of research do have a cost; the question is what that level of cost should really be.

About the speaker

*Emeritus Professor Tom Cochrane was briefly visiting Cambridge from Queensland University of Technology in Australia. During his tenure as Deputy Vice-Chancellor (Technology, Information and Learning Support), Professor Cochrane introduced the world’s first university-wide open access mandate, in January 2004. Amongst his many commitments, Professor Cochrane serves on the Board of Knowledge Unlatched (UK), is a member of the Board of Enabling Open Scholarship (Europe) and was co-leader of the project to port Creative Commons into Australia.

Published 12 October 2015
Written by Dr Danny Kingsley
Creative Commons License

Archiving webpages – securing the digital discourse

We are having discussions around Cambridge about the research activity that occurs through social media. These digital conversations are the ephemera of the 21st century, the equivalent of the Darwin Manuscripts that the University has spent considerable energy preserving and digitising. However, we are not currently archiving or preserving this material.

As a starting point, we are sharing here some of the insights Dr Marta Teperek gained from attending the DPTP workshop on Web Archiving on 12 May 2015, led by Ed Pinsent and Peter Webster.

Digital dissemination

Increasingly, researchers are realising that online resources are important for disseminating their findings – the subject of our recent blog ‘What is ‘research impact’ in an interconnected world?’. It is common to use blogs and Twitter to share discoveries.

Some researchers even have dedicated websites to publish information about their research. In the era of Open Science, webpages are also used to share research data, especially by programmers, who often use them as powerful tools for providing rich metadata description for their software. It is not uncommon to include a link to a webpage in publications as the source of additional information supporting a paper. In these cases, other researchers need to be able to cite the webpage as it was at the time of publication. This ensures the content is stable – be it information, a dataset, or a piece of software.

The question then arises of how to prevent ‘linkrot’ and preserve webpages – ensuring the content of a webpage will still be accessible (and unaltered) in several years’ time.

What does it mean to archive a webpage?

Archiving is preserving an exact copy of a webpage as it is at a given moment in time. The most commonly used format for webpage archives is the .warc file. These files contain all the information about the page: its content, layout, structure, interactivity and so on. They can easily be re-played to re-create the exact content of the archived webpage as it was at the time of recording. .warc files can be shared with colleagues or with the public by various means, for example by depositing a copy in a data repository.
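To give a feel for what a .warc file contains, the sketch below writes and re-parses a single, heavily simplified WARC-style record using only the Python standard library. Real WARC records carry further mandatory headers (WARC-Record-ID, WARC-Date and so on) and are normally produced by dedicated tools, so treat this purely as a format illustration; the URL and content are made up.

```python
# A deliberately simplified, stdlib-only sketch of a WARC-style record.
# Real .warc files have more mandatory headers and are usually gzipped;
# the URL and content below are made up for illustration.

def write_warc_record(url, body):
    """Serialise one simplified 'response' record in WARC/1.0-like syntax."""
    payload = body.encode("utf-8")
    head = (
        "WARC/1.0\r\n"
        "WARC-Type: response\r\n"
        f"WARC-Target-URI: {url}\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "\r\n"
    )
    # A record is terminated by a blank-line pair in the real format too.
    return head.encode("utf-8") + payload + b"\r\n\r\n"

def read_warc_record(record):
    """Parse a record produced above back into (url, body)."""
    head, _, payload = record.partition(b"\r\n\r\n")
    fields = dict(
        line.split(": ", 1)
        for line in head.decode("utf-8").splitlines()
        if ": " in line
    )
    length = int(fields["Content-Length"])
    return fields["WARC-Target-URI"], payload[:length].decode("utf-8")

record = write_warc_record("https://www.example.org/", "<html>snapshot</html>")
url, body = read_warc_record(record)
```

In practice a tool such as `wget --warc-file` or the `warcio` Python library handles the full format, including the compressed .warc.gz variant.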

The right to archive

One of the most interesting topics emerging from almost every talk was who has the right to archive a webpage. The answer would seem simple – the webpage creator. However, webpages often contain information with reference to, or with input from various external resources. Most pages nowadays have feeds from Twitter, allow comments from external users, or have discussion fora. Does the website creator have the rights to archive all these?

In general, anyone can archive the page. Problems start if there are intentions to make the archive available to others – which is typically the driver for archiving the page in the first place. In theory, in order to disseminate the archived page, the archiver should ask all copyright owners of the content of that page for their consent. However, obtaining consent from all copyright owners might be impossible – imagine trying to approach authors of every single tweet on a given webpage.

The recommendation is that people should obtain consent for all elements of the webpage for which it is reasonably possible to get the consent. When making the archive available, there should also be a statement that the best effort was made to obtain consent from all copyright owners. It is good practice to ask any webpage contributors to sign a consent form for archiving and sharing of their contributed content.

Alternative approach to copyright

Some websites have decided to take an alternative approach to dealing with copyright. The Internet Archive simply archives everything, without worrying about copyright; instead, they have a takedown policy if someone asks them to remove a shared archive. As a consequence of this approach, they are currently the biggest website archive in the world, which as of August 2014 used 50 petabytes of storage.

Anyone can archive a website on the Internet Archive: simply create an account, enter the URL of the webpage to be archived and click a button – the archive will then be created and shared.
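For those comfortable with a little scripting, the same capture can reportedly be requested programmatically: fetching https://web.archive.org/save/ followed by the target address asks the Wayback Machine to snapshot that page. The sketch below only builds the request URL; the /save/ endpoint is an assumption based on its public use rather than a guaranteed API, and the target address is a made-up example.

```python
# Sketch only: builds a "Save Page Now"-style request URL for the Wayback
# Machine. The /save/ endpoint is assumed from public usage, not a
# formally guaranteed API; the target URL is a made-up example.
from urllib.parse import quote

SAVE_ENDPOINT = "https://web.archive.org/save/"

def save_page_now_url(target):
    """Build the capture-request URL; ':' and '/' stay literal."""
    return SAVE_ENDPOINT + quote(target, safe=":/")

# Fetching this URL (e.g. with urllib.request.urlopen) would trigger the
# capture; the request itself is omitted to keep the example offline.
request_url = save_page_now_url("https://www.example.org/data")
```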

The workshop inspired us at Cambridge to archive our data website, which is now available on the Internet Archive. Snapshots from each of the archiving events can easily be replayed by simply clicking on them.

Can a non-specialist archive a website?

But what if you would like to archive a website yourself – store and share it on your own terms, perhaps using a data repository? Various options for website preservation were discussed during the workshop.

As a non-specialist, the best option is one which does not require any specialist knowledge or specialist software installation. A startup company called WebRecorder has created a website which allows anyone to easily archive any page. There is no need to create an account: the user simply copies in the URL of the page to be archived and presses ‘record’, which generates a .warc file of the page. The disadvantage is that this needs to be done for every page of the website separately. WebRecorder allows free downloads of .warc files – the files can be downloaded and archived or shared however the user chooses.

If anybody then wants to re-run a website from a .warc file, there are plenty of free software options available to re-play the webpage. Again, an easy solution for non-specialists is WebRecorder, which allows one to upload a .warc file and will then replay the webpage with a single click on the ‘Replay’ button.

A bouquet for the DPTP workshop

This was an excellent and extremely efficient one-day workshop, thanks to its dynamic organisation. The workshop was broken down into six main parts, each consisting of several very short (usually 10 minute) presentations and case studies directly related to the subject (no time to drift away!). After every short talk there was time for questions. Furthermore, there were breaks between the main parts of the workshop to allow focused discussions on the subject. This dynamic organisation ensured that every question was addressed and that all issues were thematically grouped – which in turn helped deliver powerful take-home messages from each section.

Furthermore the speakers (who by the way had expert knowledge on the subject) did not recommend any particular solutions, but instead reviewed types of solutions available, discussing their major advantages and disadvantages. This provided the attendees with enough guidance for making informed decisions about solutions most appropriate to their particular situations.

What also greatly contributed to the success of the workshop was the diverse background of attendees: from librarians and other research data managers, to researchers, museum website curators, and European Union projects’ archivists. All these people had different approaches, and different needs about web archiving. Perhaps this is why the breakout sessions were so valuable and deeply insightful.

Published 3 October 2015
Written by Dr Marta Teperek and Dr Danny Kingsley
Creative Commons License

Joint response on the draft UK Concordat on Open Research Data

During August the Research Councils UK on behalf of the UK Open Research Data Forum released a draft Concordat on Open Research Data for which they have sought feedback.

The Universities of Bristol, Cambridge, Manchester, Nottingham and Oxford prepared a joint response which was sent to the RCUK on 28 September 2015. The response is reproduced below in full.

The initial main focus of the Concordat should be good data management, instead of openness.

The purpose of the Concordat is not entirely clear. Merely issuing it is unlikely to ensure that data is made openly available. If Universities and Research Institutes are expected to publicly state their commitment to the Principles, then they risk the dissatisfaction of their researchers if insufficient funds are available to support the data curation described. As discussed in Comment #5 below, sharing research data in a manner that is useful and understandable requires putting research data management systems in place and having research data experts available from the beginning of the research process. Many researchers are only beginning to implement data management practices. It might be wiser to start with a Concordat on good data management before specifying expectations about open data. It would be preferable to first reach a point where researchers are comfortable with managing their data so that it is at least citeable and discoverable. Once that is more common practice, openness of data can be expected as the default position.

The scope of the Concordat needs to be more carefully defined if it is to apply to all fields of research.

The Introduction states that the Concordat “applies to all fields of research” but it is not clear how the first sentence of the Introduction translates for researchers in the Arts and Humanities, (or in theoretical sciences, e.g. Mathematics). This sentence currently reads:

“Most researchers collect, measure, process and analyse data – in the form of sets of values of qualitative or quantitative variables – and use a wide range of hardware and software to assist them to do so as a core activity in the course of their research.”

The Arts and Humanities are mentioned in Principle #1, but this section also refers to benefits in terms of “progressing science”. We suggest that more input is sought specifically from academics in the Arts and Humanities, so that the wording throughout the Concordat is made more inclusive (or indeed exclusive, if appropriate).

The definition of research data in the Concordat needs to be relevant to all fields of research if the Concordat is to apply to all fields of research.

We suggest that the definition of data at the start of the document needs to be revised if it is to be inclusive of Arts and Humanities research (and theoretical sciences, e.g. Mathematics). The kinds of amendments that might be considered are indicated in italics:

Research Data can be defined as evidence that underpins the answer to the research question, and can be used to validate findings regardless of its form (e.g. print, digital, or physical forms). These might be quantitative information or qualitative statements collected by researchers in the course of their work by experimentation, observation, interview or other methods, or information derived from existing evidence. Data may be raw or primary (e.g. direct from measurement or collection) or derived from primary data for subsequent analysis or interpretation (e.g. cleaned up or as an extract from a larger data set), or derived from existing sources where the copyright may be externally held. The purpose of open research data is not only to provide the information necessary to support or validate a research project’s observations, findings or outputs, but also to enable the societal and economic benefits of data reuse. Data may include, for example, statistics, collections of digital images, software, sound recordings, transcripts of interviews, survey data and fieldwork observations with appropriate annotations, an interpretation, an artwork, archives, found objects, published texts or a manuscript.

The Concordat should include a definition of open research data.

To enable consistent understanding across Concordat stakeholders, we suggest that the definition of research data at the start of the document be followed by a definition of “openness” in relation to the reuse of data and content.

To illustrate, consider referencing The Open Definition, which presents its most succinct formulation as:

“Open data and content can be freely used, modified, and shared by anyone for any purpose”.

The Concordat refers to a process at the end of the research lifecycle, when what actually needs to be addressed is the support processes required before that point to allow it to occur.

Principle #9 states that “Support for the development of appropriate data skills is recognised as a responsibility for all stakeholders”. This refers to the requirement to develop skills and provide specialised researcher training. These skills are almost non-existent and training does not yet exist in any organised form (as noted by Jisc in March this year). There is some research data management training for librarians provided by the Digital Curation Centre (DCC), but little specific training for data scientists. Meeting the level of researcher support and training needed across all disciplines to fulfil the expectations outlined in Principle #9 will require a significant increase in both infrastructure and staffing.

The implementation of, and integration between, research data management systems (including systems external to institutions) is a complex process; it is an area of ongoing development across the UK research sector and will take time for institutions to establish. This is reflected in the final paragraphs of DCC reports on the DCC RDM 2014 Survey and in discussions around gathering researcher requirements for RDM infrastructure at the IDCC15 conference in March this year. It is also illustrated by a draft list of basic RDM infrastructure components developed through a Jisc Research Data Spring pilot.

The Concordat must acknowledge the distance between where the Higher Education research sector currently stands and the expectation laid out. While initial good progress towards data sharing and openness has been made in the UK, it will require further substantial culture change to enact the responsibilities laid out in Principle #1 of the Concordat, and this should be recognised within the document. There will be a significant time lag before staff are in place to support the research data management process through the lifecycle of research, so that the information is in a state that it can be shared at the end of the process.

We suggest that the introduction to the Concordat should include text to reflect this, such as:

“Sharing research data in a manner that is useful and understandable requires putting integrated research data management systems in place and having research data experts available from the beginning of the research process. There is currently a deficit of knowledge and skills in the area of research data management across the research sector in the UK. This Concordat is intended to establish a set of expectations of good practice with the goal of establishing open research data as the desired position over the long term. It is recognised that this Concordat describes processes and principles that will take time to establish within institutions.”

The Concordat should clarify its scope in relation to publicly funded research data and data funded from alternative sources or unfunded.

While the Introduction to the Concordat makes clear reference to publicly-funded research data, Principle #1 states that ‘it is the linking of data from a wide range of public and commercial bodies alongside the data generated by academic researchers’ that is beneficial. In addition, the ‘funders of research’ responsibilities should state whether these responsibilities relate only to public bodies, or wider (Principle #1).

The Concordat should propose sustainable solutions to fund the costs of the long-term preservation and curation of data, and how these costs can be borne by different bodies.

It is welcome that the Concordat states that costs should not fall disproportionately on a single part of the research community. However, the majority of costs currently fall on Higher Education Institutions (HEIs), which is not a sustainable position. There should be some clarification of how these costs could be met from elsewhere, for example by research funders. In addition, the Concordat should acknowledge that there will be a transition period in which there may be little or no funding to support open data, making it very difficult for HEIs to meet their responsibilities in the short to medium term. Furthermore, Principle #1 says that “Funders of Research will support open research data through the provision of appropriate resources as an acknowledged research cost.” It must be noted that several funders are at present reluctant, or refusing, to pay for the long-term preservation and curation of data.

The Concordat should propose solutions for meeting the cost of the long-term preservation and curation of data in cases where the ‘funders of research’ refuse to pay for this, or where research is unfunded. In the second paragraph of Principle #4 it is suggested that “…all parties should work together to identify the appropriate resource provider”. It would be useful to have some clarification of what the Working Group envisaged here – for example, a shared national repository? Perhaps the RCUK (in collaboration with other UK funding bodies) could consider setting up a form of UK Data Service that serves the wider funding-body audience for data of long-term value. This would also support collaboration and enable more re-use through increased data discoverability, since data would not be scattered across separate institutional repositories.

Additionally, there appears to be a contradiction between the statement in Principle 1 that “Funders of Research will support open research data through the provision of appropriate resources as an acknowledged research cost” and the statement in Principle #4: “…the capital costs for infrastructure may be incorporated into planned upgrades” which suggests that Universities or Research Institutes will need to fund infrastructure and services from capital and operational budgets.

The Concordat should clarify how an appropriate proportionality between costs and benefits might be assessed.

Principle #4 states that: “Such costs [of open research data] should be proportionate to real benefits.” This key relationship needs further amplification. How and at what stage can “real benefits” be determined in order to assess the proportionality of potential costs? The Concordat should state more clearly the ‘real and achievable’ benefits of open data with examples. What is the relationship between the costs and the benefits? Has this relationship been explored? The real benefits of sharing research data will only become clear over time. At the moment it is difficult to quantify the benefits without evidence from the open datasets. Moreover, there might be an amount of time after a project is finished before the real benefits are realised. Are public funders going to put in monetary support for such services?

Additionally, the Concordat should specify to what extent research data should be made easily re-usable by others. Currently Principle #3 mentions: “Open research data should also be prepared in such a manner that it is as widely useable as is reasonably possible…”. What is the definition of “reasonably possible”? Preparing data for use by others might be expensive, depending on the complexity of the data, and should be also taken into consideration when assessing the proportionality of potential costs of data sharing. Principle #4 states: “Both IT infrastructure costs and the on-going costs of training for researchers and for specialist staff, such as data curation experts, are expected to be significant over time.” These costs are indeed significant from the outset.

The Concordat (Principle #2) states: “A properly considered and appropriate research data management strategy should be in place before the research begins so that no data is lost or stored inappropriately. Wherever possible, project plans should specify whether, when and how data will be made openly available.” The Concordat should propose a process by which a proposal for data management and sharing in a particular research context is put forward for public funding. This proposal will need to include the cost-benefit analysis for deciding which data to keep and distribute (and how best to keep and distribute it).

In general, the Concordat must balance open data requirements with allowing researchers enough time and space to pursue innovation.

The Concordat should acknowledge the costs relating to undertaking regular reviews of progress towards open data.

Principle #4 refers to the following costs:

  • “necessary costs – for IT infrastructure and services, administrative and specialist support staff, and for researchers’ time – are significant”
  • “the additional and continuing revenue costs to sustain services – and rising volumes of data – for the long term are real and substantial”
  • “Both IT infrastructure costs and the on-going costs of training for researchers and for specialist staff, such as data curation experts, are expected to be significant over time”

However, there is no explicit reference to costs relating to Principle #10 regarding “Regular reviews of progress towards open access to research data should be undertaken”.

We suggest that Principle #4 should include text to reflect this; the kind of amendment that might be considered is shown below, with the proposed addition as the final sentence:

For research organisations such as universities or research institutes, these costs are likely to be a prime consideration in the early stages of the move to making research data open. Both IT infrastructure costs and the on-going costs of training for researchers and for specialist staff, such as data curation experts, are expected to be significant over time. Significant costs will also arise from Principle #10 regarding the undertaking of regular reviews of progress towards open access to research data.

The Concordat should explore the establishment of a central organisation to lead the transformation towards a cohesive UK research data environment.

Principle #3 states: “Data must be curated […] This can be achieved in a number of ways […] However, these methodologies may vary according to subject and disciplinary fields, types of data, and the circumstances of individual projects. Hence the exact choice of methodology should not be mandated”.

Realising the benefits of curation may have significant costs where curation extends over the long term, such as data relating to nuclear science which may need to be usable for at least 60 years. These benefits would be best achieved, and in a cost-effective manner, through the establishment of a central organisation that will lead the creation of a cohesive national collection of research resources and a richer data environment that will:

  • Make better use of the UK’s research outputs
  • Enable UK researchers to easily publish, discover, access and use data
  • Develop discipline-specific guidelines on data and metadata standards
  • Suggest discipline-specific curation and preservation policies
  • Develop protocols and processes for the access to restricted data
  • Enable new and more efficient research

In Australia this capacity is provided by the Australian National Data Service.

The Concordat should address the issues around sharing research data resulting from collaborations, especially international collaborations.

It has to be explicitly recognised that some researchers will be involved in international collaborations, with collaborators who are not publicly funded, or whose funders do not require research data sharing. Procedures (and possible exemptions) for sharing of research data in such circumstances should be discussed in the Concordat.

Additionally, the Concordat should suggest a sector-wide approach when considering the costs and complexities of research involving multiple institutions. Currently, where multiple institutions produce research data for one project, there is a danger that it is deposited in multiple repositories, which is neither pragmatic nor cost-effective.

Non-public funders need to be consulted about sharing of commercially-sponsored data, and the Concordat should acknowledge the possibility of restricting the access to research data resulting from commercial collaborations.

Since the Concordat makes recommendations with regard to making commercially-sponsored data accessible, significant consultation with non-public funders is needed. Otherwise, there is a risk that the expectations placed on industry are unlikely to be met. The current wording could damage industrial appetite to fund academic research if companies are pushed towards openness without major consultation.

We also suggest that in the second paragraph of Principle #5, the sentence: “There is therefore a need to develop protocols on when and how data that may be commercially sensitive should be made openly accessible, taking account of the weight and nature of contributions to the funding of collaborative research projects, and providing an appropriate balance between openness and commercial incentives.” is changed to “There is therefore a need to develop protocols on whether, when and how data that may be commercially sensitive should be made openly accessible, taking account of the weight and nature of contributions to the funding of collaborative research projects, and providing an appropriate balance between openness and commercial incentives.” The Concordat should also recognise that development and execution of these processes is an additional burden on institutional administrative staff which must not be underestimated.

The Concordat should more generally recognise the increasing economic value of data produced by researchers.

Where commercial benefits can be quantified (such as the return on investment of a research project), this should be recognised as a reason to embargo access to data until such things as patents can be successfully applied for. University bodies charged with the commercialisation of research should be entitled to assess the potential value of research before consenting to data openness.

The Concordat should allow the use of embargo periods to allow release of data to be delayed up to a certain time after publication, where this is appropriate and justifiable.

The Concordat expects research data underpinning publications to be made accessible by the publication date (Principles #6 and #8). This does not, however, take into account disciplinary norms, where sometimes access to research data is delayed until a specified time after publication. For example, in crystallography (Protein Data Bank) the community has agreed a maximum 12-month delay between publishing the first paper on a structure and making coordinates public for secondary use. Delays in making data accessible are accepted by funders. For example, the BBSRC allows exemptions for disciplinary norms, and where best practices do not exist BBSRC suggests release within three years of generation of the dataset; the STFC expects research data from which the scientific conclusions of a publication are derived to be made available within six months of the date of the relevant publication. Research data should be discoverable at the time of publication, but it may be justifiable to delay access to the data.

The Concordat should make mention of the difficulties involved with ethical issues of data sharing, including issues around data licensing, and data use by others.

Ethical issues surrounding release and use of research data are briefly mentioned in Principle #5 and Principle #7. We believe the Concordat could benefit from expansion on the ethical issues surrounding release and use of research data, and advice on how these can be addressed in data sharing agreements. This is a large and complex area that would benefit from a national framework of best practice guidelines and methods of monitoring.

Furthermore, the Concordat does not provide any recommendations about research data licensing. This should be discussed together with issues about associated expertise required, costs and time. It is mentioned briefly above in point 4.

The Concordat’s stated expectations regarding the use of non-proprietary formats should be realistic.

Principle #3 states that:

“Open research data should also be prepared in such a manner that it is as widely useable as is reasonably possible, at least for specialists in the same or linked fields, wherever they are in the world. Any requirement to use specialised software or obscure data manipulations should be avoided wherever possible. Data should be stored in non-proprietary formats wherever possible, or the most commonly used proprietary formats if no equivalent non-proprietary format exists.”

The last two sentences of this paragraph could be regarded as unreasonable, depending on the definition of what is ‘possible’. It might theoretically be possible to convert data for storage but not remotely cost-effective. Other formulations (e.g. from EPSRC) talk about the burden of retrieval from stored formats being on the requester not the originator of the data.

We suggest that this section should be rephrased in-line with EPSRC recommendations, for example:

“Wherever possible, researchers are encouraged to store research data in non-proprietary formats. If this is not possible (or not cost-efficient), researchers should indicate what proprietary software is needed to process research data. Those requesting access to data are responsible for re-formatting it to suit their own research needs and for obtaining access to proprietary third party software that may be necessary to process the data.”

The Concordat should encourage proper management of physical samples and non-digital research data.

Physical samples such as fossils, core samples, and zoological and botanical specimens, and non-digital research data such as recordings, paper notes, etc., should also be properly managed. In some areas the management and sharing of these items is well established and understood – for example, palaeontology journals will not allow publication without specimen numbers from a museum – but practice is less rigorous in other areas of research. It would be desirable for the Concordat to encourage the development of discipline-specific guidelines for the management of physical samples and other non-digital research data.

Principle #5 must recognise the culture change required to remove the decision to share data from an individual researcher.

Principle #5 states that:

“Decisions on withholding data should not generally be made by individual researchers but rather through a verifiable and transparent process at an appropriate institutional level.”

Whilst the reasoning behind this Principle is understandable, it must be recognised that we are not yet in a mature culture of data sharing, and a statement removing data sharing decisions from the researcher will require changes in workflows and, more importantly, in the culture and autonomy of researchers.

The idea that open research data should be formally acknowledged as a legitimate output of the research should form a separate principle.

The last paragraph of Principle #6 states that open research data should be acknowledged as a legitimate output of the research and that it “…should be accorded the same importance in the scholarly record as citations of other research objects, such as publications”. We strongly support this idea but recognise that it represents a fundamental shift in working practices and policies. Formal citation of datasets as an embedded practice for researchers, and the development of products and services around the resulting metrics, are probably still several years away. This point is completely separate from the rest of Principle #6 and should form a principle in its own right.

Principle #2 must recognise that it may take significant resource for institutions to provide the infrastructure required for good data management.

While the focus of this Principle on good data management throughout the lifecycle, rather than on open data sharing alone, is welcome, significant human, technical and sociotechnical developments are required to meet this requirement, along with resources – in terms of people, time and infrastructure – needed to reach a mature position. These needs should be recognised in the Concordat.

The Concordat should clarify the reference to “other workers” in Principle #7.

We would value some clarification on paragraph 3 of Principle #7 in relation to the reference to “other workers”: “Research organisations bear the primary responsibility for enforcing ethical guidelines and it is therefore vital that such guidelines are amended as necessary to make clear the obligations that are inherent in the use of data gathered by other workers.”