Watch this space – the first OSI workshop

It was always an ambitious project – trying to gather 250 high-level delegates from all aspects of the scholarly communication process with the goal of better communication and idea sharing between sectors of the ecosystem. The first meeting of the Open Scholarship Initiative (OSI) happened in Fairfax, Virginia last week. Kudos to the National Science Communication Institute for managing the astonishing logistics of an exercise like this – and basically pulling it off.

This was billed as a ‘meeting between global, high-level stakeholders in research’ with a goal to ‘lay the groundwork for creating a global collaborative framework to manage the future of scholarly publishing and everything these practices impact’. The OSI is supported by UNESCO, which has committed to the full 10-year life of the project. As things currently stand, the plan is to repeat the meeting annually for a decade.

Structure of the event

The process began in July last year with emailed invitations from Glenn Hampson, the project director. For those who accepted the invitation, a series of emails from Glenn followed, with tutorials attached, to try to ensure the delegates were prepared and up to speed. The emails gathered momentum, with online discussions between participants – indeed, much was made of the (many) hundreds of emails the event generated.

The overall areas the Open Scholarship Initiative hopes to cover include research funding policies, interdisciplinary collaboration efforts, library budgets, tenure evaluation criteria, global institutional repository efforts, open access plans, peer review practices, postdoc workload, public policy formulation, global research access and participation, information visibility, and others. Before arriving, delegates had chosen their workgroup topic from the following list:

  • Embargoes
  • Evolving open solutions (1)
  • Evolving open solutions (2)
  • Information overload & underload
  • Open impacts
  • Peer review
  • Usage dimensions of open
  • What is publishing? (1)
  • What is publishing? (2)
  • Impact factors
  • Moral dimensions of open
  • Participation in the current system
  • Repositories & preservation
  • What is open?
  • Who decides?

The 190+ delegates from 180+ institutions, 11 countries and 15 stakeholder groups gathered at George Mason University (GMU), and after preliminary introductions and welcomes the work began immediately, with everyone splitting into their workgroups. We spent the first day and a half working through our topics and preparing a short presentation for feedback on the second afternoon. There was then another working session to finalise the presentations before the live-streamed final presentations on the Friday morning. These presentations are all available in Figshare (thanks to Micah Vandegrift).

The event is trying to address some heady and complex questions and it was clear from the first set of presentations that in some instances it had been difficult to come to a consensus, let alone a plan for action. My group had the relative luxury of a topic that is fairly well defined – embargoes. It might be useful for the next event to focus on specific topics and move from the esoteric to the practical.

In addition the meeting had a team of ‘at large’ people who floated between groups to try and identify themes. Unsurprisingly, the ‘Primacy of Promotion and Tenure’ was a recurring theme throughout many of the presentations. It has been clear for some time that until we can achieve some reform of the promotion and tenure process, many of the ideas and innovations in scholarly communication won’t take hold. I would suggest that the different aspects of the reward/incentive system would be a rich vein to mine at OSI2017.

Closed versus open

There was some disquiet beforehand, from people who were not attending, about the workshop effectively being ‘closed’. This was because the Chatham House Rule applied to the workgroups, to allow people to speak freely about their own experiences.

There was also some disquiet among those who were attending about a request that the workgroups remain device-free. This was intended to discourage people from checking emails instead of participating. However, people revert to type – in our group we all used our devices to collaborate on our documents. In the end we didn’t have much of a choice: the incredibly high-tech room we were using in the modern GMU library flummoxed us and we were unable to get the projector to work.

That all said, there is every intention to disseminate the findings of the workshops widely and openly. During the feedback and presentation sessions there was considerable Twitter discussion at #OSI2016 – a downloadable list of all tweets is in Figshare, and there were enough to make the conference trend on Twitter at one point. This networked graphic shows the interrelationships across Twitter (thanks to Micah and his colleague). In addition there will be a report published by George Mason University Press incorporating the summary reports from each of the groups.

Team Embargo

Our workgroup, like all of them, represented a wide mix of interest groups. We were:

  • Ann Riley, President, Association of College and Research Libraries
  • Audrey McCulloch, Chief Executive, Association of Learned and Professional Society Publishers
  • Danny Kingsley, Head of Scholarly Communication, University of Cambridge
  • Eric Massant, Senior Director of Government and Industry Affairs, RELX Group
  • Gail McMillan, Director of Scholarly Communication, Virginia Tech
  • Glenorchy Campbell, Managing Director, British Medical Journal North America
  • Gregg Gordon, President, Social Science Research Network
  • Keith Webster, Dean of Libraries, Carnegie Mellon University
  • Laura Helmuth, incoming President, National Association of Science Writers
  • Tony Peatfield, Director of Corporate Affairs, Medical Research Council, Research Councils UK
  • Will Schweitzer, Director of Product Development, AAAS/Science

It might be worth noting here that our workgroup was naughty and did not agree beforehand on who would facilitate, so no-one had attended the facilitation pre-workshop webinar. This meant our group was gloriously facilitator-free and post-it-note-free – we just got on with it.

Banishing ghosts

We began with some definitions of what embargoes are, noting that press embargoes, publication embargoes and what we called ‘security’ embargoes (such as those on classified documents) all serve different purposes.

Embargoes are not ‘all bad’. Press embargoes allow journalists early access to a publication so they can investigate and write or present informed pieces in the media – this benefits society because it allows for stronger press coverage. Security embargoes protect information that is not meant to be in the public domain. However, embargoes on Author’s Accepted Manuscripts in repositories are more contentious, with qualified acceptance that these are a transitional mechanism in a shift to full open access.

A causal link between green open access and subscription loss has not been demonstrated. The September 2013 UK Business, Innovation and Skills Committee Fifth Report: Open Access stated: “There is no available evidence base to indicate that short or even zero embargoes cause cancellation of subscriptions”. In 2012 the Committee for Economic Development Digital Connections Council, in The Future of Taxpayer-Funded Research: Who Will Control Access to the Results?, concluded that “No persuasive evidence exists that greater public access as provided by the NIH policy has substantially harmed subscription-supported STM publishers over the last four years or threatens the sustainability of their journals”.

However, there is no argument that traffic on the websites of journals that rely on advertising dollars (such as medical journals) suffers when attention is pulled elsewhere. This potentially affects advertising revenue, which in turn can impact the financial model of those publications.

During our discussions about the differences between press embargoes and publication embargoes I mentioned some recent experiences in Cambridge. The HEFCE Open Access Policy requires us to collect Author’s Accepted Manuscripts at the time of acceptance and make the metadata about them available, ideally before publication. We respect publishers’ embargoes and keep the document itself locked down until these have passed after publication. However, we have been managing calls from sometimes distressed members of our research community who are worried that making the metadata available prior to publication will result in the paper being ‘pulled’ by the journal. Whether this has ever actually happened I do not know – indeed I would be happy to hear from anyone who has a concrete example, so we can start managing reality instead of rumour. The problem in these instances is that the researchers are confusing the press embargo with the publication embargo.

And that is what this whole embargo discussion comes down to. Much of the discourse and argument about embargoes is not evidence based. There is precious little evidence to support the tenet that sits behind embargoes – that if publishers allow researchers to make copies of their work available open access, they will lose subscriptions. However, the lack of evidence does not rule out the possibility that it is true – and that is why we need to settle the situation once and for all. If there is a sustainability issue for journals because of wider green open access then we need to put some longer-term management in place and work towards full open access.

It is possible the problem is not repositories, whether institutional or subject-based. Many authors are making the final version of their published work available on ResearchGate or Academia.edu in contravention of their Copyright Transfer Agreement. It might be that this availability is having an impact on researchers’ usage of work on the publishers’ sites. Given the huge efforts repository managers make to comply with complicated embargoes, it is quite possible that repositories are not the problem. Indeed, only a small proportion of work is made available through repositories according to the August 2015 Monitoring the Transition to Open Access report (see ‘Figure 9. Location of online postings (including illicit postings)’ on page 38). If this is the case, requiring institutions to embargo the Author’s Accepted Manuscripts they hold in their repositories for long periods will not make any difference: long repository embargoes are not the solution.

Our conclusion from our preliminary discussions was that there needs to be some concrete, rigorous research into the rationale behind embargoes to inform publishers, researchers and funders.

Our proposal – research questions

In response, the Embargo workgroup decided that the most effective solution was to collaborate on an agreed research process that will have the buy-in of all stakeholders. The overarching question we want to answer is ‘What are the impacts of embargoes on scholarly communication?’, with the goal of creating an evidence base for informed discussion on embargoes.

In order to answer that question we have broken the big issue into a series of smaller questions:

  • How are embargoes determined?
  • How do researchers/students find research articles?
  • Who needs access?
  • What is the impact of embargoes on researchers/students?
  • What effect do embargoes have on other stakeholders?

We decided that if the research found there was a case for publication embargoes then agreement on the metrics that should be used to determine the length of an embargo would be helpful. We are hoping that this research will allow standards to be introduced in the area of embargoes.

Discoverability and searching behaviour are extremely relevant in this space. Our hypothesis is that if people use publishers’ journal pages to find material, then the fact that some of the same content is dispersed among many repositories weakens the publisher argument that repository copies threaten their finances. However, if people primarily use centralised search engines such as Google Scholar (which favours open versions of articles over paywalled ones), that strengthens the publisher argument that embargoes are needed to protect revenue.

The other question is whether access really is an issue for researchers. The March 2015 STM Report reviewed the research in this area: well over 90% of researchers surveyed in separate studies said research papers were easy or fairly easy to access, which on the face of it suggests there is little problem with access (see the ‘Researchers’ access to journals’ section starting on p83). Rather than repeating these surveys, indicators of how much embargoes restrict access to researchers could include:

  • The usage of Request a Copy buttons in repositories
  • The number of ‘turn-aways’ from publishers’ platforms
  • The take-up level of Pay Per View options on publisher sites
  • The level of usage of ‘Get it Now’ – where the library obtains a copy through interlibrary loan or document delivery and absorbs the cost.

Our proposal – research structure

The project will begin with a Literature Review and an investigation into the feasibility of running some Case Studies.

Two clear Case Studies could provide direct evidence if the publishers were willing to share what they have learned. In both cases there has been a move from an embargo period for green OA to removing embargoes completely. In the first instance, Taylor & Francis began a trial in 2011 allowing immediate green OA for their library and information science journals, meaning that authors published in 35 library and information science journals have the right to deposit their Accepted Manuscript into their institutional repository and make it immediately available. Authors who choose to publish in these journals are no longer asked to assign copyright; they now sign a licence to publish, which allows Taylor & Francis to publish the Version of Record. Additionally, authors can choose to make their work green open access with no embargoes applied. In 2014 the pilot was extended for ‘at least a further year’.

As part of the pilot, Taylor & Francis say a survey was conducted by Routledge to canvass opinions on the Library & Information Science Author Rights initiative, and to investigate author and researcher behaviour and views on author rights policies, embargoes and posting work to repositories. The survey elicited over 500 responses, including: “Having the option to upload their work to a repository directly after publication is very important to these authors: more than 2/3 of respondents rated the ability to upload their work to repositories at 8, 9, or 10 out of 10, with the vast majority saying they feel strongly that authors should have this right”. I have not been able to uncover any links to this survey. It would be useful to include it in the Literature Review and possibly build on it for other stakeholders.

The second Case Study is Sage, which decided in 2013 to move to an immediate green policy. Both examples should have enough data by now to indicate whether these decisions have resulted in subscription cancellations. I have proposed this type of study before, to no avail. Hopefully we might now have more traction.

The Literature Review and Case Studies will then inform the development of a Survey of different stakeholders – which may have to be slightly altered depending on the audience being surveyed. This is an ambitious goal: the intention is to have at least preliminary findings available for discussion at the next OSI in 2017.

There was some lively Twitter discussion in the room about our proposal to do the study. Some said the issue is resolved; I would argue that anyone currently negotiating the embargo landscape (such as repository managers) would strongly disagree. Others referred to research already done in this space, for example the Publishing and the Ecology of European Research (PEER) project. That study does discuss embargoes, but approached the question from the position that embargoes are valid. The study we are proposing asks specifically whether there is any evidence base for embargoes.

Next steps

We will be preparing a project brief and our report for the OSI publication over the next couple of weeks.

The biggest issue for the project will be gathering funding. We have done a preliminary assessment of the time required to do the work so we could set a ballpark figure for the fundraising goal. Note that our estimate of the number of workdays required for the project was deemed ‘ludicrously low’ by a consultant in discussion later.

It was noted by a funder in casual discussions that because publishers have a vested interest in embargoes, they should fund research that investigates their validity. Indeed, Elsevier have already offered to assist financially, for which we are grateful, but for this work to be considered robust and widely accepted it will need to be funded from a variety of sources. To that end we intend to ‘crowd fund’ the research in batches of $5000. The number of batches will depend on the degree of our underestimation of the time required to undertake the work (!).

In terms of governance, Team Embargo (perhaps we might need a better name…) will be working together as the steering committee to develop the brief, organise funding and choose the research team to do the work. We will need to engage an independent researcher or research group to ensure impartiality.

Wrap-up summary of the workshop

There were a few issues relating to the organisation of the workshop. Much was made of the many hundreds of emails sent beforehand, both from the organising group and amongst the delegates. This level of preliminary discussion was beneficial, but a tool other than email might work better. It was noted that the volume of email was potentially the reason why some of the invited delegates did not attend.

There was a logistical issue in having 190+ delegates staying in a hotel situated in the middle of a set of highways, a 30-minute bus ride away from the conference location at George Mason University (itself in an isolated location). The solution was a series of buses to ferry us each way each day, and to and from the airport. We ate breakfast, lunch and dinner together at the workshop location. This, combined with the lack of alcohol because we were on an undergraduate American campus (where the legal drinking age is 21), gave the experience something of a school camp feel. Coming from another planned capital city (Canberra, Australia), I am sure that Washington is a beautiful and interesting place. This was not the visit to find that out.

These minor gripes aside, as is often the case, the opportunity to meet people face to face was fantastic. Because there was a heavy American flavour to the attendees, I have now met in person many of the people I ‘know’ well through virtual exchanges. It was also a very good process to work directly with a group of experienced and knowledgeable people who all contributed to a tangible outcome.

OSI is an ambitious project, with plans for annual meetings over the next decade. It will be interesting to see if we really can achieve change.

Published 24 April 2016
Written by Dr Danny Kingsley

Forget compliance. Consider the bigger RDM picture

The Office of Scholarly Communication sent Dr Marta Teperek, our Research Data Facility Manager, to the International Digital Curation Conference (IDCC) held in Amsterdam on 22-25 February 2016. This is her report from the event.

Fantastic! This was my first IDCC meeting and already I can’t wait for next year. There was not only amazing content in high-quality workshops and conference papers, but also a great opportunity to network with data professionals from across the globe. And it was so refreshing to set aside our UK preoccupation with compliance with data sharing policies and instead focus on the bigger picture: why it is so important to manage and share research data, and how to do it best.

Three useful workshops

The first day started really intensely – the programme allowed for one full-day or two half-day workshops, but I managed to squeeze three workshops into one day.

Context is key when it comes to data sharing

The morning workshop, “A Context-driven Approach to Data Curation for Reuse”, was led by Ixchel Faniel (OCLC), Elizabeth Yakel (University of Michigan), Kathleen Fear (University of Rochester) and Eric Kansa (Open Context). We were split into small groups and asked to decide which information about datasets was most important from the re-user’s point of view. Would the re-user care about the objects themselves? Would s/he want hints about how to use the data?

We all had difficulties arranging the necessary information in order of usefulness. Subsequently, we were asked to re-order the information according to its importance from the point of view of repository managers. The take-home message was that for all of the groups, the information about datasets required by the re-user was not the same as that required by the repository.

In addition, the presenters provided discipline-specific context based on interviews with researchers – depending on the research discipline, different information about datasets was considered the most important. For example, for zoologists information about specimens was very important, but it was of negligible importance to social scientists. So context is crucial for the collection of appropriate metadata, and insufficient contextual information renders data unusable.

So what can institutional repositories do to address these issues? If research carried out within a given institution covers only certain disciplines, then institutional repositories could relatively easily contextualise the metadata being collected and presented for discovery. However, repositories hosting research from many different disciplines will find this much more difficult. For example, the Cambridge repository hosts research spanning particle physics, engineering, economics, archaeology, zoology, clinical medicine and many, many others, which makes it much more difficult (if not impossible) to contextualise the metadata.

It is not surprising that the information most important from the repository’s point of view differs from that required by data re-users. In order to ensure that research data can be effectively shared and preserved in the long term, repositories need to collect a certain amount of administrative metadata: who deposited the data, what the file formats are, what the data access conditions are, and so on. However, repositories should collect as much of this administrative metadata as possible in an automated way. For example, if the user logs in to deposit data, the relevant information about the user should be automatically harvested via feeds from human resources systems.
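
As a minimal sketch of what that automation might look like (the HR endpoint and field names here are hypothetical, not any particular repository’s API):

```python
import requests  # any HTTP client would do

# Hypothetical HR feed endpoint and field names, for illustration only
HR_API = "https://hr.example.ac.uk/api/staff/"

def administrative_metadata(username, uploaded_files):
    """Assemble administrative metadata for a deposit, harvesting
    depositor details from the HR feed instead of asking the user."""
    person = requests.get(HR_API + username, timeout=10).json()
    return {
        "depositor_name": person["display_name"],
        "department": person["department"],
        "orcid": person.get("orcid"),
        # Record file formats automatically from the uploaded file names
        "file_formats": sorted({f.rsplit(".", 1)[-1].lower()
                                for f in uploaded_files if "." in f}),
    }
```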

EUDAT – Pan-European infrastructure for research data

The next workshop was about EUDAT – a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers. EUDAT is an impressive project funded by a Horizon 2020 grant, and it offers five different services to researchers:

  • B2DROP – a secure and trusted data exchange service to keep research data synchronized, up-to-date and easy to exchange with other researchers;
  • B2SHARE – service for storing and sharing small-scale research data from diverse contexts;
  • B2SAFE – service to safely store research data by replicating it and depositing at multiple trusted repositories (additional data backups);
  • B2STAGE – service to transfer datasets between EUDAT storage resources and high-performance computing (HPC) workspaces;
  • B2FIND – discovery service harvesting metadata from research data collections from EUDAT data centres and other repositories.

The project has a wide range of services on offer and is currently looking for institutions to pilot these services with. I personally think these services (if successfully implemented) would be of great value to the Pan-European research community.

However, I have two reservations about the project:

  • Researchers are being encouraged to use EUDAT’s platforms to collaborate on their research projects and to share their research data. However, the funding for the project runs out in 2018. The EUDAT team is now investigating options to ensure the sustainability and future funding of the project, but what will happen to researchers’ data if the funding is not secured?
  • Perhaps, if the funding is limited, it would be more useful to focus the offering on services that are not provided elsewhere. For example, another EC-funded project, Zenodo, already offers a user-friendly repository for research data, and the Open Science Framework offers a platform for collaboration and easy exchange of research data. By contrast, a Pan-European service harvesting metadata from various data repositories and enabling data discovery is clearly much needed and would be extremely useful to have (a sketch of how such harvesting works follows below).
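
Aggregators like B2FIND typically collect records over standard protocols such as OAI-PMH, which most repositories already expose. A minimal harvesting sketch using the sickle Python library (Zenodo’s public OAI-PMH endpoint shown; any compliant repository would work):

```python
from sickle import Sickle  # pip install sickle

# Zenodo's public OAI-PMH endpoint; any OAI-PMH-compliant repository works
harvester = Sickle("https://zenodo.org/oai2d")

# Stream Dublin Core records; a real aggregator would index these for discovery
for i, record in enumerate(harvester.ListRecords(metadataPrefix="oai_dc")):
    titles = record.metadata.get("title", [])
    print(record.header.identifier, titles[:1])
    if i >= 9:  # stop after ten records for this demonstration
        break
```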

Jisc Shared RDM Services for UK institutions

I then attended the second half of the Jisc workshop on shared Research Data Management services for UK institutions. The University of York and the University of Cambridge are two of the 13 institutions participating in the pilot. Jenny Mitcham from York and I gave presentations on our institutional perspectives on the pilot project: where we are at the moment and what our key expectations are. Jenny gave an overview of the impressive work by her and her colleagues on addressing data preservation gaps at the University of York. Data preservation is one of the areas in which Cambridge hopes to get help from the Jisc RDM shared services project. Additionally, as we have described before, Cambridge would greatly benefit from solutions for big data and for personal/sensitive data. My presentation from the session is available here.

Presentations were followed by breakout group discussions, in which participants were asked to identify priority areas for the Jisc RDM pilot. The top priority identified by all the groups seemed to be solutions for personal/sensitive data and for effective data access management. This was very interesting to me, as at similar workshops held by Jisc in the UK breakout groups prioritised interoperability with their existing institutional systems and cost-effectiveness. This could be one of the unforeseen effects of the strict funders’ research data policies in the UK, which required institutions to provide local repositories to share research data.

As a result of these policies, many institutions were tasked with creating institutional data repositories from scratch in a very short time. Most UK universities now have institutional repositories which allow research data to be uploaded and shared, but very few have their repositories well integrated with other institutional systems. The absence of policy pressure outside the UK perhaps allowed institutions to think more strategically about developing their RDM services and to ensure that these services are well embedded within the existing institutional infrastructure.

Conference papers and posters

The two following days were full of excellent talks. My main problem was deciding which sessions to attend: from talking with other attendees I know that the papers presented in parallel sessions were also extremely useful. If the budget allows, it would be useful for more participants from each institution to attend the meeting and cover more of the parallel sessions.

Below are my main reflections from keynote talks.

Barend Mons – Open Science as a Social Machine

This was a truly inspirational talk, prompting a lot of thought-provoking discussion. Barend started with the reflection that more and more brilliant brains, with more and more powerful computers and billions of smartphones, have created a single, interconnected social super-machine. This machine generates data – vast amounts of data – which is difficult to comprehend and work with unless proper tools are used.

Barend mentioned that with the current speed at which new knowledge is generated and papers are published, it is simply impossible for human brains to assimilate the constantly expanding amount of new knowledge. Brilliant brains need powerful computers to process the growing amount of information. But in order for science to be accessible to computers, we need to move away from PDFs: our research needs to be machine-readable. And perhaps if publishers do not want to support machine-readability, we need to move away from the current publishing model.

Barend also stressed that if data is to be useful and correctly interpretable, it needs to be accessible not only to machines but also to humans, and that effort is needed to describe data well. Research data without proper metadata description is useless (if not harmful). And how to make research data meaningful? Barend proposed a very compelling solution: no research grant should be awarded without 5% of the money dedicated to data stewardship.

I could not agree more with everything that Barend said. I hope that research funders will also support Barend’s statement.

Andrew Sallans – nudging people to improve their RDM practice

Andrew started his talk with the reflection that in order to improve our researchers’ RDM practice we need to do better than talking about compliance and about making data open. How is a researcher supposed to make data accessible if the data was not properly managed in the first place? The Open Science Framework has been created with three mission statements:

  • Technology to enable change;
  • Training to enact change;
  • Incentives to embrace change.

So what is the Open Science Framework (OSF)? It is an open source platform to support researchers during the entire research lifecycle: from the start of the project, through data creation, editing and sharing with collaborators, to data publication. What I find most compelling about the OSF is that it allows one to easily connect various storage platforms in one place: researchers can plug in their resources stored on Dropbox, Google Drive, GitHub and many others.
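
As a rough illustration of how those connections can be scripted, here is a minimal sketch using the osfclient Python package against a public project (the project id below is a placeholder, not a real project):

```python
from osfclient import OSF  # pip install osfclient

osf = OSF()  # anonymous access is enough for public projects
project = osf.project("abc12")  # placeholder id; substitute a real public project

# Each connected add-on (osfstorage, Dropbox, GitHub, ...) appears
# as a storage provider on the project
for storage in project.storages:
    print(storage.name)
    for f in storage.files:
        print("  ", f.path)
```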

To incentivise behavioural change among researchers, the OSF team came up with two other initiatives:

  • Badges to acknowledge open practices in published research;
  • The Preregistration Challenge, offering cash rewards to researchers who preregister their studies.

Personally, I couldn’t agree more with Andrew that enabling good data management practice should be the starting point. We can’t expect researchers to share their research data if we have not provided them with tools and support for good data management. However, I am not so sure about the idea of cash rewards.

After all, researchers become researchers because they want to share the outcomes of their research with the community. This is the principle behind academic research – the only way of moving ideas forward is to exchange findings with colleagues. Do researchers need to be paid extra to do the right thing? I personally do not think so, and I believe that whoever decides to pursue an academic career is prepared to share. It is our task to make data management and sharing as easy as possible, and the OSF will certainly be a great aid to the community.

Susan Halford – the challenge of big data and social research

The last keynote was from Susan Halford. Susan’s talk was again very inspirational and thought-provoking. She talked about the growing excitement around big data and how trendy it has become – almost to the point of being perceived as a solution to every problem. However, Susan also pointed out the problems with big data: simply increasing computational power without fully comprehending the questions and the methodology used can lead to serious misinterpretation of results. Susan concluded that when doing big data research one has to be extremely careful about choosing a proper methodology for data analysis, reflecting on both the type of data being collected and (inter)disciplinary norms.

Again – I could not agree more. Asking the right question and choosing the right methodology are key to drawing the right conclusions. But are these problems new to big data research? I think we are all quite familiar with these challenges: questions about the right experimental design and methodology have been with us for as long as the scientific method has been used.

Researchers have always needed to design studies carefully before commencing experiments: what will the methodology be, what are the necessary controls, what should the sample size be, what needs to happen for the study to be conclusive? To me this is not a problem of big data; it is a problem that needs to be addressed by every researcher from the very start of the project, regardless of the amount of data the project generates or analyses.

Birds of a Feather discussions

I had not experienced Birds of a Feather (BoF) discussions at a conference before and I am absolutely amazed by the idea. Before the conference started, attendees were invited to propose ideas for discussion, keeping in mind that BoF sessions might have the following scope:

  • Bringing together a niche community of interest;
  • Exploring an idea for a project, a standard, a piece of software, a book, an event or anything similar.

I proposed a session about the sharing of personal/sensitive data. Luckily, the topic was selected for discussion and I co-chaired it together with Fiona Nielsen from Repositive. We both thought the discussion was great; our blog post from the session is available here.

And again, I was very sorry to be the only attendee from Cambridge at the conference. There were four parallel discussions and since I was chairing one of them, I was unable to take part in the others. I would have liked to be able to participate in discussions on ‘Data visualisation’ and ‘Metadata Schemas’ as well.

Workshops: Appraisal, Quality Assurance and Risk Assessment

The last day was again devoted to workshops. I attended an excellent workshop from the PERICLES project on appraisal, quality assurance and risk assessment in research data management. The workshop addressed how an institutional repository should audit data when accepting deposits, and how to measure the risk of datasets becoming obsolete.

These are extremely difficult questions and, due to their complexity, very hard to address. Still, the project leaders recognised the importance of tackling them systematically and, ideally, in a (semi-)automated way, using specialised software to help repository managers make the right preservation decisions.

In a way I felt sorry for the presenters – their project’s progress and ambitions were so advanced that probably none of us attendees were able to contribute critically to it. We were all deeply impressed by the level of the questions asked, but our own experience with data preservation and policy automation was nowhere near the level demonstrated by the workshop leaders.

My take-home message from the workshop is that a proper audit of ingested data is of crucial importance. Even if automated risk assessment is not possible, repository managers should at least collect information about the files being deposited, so that the likelihood of their obsolescence can be assessed in the future – or at least so that key file formats and software types can be identified as preservation targets, ensuring that the key datasets do not become obsolete. For me the workshop was a real highlight of the conference.
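
As a minimal sketch of that kind of ingest-time audit (the watch-list of at-risk formats below is invented for illustration; a real service would consult a format registry such as PRONOM):

```python
import mimetypes
from collections import Counter
from pathlib import Path

# Invented watch-list for illustration; a real repository would draw on
# a registry such as PRONOM rather than hard-coding extensions
AT_RISK = {".xls", ".doc", ".sav", ".mdb"}

def audit_deposit(deposit_dir):
    """Record the file formats in a deposit at ingest time so that
    obsolescence risk can be assessed later."""
    counts = Counter()
    for path in Path(deposit_dir).rglob("*"):
        if not path.is_file():
            continue
        mime, _ = mimetypes.guess_type(path.name)
        counts[(path.suffix.lower(), mime or "unknown")] += 1
        if path.suffix.lower() in AT_RISK:
            print(f"WARNING: {path} is in a format flagged as at risk")
    return counts
```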

Networking and the positive energy

Lots of useful workshops, plenty of thought-provoking talks. But for me one of the most important parts of the conference was meeting great colleagues and having fascinating discussions about data management practices. I never thought I could spend an evening (night?) with people willing to talk about research data without the slightest sign of boredom. And the most joyful and refreshing part of the conference was that, because we came from across the globe, our discussions moved away from the compliance aspect of data policies. Free from policy, we were able to address the question of how best to support research data management: how to best help researchers, what our priority needs are, and what data managers should do first with limited resources.

I am looking forward to catching up next year with all the colleagues I met in Amsterdam, to seeing what progress we have all made with our projects, and to discussing what our collective next moves should be.

To summarise, I came back with lots of new ideas, full of energy and positivity – ready to advocate for the bigger picture and the greater good. I also came back exhausted, but I cannot imagine spending four days more productively and fruitfully than at IDCC.

Thanks so much to the organisers and to all the participants!

Published 8 March 2016
Written by Dr Marta Teperek
