
Engaging Researchers with Good Data Management: Perspectives from Engaged Individuals

We need to recognise good practice, engage researchers with research data management early in their careers, and use peers to talk to those who are not ‘onboard’. These were the messages from five attendees at the Engaging Researchers in Good Data Management conference held on 15 November.

The Data Champions and Research Support Ambassadors programmes are designed to increase confidence in supporting researchers with, respectively, research data management and scholarly communication as a whole. Thanks to the generous support of the Arcadia Foundation, five places were made available to attend this event. In this blog post the three Data Champions and two Research Support Ambassadors who were awarded the places give us the low-down on what they got out of the conference and how they might put what they heard into practice.

Recordings of the talks from the event can be found on the Cambridge University Library YouTube channel.

Financial recognition is the key

Dr Laurent Gatto, Senior Research Associate, Department of Biochemistry, University of Cambridge and Data Champion

As a researcher who cherishes good and reproducible data analysis, I naturally view good data management as essential. I have been involved in research data management activities for a long time, acting as a local data champion and participating in open research and open data events. I was interested in this conference because it gathered data champions, stewards and the like from various British and European institutions (Cambridge, Lancaster, Delft), and I was curious to see which approaches were being implemented and which issues were being addressed across institutions. Another aspect of data championship/stewardship I am interested in is the recognition these efforts offer (this post touches on this a bit).

Focusing on the presentations from Lancaster, Cambridge and Delft, it is clear that direct engagement from active researchers is essential to promote healthy data management. There needs to be an enthusiastic researcher, or somebody with some experience of research, to engage with the research community about open data, reproducibility, transparency and security; a blunt top-down approach leads to limited engagement. This is all the more important given the plurality of what researchers across disciplines consider to be data. Informal settings, ideally driven by researchers or run in collaboration with librarians, and focused on conversations, use cases and interviews (to quote some of the successful activities cited during the conference), have been the most successful, and have sometimes also led to new collaborations.

Despite the apparent success of these various data championing efforts and the support that the data champions receive from their local libraries, these activities remain voluntary and come with little academic reward. Being a data champion is certainly an enriching activity for young researchers who value data, but it comes with relatively little credit and without any formal reward or recognition, suggesting that there is probably room for a professional approach to data stewardship.

With this in mind, I was very interested to hear the approach that is currently in place at TU Delft, where data stewards hold a joint position at the Centre for Research Data and at their respective faculty. This defines research data stewardship as an established and official activity, allows the stewards to pursue a research activity, and, explicitly, links research data to research and researchers.

I wonder whether this could be implemented more broadly to provide financial recognition to data stewards/champions, offer incentives (in particular for early-career researchers) to approach research data management professionally and seriously, make data management a more explicit activity that is part of research itself, and move towards a professionalisation of data management posts.

Inspiration and ideas

Angela Talbot, Research Governance Officer, MRC Biostatistics Unit and Data Champion

Tasked with improving and updating best practice in the MRC Biostatistics Unit, I went along to this workshop not really knowing what to expect but hopeful and eager to learn.

Good data management can meet with resistance: while it is viewed as an altruistic and noble thing to do, many researchers worry that making their research open and reproducible exposes them to criticism and to the theft of ideas and future plans. What I wanted to know was how to overcome this.

And boy did this workshop live up to my expectations! From the insightful opening comments to the thought-provoking closing remarks I was hooked. The whole audience was engaged in a common purpose: to share successes and strategies for overcoming the barriers, so that good data management becomes best practice.

Three successful schemes were talked through: the Data Conversations at Lancaster, the Data Champion scheme at the University of Cambridge and the data stewards at TU Delft. All of these schemes had one thing in common: they combine a cross-department/faculty approach with local expertise.

Further excellent examples were provided by the lightning talks and for me, it was certainly helpful to hear of successes in engaging researchers on a departmental level.

The highlight for me was the focus groups. I was in Laurent Gatto’s group, discussing how to encourage good data management by highlighting what is in it for researchers who participate, but I really wish I could have been in them all, as the feedback indicated that each had produced useful insights and tips.

All in all I came away from the day buzzing with ideas. I spent the next morning jotting down ideas for events and schemes that could work within my own department, eager to share what I had learnt. Who knows, maybe next time I’ll be up there sharing my successes!

We need to speak to the non-converted

Dr Stephen Eglen, Reader in Computational Neuroscience, Department of Applied Mathematics & Theoretical Physics, University of Cambridge and Data Champion

The one-day meeting on Engaging Researchers in Good Data Management served as a good chance to remind us all of the benefits of managing and sharing data, but also of the responsibilities that come with it. On the positive side, I was impressed to see the diversity of approaches led by groups around the UK and beyond. It is heartening to see that many universities now have teams to help manage and share data.

However, and more critically, I am concerned that meetings like this tend to focus on showcasing good examples to an audience that is already mostly convinced of the benefits of sharing. Although it is important to build the community and make new contacts with like-minded souls, I think we need to spend as much time engaging with the wider academic community. In particular, it is only when our efforts are aligned with those of funding agencies and scholarly publishers that we can start to build a system that gives due credit to those who do a good job of managing, and then sharing, their data. I look forward to future meetings with broader engagement from data managers, researchers, funders and publishers.

I am grateful to the organisers for giving me the opportunity to speak about our code review pilot in Neuroscience. I particularly enjoyed the questions. Perhaps the most intriguing question came in the break, when Dr Petra ten Hoopen asked me what happens if, during code review, a mistake is found that invalidates the findings of the paper. I answered that (a) the code review is supposed to verify that the code can regenerate a particular finding; (b) this is an interesting question and it would probably depend on the severity of the problem unearthed; and (c) we will cross that bridge when we come to it. Dr ten Hoopen noted that this was similar to finding errors in data being published alongside papers. These are indeed difficult questions, but I hope that in these relatively early days of data and code sharing we err on the side of rewarding researchers who share.

Teach RDM early and often

Kirsten Elliott, Library Assistant, Sidney Sussex College, University of Cambridge and Research Support Ambassador

Prior to this conference, my experience with Research Data Management (RDM) was limited to some training through the Office of Scholarly Communication and the Research Support Ambassadors programme. This, however, really sparked my interest, so I leapt at the opportunity to learn more about RDM by attending this event. Although at times I felt slightly out of my depth, it was fascinating to be surrounded by such experts on the topic.

The introductory remarks from Nicole Janz were a fascinating overview of the reproducibility crisis and how it relates to RDM, including strategies for what could be done, for example setting the reproduction of published studies as assignments when teaching statistics. This clarified for me the relationship between RDM, open data and transparency in research.

There were many examples throughout the day of best practice in promoting good RDM, from the “Data Conversations” held at Lancaster University, to international efforts from SPARC Europe, and even some from Cambridge itself! Common ground across all of them included the necessity of using engaged researchers to spread the message to their peers, the importance of understanding discipline-specific issues with data, and an expansive conception of what counts as “data”.

I am based in a college library and predominantly support undergraduate students, particularly first years. In a way this makes presenting RDM practices quite a challenge, as many of the issues are most obviously relevant to those undertaking research. However, I think there is a strong argument for teaching RDM from very early in the academic career to ingrain good habits, and I will be thinking about how to incorporate RDM into our information literacy training and how to signpost students to existing RDM projects in Cambridge.

Use peers to spread the RDM message

Laura Jeffrey, Information Skills Librarian, Wolfson College, University of Cambridge and Research Support Ambassador

This inspirational conference was organised and presented by people who are passionate about communicating the value of open data and replicability in research. It was valuable to hear from a number of speakers (including Rosie Higman from the University of Manchester, Marta Busse-Wicher from the University of Cambridge and Marta Teperek from TU Delft) about the changing role of support staff, away from delivering training and towards coordination. Peers are seen to be far more effective in encouraging deeper engagement, communicating personal rather than prescriptive messages (as evidenced by the Data Conversations at Lancaster University). A member of the audience commented that where attendance at their courses is low, the institution creates videos of researcher-led activities to be delivered at the point of need.

I was struck by two key areas of activity that I could act on with immediate effect:

Inclusivity – Beth Montagu Hellen (Bishop Grosseteste) highlighted the pressing need for open data to be made relevant to all disciplines. Cambridge promotes a deliberately broad definition of data for this reason. Yet more could be done to facilitate this; I’ll be following @OpenHumSocSci to monitor developments. We’re fortunate to have a Data Science Group at Wolfson promoting examples of best practice. However, I’m keen to meet with them to discuss how their activities and the language they use could be made more attractive to all disciplines.

Communication – Significant evidence was presented by Nicole Janz, Stephen Eglen and others that persuading researchers of the benefits of open data leads to higher levels of engagement than compulsion on the grounds of funder requirements. This will have a direct impact on the tone and content of our support. A complementary approach was proposed: targeted campaigns to coincide with international events, in conjunction with frequent, small-scale messages. We’ll be tapping into Love Data Week in 2018, with more regular exposure in email communication and on @WolfsonLibrary.

As a result of attending this conference, I’ll be blogging about open data on the Wolfson Information Skills blog and providing pointers to resources on our college LibGuide. I’ll also be working closely with colleagues across the college to timetable face-to-face training sessions.

Published 15 December 2017
Written by Dr Laurent Gatto, Angela Talbot, Dr Stephen Eglen, Kirsten Elliott and Laura Jeffrey

Benchmarking RDM Training

This blog post reports on the progress of an international project to benchmark Research Data Management training across institutions. It is a collaboration between Cambridge Research Data Facility staff and international colleagues – a full list is at the bottom of the post. This is a reblog; the original appeared on 6 October 2017.

How effective is your RDM training?

When developing new training programmes, one often asks oneself about the quality of the training. Is it good? How good is it? Trainers often develop feedback questionnaires and ask participants to evaluate their training. However, feedback gathered from participants does not answer the question of how good the training was compared with similar training available elsewhere. As a result, improvement and innovation become difficult. So how can the quality of training be assessed objectively?

In this blog post we describe how, by working collaboratively, we created tools for objective assessment of RDM training quality.

Crowdsourcing

In order to assess something objectively, objective measures need to exist. Being unaware of any objective measures for benchmarking training programmes, we asked Jisc’s Research Data Management mailing list for help. It turned out that plenty of resources with useful advice and guidance on creating informative feedback forms were readily available, and we gathered all the information received in a single document. However, none of the answers provided the information we were looking for. On the contrary, several people said they would be interested in such metrics. This meant that objective metrics for assessing the quality of RDM training either did not exist or the community was not aware of them. We therefore decided to create RDM training evaluation metrics.

Cross-institutional and cross-national collaboration

For metrics to be objective, and to allow benchmarking and comparison of various RDM courses, they need to be developed collaboratively by the community that will use them. The next question we asked Jisc’s Research Data Management mailing list was therefore whether people would be willing to work together to develop and agree on a joint set of RDM training assessment metrics and a system that would allow cross-comparisons and training improvements. Thankfully, the RDM community tends to be very collaborative, and this was the case here too: more than 40 people were willing to take part in the exercise, and a dedicated mailing list was created to facilitate collaborative working.

Agreeing on the objectives

To ensure effective working, we first needed to agree on common goals and objectives. We agreed that the purpose of creating a minimal set of benchmarking questions is to identify what works best in RDM training. We worked on the basis that this was for ‘basic’ face-to-face RDM training for researchers or support staff, but that it could be extended to other types and formats of training session. We reasoned that the same set of questions used in feedback forms across institutions, combined with sharing of training materials and contextual information about sessions, should facilitate the exchange of good practice and ideas. As an end result, this should allow constant improvement and innovation in RDM training. We therefore had joint objectives, but how could we achieve them in practice?

Methodology

Deciding on common questions to be asked in RDM training feedback forms

In order to establish joint metrics, we first had to decide on a joint set of questions that we would all agree to use in our participant feedback forms. To do this, we organised a joint catch-up call during which we discussed the various questions we were asking in our feedback forms, and why we thought these were important and should be mandatory in the agreed metrics. There were lots of good ideas and valuable suggestions. However, by the end of the call, and after eliminating all the non-mandatory questions, we still had a list of thirteen questions, all of which we thought were important. This was too many to ask participants to complete, especially as many institutions would need to add their own institution-specific feedback questions.

In order to bring down the number of questions to be made mandatory in feedback forms, a short survey was created and sent to all collaborators, asking respondents to judge how important each question was on a scale of 1-5 (1 being ‘not at all important that this question is mandatory’ and 5 being ‘this should definitely be mandatory’). Twenty people participated in the survey. The total score received from all respondents for each question was calculated, and the six questions with the highest scores were selected to be made mandatory.
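For illustration, here is a minimal sketch in Python of that selection step. The ratings are made up (the real questions and responses live on the OSF); only the procedure – summing the 1-5 scores per question and keeping the six highest-scoring questions – follows the description above.

```python
import random

random.seed(42)

# Thirteen hypothetical candidate questions and twenty respondents,
# mirroring the numbers in the post; the ratings themselves are invented.
questions = [f"Q{i}" for i in range(1, 14)]
n_respondents = 20

# Each respondent rates each question on a 1-5 importance scale
# (5 = "this should definitely be mandatory").
ratings = {q: [random.randint(1, 5) for _ in range(n_respondents)]
           for q in questions}

# Total score per question across all respondents.
totals = {q: sum(scores) for q, scores in ratings.items()}

# The six questions with the highest total scores become mandatory.
mandatory = sorted(totals, key=totals.get, reverse=True)[:6]
print("Mandatory questions:", mandatory)
```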

Ways of sharing responses and training materials

We next had to decide how we would share the feedback responses from our courses, and the training materials themselves. We unanimously agreed that the Open Science Framework (OSF) supports the goals of openness, transparency and sharing, and allows collaborative working, and that it was therefore a good home for the project. We created a dedicated space for the project on the OSF, with separate components for the joint resources developed, for sharing training materials, and for sharing anonymised feedback responses.

Next steps

With the benchmarking questions agreed and a space created for sharing anonymised feedback and training materials, we were ready to start collecting the first feedback for the collective training assessment. We also thought that this was a good opportunity to reiterate our short-, mid- and long-term goals.

Short-term goals

Our short-term goal is to revise our existing training materials and incorporate the agreed feedback questions into RDM training courses starting in autumn 2017. This would allow us to obtain the first comparative metrics at the beginning of 2018 and to evaluate whether our methodology and tools are working and fit for purpose. It would also allow us to iterate over our materials and methods as needed.

Mid-term goals

Our mid-term goal is to see whether the metrics, combined with shared training materials, can help us identify the parts of RDM training that work best and collectively improve the quality of our training as a whole. This should be possible in mid-to-late 2018, allowing time to adapt training materials in response to the comparative feedback gathered at the beginning of 2018 and to assess whether those adaptations result in better participant feedback.

Long-term goals

Our long-term goal is to collaboratively investigate and develop metrics that could allow us to measure and monitor the long-term effects of our training. Feedback forms and satisfaction surveys completed immediately after training are useful and help to assess the overall quality of the sessions delivered. However, the ultimate goal of any RDM training should be the improvement of researchers’ day-to-day RDM practice. Is our training really having any effect on this? To assess this, different kinds of metrics are needed, coupled with long-term follow-up with participants. We decided that any ideas on how best to address this will also be gathered on the OSF, and we have created a dedicated space for this work in progress.

Reflections

Reflecting on the work we did together, we all agreed that we were quite efficient. We started in June 2017, and it took us two joint catch-up calls and a couple of email exchanges to develop and agree on joint metrics for the assessment of RDM training. Time will tell whether the resources we created will help us meet our goals, but we all felt that we had already learned a lot from each other by sharing good practice and experience. Collaboration turned out to be an excellent solution for us. Our discussions are open to everyone, so if you are reading this blog post and would like to collaborate with us (or simply follow our conversations), sign up to the mailing list.


Published 9 October 2017
Written by (in alphabetical order by surname): Busse-Wicher Marta, Cadwallader Lauren, Higman Rosie, Lawler Heather, Neish Peter, Peters Wayne, Schwamm Hardy, Teperek Marta, Verbakel Ellen, Williamson Laurian

Milestone – 1000 datasets in Cambridge’s repository

Last week, Cambridge celebrated a huge milestone: the deposit of the 1000th dataset in our repository, Apollo, since the launch of the Research Data Facility in early 2015. This is the culmination of a huge amount of work by the team in the Office of Scholarly Communication in developing systems, workflows and policies, and through an extensive advocacy campaign. The Research Data team have run 118 events over the past couple of years and published 39 blog posts.

In the past 12 months alone there have been 26,000 downloads of the data in Apollo. Some datasets have been downloaded many times – as many as 170 – and the data has featured in news stories, blogs and on Twitter.

An event was held at Cambridge University Library last week to celebrate this milestone.


Opening remarks

The Director of Library Services, Dr Jess Gardner, opened proceedings with a speech in which she noted that “the Research Data Services and all who sail in her are at the core of our mission in our research library”.

Dr Gardner referred to the library’s long and proud history of collecting and managing research data that “began on vellum, paper, stone and bone”. The research data of luminaries such as Isaac Newton and Charles Darwin was on paper and, she noted, “we have preserved that with great care and share it openly online through our digital library”.

Turning to the future, Dr Gardner observed: “But our responsibility now is today’s researcher and today’s scientists and people working across all disciplines across our great university. Our preservation stewardship of that research data from the digital humanities across the biomedical is a core part of what we now do.”

“In the 21st century our support and our overriding philosophy is all about supporting open research and opening data as widely as possible,” she noted. “It is about sharing freely wherever it is appropriate to do so.” [Dr Gardner’s speech appears in full at the end of this post.]

Perspectives from a researcher

The second speaker was Zoe Adams, a PhD student at Cambridge who talked about the work she has done with Professor Simon Deakin on the Labour Regulation Index in association with the Centre for Business Research.

Ms Adams noted that it was only in retrospect that she could “appreciate the benefit of working in a collaborative project and open research generally”. She discussed how helpful it had been, as an early career researcher, to be “associated with something that was freely available”. She observed that few of her peers had many citations, and that the reason she did was that “the dataset is online, people use the data, they cite the data, and cite me”.

Working openly has also improved the way she works, she explained, saying “It has given me a new perspective on what research should be about. …  It gives me a sense that people are relying on this data to be accurate and that does change the way you approach it.”

View from the team

The final speaker was Dr Lauren Cadwallader, Joint Deputy Head of the OSC with responsibility for the Research Data Facility, who discussed the “showcase dataset of the data that we can produce in the OSC”, taken from usage of our Request a Copy service.

Dr Cadwallader noted that there has been an increase in requests for theses over time. “This is a really exciting observation because the Board of Graduate Studies have agreed that all students should deposit a digital copy of their thesis in our repository,” she said. “So it is really nice evidence that we can show our PhD students that by putting a copy in the repository people can read it, and people do want to read theses in our repository.”

One observation was that several of the requested theses were written 60 years ago, so the repository is sharing older research as well. The topics covered ranged from algebra to Yorkshire evangelists, and one of the oldest requested theses, written in 1927, was about the Falkland Islands. “So there is a longevity in research and we have a duty to provide access to that research,” she said.

Thanks go to…

The dataset itself was created by the OSC team from the usage of our Request a Copy service. The analysis was undertaken by Peter Sutton Long, and we recently published a blog post about the findings.

The music played at the event was compiled by Tony Malone and covers almost 1000 years of music, from Laura Cannell’s reworking of Hildegard of Bingen to Jane Weaver’s Modern Cosmology. There are acknowledgments to Apollo, and to Cambridge too. The soundtrack is available for those interested in listening.

This achievement is entirely due to the incredible work of the team in the Research Data Facility and their ability to engage with colleagues across the institution, the nation and the world. In particular, the vision and dedication of Dr Marta Teperek cannot be overstated.

In the words of Dr Gardner: “They have made our mission different, they have made our mission better, through the work they have achieved and the commitment they have.”

The event was supported by the Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.


Published 21 September 2017
Written by Dr Danny Kingsley

Speech by Dr Jess Gardner

First, let us begin with some headline numbers. One thousand datasets. This is hugely significant, and a very high figure when looking at research repositories around the country. There is every reason to be proud of that achievement and what it means for open research.

There have been 26,000 downloads of that data in the past 12 months alone – that is about the use and reuse of our research data, and it is changing the face of how we do research. Some of these datasets have been downloaded 117 times and used in news, blogs and Twitter. The Research Data team have written 39 blog posts about research data and have run 118 events, most of them with researchers.

While the headline numbers give us a sense of volume, perhaps let’s talk about the underlying rationale and philosophy behind this, which is core.

Cambridge University Library has a 600-year-old history we are very proud of. In that time we have had an abiding responsibility to collect, care for and make available for use and reuse the information and research objects that form part of the intrinsic international scholarly record, of which Cambridge has been such a strong part, and to enable those ideas to inspire new ideas. The collection began on vellum, paper, stone and bone.

And today much of that is, of course, digital. You can’t see it in the same way you can see the manuscripts and collections, and it is sometimes hard to grasp when we are in this grand old dame of a building that I dare you not to love. It is home to the physical papers of such greats as Isaac Newton and Charles Darwin. Their research data was on paper, and we have preserved that with great care and share it openly online through our digital library. But our responsibility now is to today’s researchers and today’s scientists, people working across all disciplines across our great university. Our preservation stewardship of that research data, from the digital humanities across to the biomedical, is a core part of what we now do.

And the people in this room have changed that. They have made our mission different, they have made our mission better through the work they have achieved and the commitment they have.

Philosophically, this is a very natural extension of what we have done in the Library, the open library and its great research community for which this very building was designed. Some of you may know there is a philosophy behind this building and the famous ‘open library Cambridge’. In the 19th and 20th centuries that was mostly about our open stacks of books, and we have quite a few of them; we are a little weighed down by them.

Our research data weighs less, but it is just as significant, and in the 21st century our support and our overriding philosophy are all about supporting open research and opening data as widely as possible. It is about sharing freely wherever it is appropriate to do so, and there are many reasons why data sometimes isn’t open, and that is fine. What we are looking for is managing data so that we can make those choices appropriately, just as we have with the archive for many, many years.

So whilst there is a fantastic achievement to mark tonight with those 1000 datasets, and it really is significant, we are also celebrating a deeper milestone with our research partners, our data champions, and our colleagues in the research office and in the libraries across Cambridge: the changing role of research support, and of library research support, in the digital age. I think that is something we should be very proud of in terms of what we have achieved at Cambridge. I certainly am.

I am relatively new here at Cambridge. One of the things said to me when I was first appointed to the job was how lucky I was to be working at this University, and with the Office of Scholarly Communication in particular, and that has proved to be absolutely true. I would like to take this opportunity to mark the achievement of 1000 datasets and to state very publicly that the Research Data Services and all who sail in her are at the core of our mission in our research library, but also to thank you and the teams involved for your superb achievements. It really is something to be very proud of, and I thank you.