
Engagement, infrastructure and roles: themes at #ScholComm19

Dr Beatrice Gini, the Office of Scholarly Communication’s new Training Coordinator, recently attended the inaugural Scholarly Communication Conference at the University of Kent. In this post she reviews the main themes and discussions from the event.

ScholComm19 – a brand new conference, a supportive community, an inclusive space: what a treat for a newcomer to scholarly communication! Having recently started a job within the Office of Scholarly Communication, I had high expectations for this conference as an opportunity to learn a lot from fellow practitioners, and I was not disappointed. Sarah Slowe and the team at the University of Kent should be congratulated for their drive in starting up a new gathering that draws together all the different strands of Scholarly Communications, giving those working at the coalface a chance to get together and share best practice.

With the whole of Friday given over to lightning talks, there were too many speakers for me to do them justice individually, so instead I will attempt to summarise the major themes, as I understood them. The full conference programme can be found here.

Engaging researchers

Many of the speakers focused on how we work with researchers. Hardly surprising, perhaps, as our jobs tend to involve as much advocacy and training as they do practical support. While this can at times be a challenge, many have found ways to deliver our messages more effectively:

  • A personal touch – Cassie Bowman from London South Bank University was faced with a lack of researcher engagement, due to the limitations of the technological platform, the complex terminology, the conflicting demands of policies, the difficulties in correcting initial misunderstandings, and the researchers’ fear of getting it wrong. She overcame these not by commissioning large-scale change, but through her own personal touch. Her one-to-one sessions are carefully tailored to each researcher and produce long-lasting changes in attitudes. She reaches people through posters and infographics, sprinkling on a little competition (for the highest download figures) to boost interest. Lucy Lambe also spoke on the benefits of one-to-one sessions, alongside workshops and web-based advice, for her publishing advice service for researchers at LSE.
  • A bit of fun – The Publishing Trap game is now well known in ScholComm circles, but it was new to me, and I was blown away. It takes players through a cleverly crafted path from PhD student to retired researcher and beyond – all the way to the gravestone, in fact – replicating the emotional highs and lows of a research career. Most importantly, though, it asks players to make crucial decisions that spark discussions on Open Access, copyright, skills, and more. Why not organise a fun session to surprise those who may (crazily!) believe that copyright is boring?
  • Useful information – We need to deliver information that is trustworthy and useful. Kirsty Wallis (University of Greenwich) stressed the importance of over-preparing and tailoring sessions to the needs of the people in the room. Her talk gave a useful blueprint of how we could teach academics to ‘speak social media’ through a flexible and hands-on workshop. ‘We need to be a credible source of information’ – this was one aspect of Julie Baldwin’s (University of Nottingham) exploration of why academics ‘get copyright so copywrong’. Engaging researchers with copyright issues is more important than ever now, at a time of change in the law. The University of Kent’s Chris Morrison gave a whistle-stop tour of the history of copyright law, followed by a sneak preview of the way the law may change once the new EU directive is implemented (yes, Brexit did flash briefly on the screen at this point, but it should not have a significant impact on copyright decisions).

Compliance vs culture change

Ian Carter’s talk on the study he ran with Jisc on Research Data Management and Sharing raised a strong theme, which was echoed in many of the discussions I had during breaks. His interviews with representatives from 34 institutions revealed a tension in the way we attempt to engage researchers with RDM and open data: on the one hand we say ‘you must do this to receive money/progression/recognition’; on the other we say ‘doing this benefits science and the wider world’. My belief is that the former is likely to generate small, short-term wins on compliance rates, but potentially also resentment. The latter requires more advocacy, but is likely to generate true buy-in from researchers. Dr Carter argued that the second approach, which aims for culture change, is indeed the most likely to succeed in the long term. He threw down a challenge to all of us when he reported that researcher engagement is variable, RDM leadership is often fragile, responsible staff can be isolated, and few institutions consider all the important aspects in their strategies. There is hope, however. As repositories develop better functionality and we find better ways to evidence the benefits of RDM and open data, we may see this area of research support grow into new strengths.

Infrastructural headaches

Repositories are the bread and butter of any Open Access support team: they are wonderful digital treasure troves, opening up our universities’ invaluable research to the world and preserving it in perpetuity… but at times they can cause tremendous headaches too! A number of speakers shared the challenges they faced, as well as their solutions, saving the rest of us a lot of time and paracetamol. While institutions remain split on whether deposit in a repository is done by researchers or mediated by support staff, the trend looked to me to be towards self-deposit by academics, which will mean more and more of us will require automated systems for checking and updating records.

  • Nicola Barnett focused on how staff at the University of Leeds deal with the need to update repository records once the associated outputs are officially published, for instance to set the correct embargo deadlines. She shared a useful set of instructions for automatically generating a list of recently published outputs using Excel and a Crossref API (a rough Python equivalent is sketched after this list).
  • The diversity of publishers’ policies was arguably the most time-consuming hurdle in Suzanne Atkins’ work on making more monographs Open Access at the University of Birmingham. She ran a very successful pilot project to open up book chapters from one department, which had a glut of materials that could be made instantly OA if the authors consented. While this work was very worthwhile, and likely puts the team ahead when it comes to the next REF, it was hindered by the need to check every single policy and by publishers’ insistence on case-by-case decisions rather than blanket policies.
  • If your current system is just not up to requirements, switching to a new one can be a good time investment in the long run, but it can come with its own demands. Catherine Parker and her team at the University of Huddersfield found this out when they had to manually migrate all previous records – a great feat that really brought out their community spirit and was accomplished in (only?) two and a half months of intensive work. Stuart Bentley from the University of Hull highlighted some of the challenges of switching to Worktribe, as well as considering the improved functionality in the new system.
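
For those of us without Nicola’s spreadsheet to hand, a rough Python equivalent of the same idea might look like the sketch below. It is only an illustration, not her exact workflow: the DOI list is a placeholder, and I am assuming the repository can export the DOIs it wants to check against Crossref’s public REST API.

```python
"""Check a list of repository DOIs against Crossref for publication dates.

A minimal sketch of the 'find recently published records' idea; the DOI
list is a placeholder, not the exact workflow presented at the conference.
"""
import requests

# DOIs exported from the repository (placeholder examples)
dois = [
    "10.1000/example.one",
    "10.1000/example.two",
]

for doi in dois:
    resp = requests.get(f"https://api.crossref.org/works/{doi}")
    if resp.status_code != 200:
        print(f"{doi}: not found in Crossref")
        continue
    work = resp.json()["message"]
    # Crossref records publication dates as nested 'date-parts' lists,
    # e.g. [[2019, 5, 30]]; online and print dates are held separately.
    published = work.get("published-online") or work.get("published-print")
    if published:
        date = "-".join(str(part) for part in published["date-parts"][0])
        print(f"{doi}: published {date}")
    else:
        print(f"{doi}: no publication date yet")
```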

Roles and time

Finally, several speakers examined the way teams are structured, often in the context of the age-old question of how to get it all done in the time we have.

  • Surveys run by Catherine Parker and Ian Carter revealed great disparities in the size of research support and data management teams across institutions, varying from one person to well over a dozen. Even the areas where these staff sit vary: most are based in libraries, but some belong to research strategy offices. Lone workers have the blessing and the curse of taking on all aspects of the work, from maintaining the repository to liaising with faculty members and running training, while large teams can specialise their staff.
  • Jane Belger and Anne Lawson talked about their experience of sharing the role of Research and Open Access Librarian at the University of the West of England, Bristol. Having worked out the logistics of syncing schedules, and when to divide up projects versus when to collaborate, their main conclusion is that two people can be ‘more than the sum of their parts’.
  • The multiplicity of roles was evident both in the talks and in the chats during breaks. Almost every speaker gave an introduction to their institution, which was key to understanding their perspective. A case in point came from Isabel Benton of Leeds Arts University, who highlighted the peculiar challenges of working at a place where as many as 43% of outputs are in non-traditional formats such as art shows or exhibitions: how do you capture those in a repository? (Hint: with a creative mix of media; check out the repository to find out more.)

*****

There was lots to think about on the train home. The overwhelming feeling, though, was of a community that genuinely cares about doing our very best to support researchers, and is dedicated to helping each other, both within institutions and beyond.

Published 30 May 2019
Written by Dr Beatrice Gini
Creative Commons License

Forget compliance. Consider the bigger RDM picture

The Office of Scholarly Communication sent Dr Marta Teperek, our Research Data Facility Manager, to the International Digital Curation Conference held in Amsterdam on 22-25 February 2016. This is her report from the event.

Fantastic! This was my first IDCC meeting and already I can’t wait for next year. There was not only amazing content in high-quality workshops and conference papers, but also a great opportunity to network with data professionals from across the globe. And it was so refreshing to set aside our UK problem of compliance with data sharing policies, to instead really focus on the bigger picture: why it is so important to manage and share research data and how to do it best.

Three useful workshops

The first day was really intense – the plan allowed for either one full-day workshop or two half-day workshops, but I managed to squeeze three into one day.

Context is key when it comes to data sharing

The morning workshop, “A Context-driven Approach to Data Curation for Reuse”, was run by Ixchel Faniel (OCLC), Elizabeth Yakel (University of Michigan), Kathleen Fear (University of Rochester) and Eric Kansa (Open Context). We were split into small groups and asked to decide which information about datasets was most important from the re-user’s point of view. Would the re-user care about the objects themselves? Would s/he want hints about how to use the data?

We all had difficulties arranging the necessary information in order of usefulness. We were then asked to re-order the information according to its importance from the repository manager’s point of view. The take-home message was that, for every group, the information about datasets required by the re-user was not the same as that required by the repository.

In addition, the presenters provided discipline-specific context based on interviews with researchers: depending on the research discipline, different information about datasets was considered the most important. For example, for zoologists information about specimens was very important, but it was of negligible importance to social scientists. Context is therefore crucial for collecting appropriate metadata; without sufficient contextual information, data is not useful.

So what can institutional repositories do to address these issues? If research carried out within a given institution only covers certain disciplines, then institutional repositories could relatively easily contextualise the metadata being collected and presented for discovery. However, repositories hosting research from many different disciplines will find this much more difficult. For example, the Cambridge repository has to host research spanning particle physics, engineering, economics, archaeology, zoology, clinical medicine and many, many others, which makes it much more difficult (if not impossible) to contextualise the metadata.

It is not surprising that the information most important from the repository’s point of view differs from that required by data re-users. In order to ensure that research data can be effectively shared and preserved in the long term, repositories need to collect a certain amount of administrative metadata: who deposited the data, what the file formats are, what the data access conditions are, and so on. However, repositories should collect as much of this administrative metadata as possible in an automated way. For example, if the user logs in to deposit data, all the relevant information about the user should be automatically harvested from feeds from human resources systems.
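
To make the point concrete, here is a minimal sketch of what automated harvesting of administrative metadata at deposit time could look like. The HR lookup function and all field names are hypothetical; the point is simply that the depositor should not have to type any of this by hand.

```python
"""Pre-populate administrative metadata at deposit time.

A minimal sketch: 'fetch_hr_record' stands in for a feed from the
institution's human resources system and is entirely hypothetical.
"""

def fetch_hr_record(user_id):
    # In a real system this would call the HR system's feed or API.
    return {
        "name": "Dr Jane Example",
        "department": "Department of Zoology",
        "email": "jane.example@university.example",
    }

def start_deposit(user_id, files):
    """Build a deposit record with admin metadata harvested automatically."""
    hr = fetch_hr_record(user_id)
    return {
        "depositor": hr["name"],
        "department": hr["department"],
        "contact": hr["email"],
        # File formats can be recorded automatically from the uploads.
        "file_formats": sorted({f.rsplit(".", 1)[-1] for f in files}),
        # Only descriptive, discipline-specific metadata is left for
        # the researcher to supply by hand.
        "description": None,
    }

print(start_deposit("je123", ["survey.csv", "notes.txt", "data.csv"]))
```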

EUDAT – Pan-European infrastructure for research data

The next workshop was about EUDAT – a collaborative pan-European infrastructure providing research data services, training and consultancy for researchers. EUDAT is an impressive project funded by a Horizon 2020 grant, and it offers five services to researchers:

  • B2DROP – a secure and trusted data exchange service to keep research data synchronized, up-to-date and easy to exchange with other researchers;
  • B2SHARE – service for storing and sharing small-scale research data from diverse contexts;
  • B2SAFE – service to safely store research data by replicating it and depositing it at multiple trusted repositories (additional data backups);
  • B2STAGE – service to transfer datasets between EUDAT storage resources and high-performance computing (HPC) workspaces;
  • B2FIND – discovery service harvesting metadata from research data collections from EUDAT data centres and other repositories.

The project has a wide range of services on offer and is currently looking for institutions to pilot them with. I personally think these services (if successfully implemented) would be of great value to the pan-European research community.

However, I have two reservations about the project:

  • Researchers are being encouraged to use EUDAT’s platforms to collaborate on their research projects and to share their research data. However, the funding for the project runs out in 2018. The EUDAT team is now investigating options to ensure the sustainability and future funding of the project, but what will happen to researchers’ data if the funding is not secured?
  • Perhaps, if the funding is limited, it would be more useful to focus the offering on services that are not provided elsewhere. For example, another EC-funded project, Zenodo, already offers a user-friendly repository for research data, and the Open Science Framework offers a platform for collaboration and easy exchange of research data. By contrast, a pan-European service harvesting metadata from various data repositories and enabling data discovery is clearly much needed and would be extremely useful to have.

Jisc Shared RDM Services for UK institutions

I then attended the second half of the Jisc workshop on shared Research Data Management services for UK institutions. The University of York and the University of Cambridge are two of the 13 institutions participating in the pilot. Jenny Mitcham from York and I gave presentations on our institutional perspectives on the pilot project: where we are at the moment and what our key expectations are. Jenny gave an overview of the impressive work by her and her colleagues on addressing data preservation gaps at the University of York. Data preservation is one of the areas in which Cambridge hopes to get help from the Jisc RDM shared services project. Additionally, as we have described before, Cambridge would greatly benefit from solutions for big data and for personal/sensitive data. My presentation from the session is available here.

Presentations were followed by breakout group discussions, in which participants were asked to identify priority areas for the Jisc RDM pilot. The top priority identified by all the groups seemed to be solutions for personal/sensitive data and for effective data access management. This was very interesting to me, because at similar workshops held by Jisc in the UK, breakout groups had prioritised interoperability with existing institutional systems and cost-effectiveness. This could be one of the unforeseen effects of strict funders’ research data policies in the UK, which required institutions to provide local repositories to share research data.

As a result of these policies, many UK institutions were tasked with creating institutional data repositories from scratch in a very short time. Most UK universities now have institutional repositories which allow research data to be uploaded and shared, but very few have repositories that are well integrated with other institutional systems. The absence of this policy pressure outside the UK perhaps allowed institutions elsewhere to think more strategically about developing their RDM services and to ensure they are well embedded within existing institutional infrastructure.

Conference papers and posters

The two following days were full of excellent talks. My main problem was deciding which sessions to attend: from talking with other attendees, I know that the papers presented in parallel sessions were also extremely useful. Budget allowing, I certainly think it would be useful for more participants from each institution to attend the meeting and cover more of the parallel sessions.

Below are my main reflections from keynote talks.

Barend Mons – Open Science as a Social Machine

This was a truly inspirational talk, prompting a lot of thought-provoking discussion. Barend started from the reflection that more and more brilliant brains, with more and more powerful computers and billions of smartphones, have created a single, interconnected social super-machine. This machine generates data – vast amounts of data – which is difficult to comprehend and work with unless proper tools are used.

Barend mentioned that, with the current speed at which new knowledge is generated and papers are published, it is simply impossible for human brains to assimilate the constantly expanding body of new knowledge. Brilliant brains need powerful computers to process the growing amount of information. But in order for science to be accessible to computers, we need to move away from PDFs: our research needs to be machine-readable. And perhaps, if publishers do not want to support machine-readability, we need to move away from the current publishing model.

Barend also stressed that if data is to be useful and correctly interpretable, it needs to be accessible not only to machines but also to humans, and that effort is needed to describe data well. Research data without proper metadata description, he said, is useless (if not harmful). And how do we make research data meaningful? Barend proposed a very compelling solution: no research grant should be awarded without 5% of the money dedicated to data stewardship.

I could not agree more with everything that Barend said. I hope that research funders will also support Barend’s statement.

Andrew Sallans – nudging people to improve their RDM practice

Andrew started his talk with the reflection that, in order to improve our researchers’ RDM practice, we need to do better than talking about compliance and about making data open. How is a researcher supposed to make data accessible if the data was not properly managed in the first place? The Open Science Framework was created with three mission statements:

  • Technology to enable change;
  • Training to enact change;
  • Incentives to embrace change.

So what is the Open Science Framework (OSF)? It is an open source platform to support researchers during the entire research lifecycle: from the start of the project, through data creation, editing and sharing with collaborators, to data publication. What I find most compelling about the OSF is that it allows one to connect, in one place, the various platforms where researchers store and collaborate on their data: researchers can easily plug in resources stored on Dropbox, Google Drive, GitHub and many others.
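
For the technically curious, the OSF also exposes this through its public REST API, so the storage providers plugged into a project can be listed programmatically. The sketch below is my own illustration against the v2 API, using a placeholder project ID.

```python
"""List the storage providers connected to a public OSF project.

A sketch against the OSF API v2; 'abc12' is a placeholder project ID.
"""
import requests

node_id = "abc12"  # placeholder: a public OSF project ID
resp = requests.get(f"https://api.osf.io/v2/nodes/{node_id}/files/")
resp.raise_for_status()

# The endpoint returns one entry per connected storage provider
# (osfstorage by default, plus any add-ons such as Dropbox or GitHub).
for provider in resp.json()["data"]:
    print(provider["attributes"]["name"])
```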

To incentivise behavioural change among researchers, the OSF team came up with two other initiatives: badges to acknowledge open practices, and the Preregistration Challenge, which offers cash rewards to researchers who pre-register their studies.

Personally, I couldn’t agree more with Andrew that enabling good data management practice should be the starting point. We can’t expect researchers to share their research data if we have not provided them with the tools and support for good data management. However, I am not so sure about the idea of cash rewards.

In the end, researchers become researchers because they want to share the outcomes of their research with the community. This is the principle behind academic research: the only way to move ideas forward is to exchange findings with colleagues. Do researchers need to be paid extra to do the right thing? I personally do not think so, and I believe that whoever decides to pursue an academic career is prepared to share. It is our task to make data management and sharing as easy as possible, and the OSF will certainly be a great aid to the community.

Susan Halford – the challenge of big data and social research

The last keynote was from Susan Halford, and her talk was again very inspirational and thought-provoking. She talked about the growing excitement around big data and how trendy it has become, almost to the point of being perceived as a solution to every problem. However, Susan also pointed out its problems: simply increasing computational power without fully comprehending the questions and the methodology used can lead to serious misinterpretation of results. Susan concluded that when doing big data research one has to be extremely careful about choosing a proper methodology for data analysis, reflecting on both the type of data being collected and (inter)disciplinary norms.

Again – I could not agree more. Asking the right question and choosing the right methodology are key to drawing the right conclusions. But are these problems new to big data research? I personally think we are all quite familiar with these challenges: questions about the right experimental design and the right methodology have been with us for as long as the scientific method has been used.

Researchers have always needed to design studies carefully before commencing experiments: what will the methodology be, what are the necessary controls, what should the sample size be, what needs to happen for the study to be conclusive? To me this is not a problem of big data; it is a problem that needs to be addressed by every researcher from the very start of the project, regardless of the amount of data the project generates or analyses.

Birds of a Feather discussions

I had not experienced Birds of a Feather (BoF) discussions at a conference before, and I am absolutely amazed by the idea. Before the conference started, attendees were invited to propose ideas for discussion, keeping in mind that BoF sessions might have the following scope:

  • Bringing together a niche community of interest;
  • Exploring an idea for a project, a standard, a piece of software, a book, an event or anything similar.

I proposed a session about the sharing of personal/sensitive data. Luckily, the topic was selected for discussion and I co-chaired it together with Fiona Nielsen from Repositive. We both thought the discussion was great, and our blog post from the session is available here.

And again, I was very sorry to be the only attendee from Cambridge at the conference. There were four parallel discussions and since I was chairing one of them, I was unable to take part in the others. I would have liked to be able to participate in discussions on ‘Data visualisation’ and ‘Metadata Schemas’ as well.

Workshops: Appraisal, Quality Assurance and Risk Assessment

The last day was again devoted to workshops. I attended an excellent workshop from the PERICLES project on appraisal, quality assurance and risk assessment in research data management. The workshop was about how an institutional repository should audit data when accepting deposits, and how to measure the risk of datasets becoming obsolete.

These are extremely difficult questions and, due to their complexity, very difficult to address. Still, the project leaders realised the importance of addressing them systematically, and ideally in a (semi-)automated way, using specialised software to help repository managers make the right preservation decisions.

In a way I felt sorry for the presenters: their progress and ambitions were so high that probably none of us attendees was able to contribute critically to the project. We were all deeply impressed by the level of the questions asked, but our own experience with data preservation and policy automation was nowhere near the level demonstrated by the workshop leaders.

My take-home message from the workshop is that a proper audit of ingested data is of crucial importance. Even if automated risk assessment is not possible, repository managers should at least collect information about the files being deposited, so that they can assess the likelihood of their obsolescence in the future, or at least identify key file formats/software types as preservation targets to ensure that key datasets do not become obsolete. For me the workshop was a real highlight of the conference.
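
As a concrete (if bare-minimum) illustration of that idea, the sketch below tallies file formats at ingest and flags anything outside a preferred-formats list. The list itself is a placeholder; a real repository would use a format identification tool such as DROID against the PRONOM registry.

```python
"""Tally file formats at ingest and flag potential preservation risks.

A bare-minimum sketch: real repositories would use a format
identification tool such as DROID against the PRONOM registry.
The 'preferred' list below is a placeholder, not an authoritative policy.
"""
from collections import Counter
from pathlib import Path

# Formats a (hypothetical) preservation policy considers low-risk.
PREFERRED = {".csv", ".txt", ".pdf", ".tif", ".xml"}

def audit_deposit(deposit_dir):
    extensions = Counter(
        p.suffix.lower() for p in Path(deposit_dir).rglob("*") if p.is_file()
    )
    for ext, count in extensions.most_common():
        flag = "" if ext in PREFERRED else "  <- review obsolescence risk"
        print(f"{ext or '(no extension)'}: {count}{flag}")

audit_deposit("incoming_deposit")
```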

Networking and the positive energy

Lots of useful workshops, plenty of thought-provoking talks. But for me one of the most important parts of the conference was meeting great colleagues and having fascinating discussions about data management practices. I never thought I could spend an evening (night?) with people willing to talk about research data without the slightest sign of boredom. And the most joyful and refreshing part of the conference was that, because we came from across the globe, our discussions moved away from the compliance aspects of data policies. Free from policy, we were able to address how best to support research data management: how to best help researchers, what our priority needs are, and what we, as data managers, should do first with our limited resources.

I am looking forward to catching up next year with all the colleagues I met in Amsterdam, to seeing what progress we have all made with our projects, and to discussing what our collective next moves should be.

To summarise, I came back with lots of new ideas, full of energy and a positive attitude – ready to advocate for the bigger picture and the greater good. I came back exhausted, but I cannot imagine spending four days more productively and fruitfully than at IDCC.

Thanks so much to the organisers and to all the participants!

Published 8 March 2016
Written by Dr Marta Teperek

Creative Commons License

Sharing personal/sensitive research data

Sharing research data comes with many ethical and legal issues. Since these issues are often complex and can rarely be solved with one-size-fits-all solutions, they tend not to be addressed as topics at conferences and workshops. We therefore thought that the gathering of data curation professionals at IDCC 16 would be an excellent opportunity to start these discussions.

This blog post is our informal report from a Birds of a Feather discussion on the sharing of personal/sensitive research data, which took place at the International Digital Curation Conference (“Visible data, invisible infrastructure”) in Amsterdam on 23 February 2016.

The need for good models for sharing personal/sensitive data

Many funders and experts in data curation agree that sharing personal and sensitive data needs to be planned from the start of a research project in order to be successful. Whenever it is possible to anonymise research data, this is the advised procedure before data is shared. For data which cannot be anonymised, governance procedures for data access need to be established.

We were interested to find out what practical solutions for sharing personal/sensitive data the data curators and data managers at the meeting could offer. To our surprise, only two data curators reported providing solutions for hosting personal/sensitive data, and of these two, one repository accepted only anonymised data. The rest were not currently making personal/sensitive data available via their repositories.

Why is sharing personal/sensitive data so difficult to manage? Three main issues were discussed: anonymisation difficulty, problems with providing managed access to research data and technical issues.

Anonymisation difficulty

There was a lot of discussion about data anonymisation. When anonymising data one has to consider both direct and indirect identifiers. One of the data curators present at the meeting explained that their repository would accept anonymised data provided that it had no direct identifiers and at most three indirect identifiers. But sometimes even a small number of indirect identifiers can make participants identifiable, especially in combination with information available in the public domain.

So perhaps, instead of talking about data anonymisation, one should focus on estimating the risk of re-identification of participants. It would be useful for the community if tools to assess the risk of participant re-identification in anonymised datasets were available, giving data curators the means to evaluate these risks objectively.
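
One well-established way of putting a number on this risk is k-anonymity: count how many records share each combination of indirect (quasi-)identifiers, since any combination shared by only a handful of records marks those participants as potentially re-identifiable. The toy sketch below illustrates the idea; the data and column names are placeholders, and real assessments would use dedicated tools.

```python
"""Estimate re-identification risk via k-anonymity on quasi-identifiers.

A toy sketch: the data and quasi-identifier columns are placeholders.
Records whose quasi-identifier combination is shared by fewer than k
records are flagged as re-identification risks.
"""
from collections import Counter

# Toy 'anonymised' dataset: three indirect identifiers per record.
records = [
    {"age_band": "30-39", "postcode_area": "CB1", "occupation": "teacher"},
    {"age_band": "30-39", "postcode_area": "CB1", "occupation": "teacher"},
    {"age_band": "70-79", "postcode_area": "CB4", "occupation": "astronaut"},
]
quasi_identifiers = ("age_band", "postcode_area", "occupation")
k = 2  # minimum acceptable group size

groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
for combo, size in groups.items():
    if size < k:
        print(f"risk: {combo} identifies a group of only {size} record(s)")
```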

Problems with managed access to research data

If repositories accept sensitive/personal research data they need to have robust workflows for managing access requests. The Expert Advisory Group on Data Access (EAGDA) has produced a comprehensive guidance document on governance of data access. However, there are difficulties in putting this guidance into practice.

If a repository receives a request for data access, the request is forwarded to a person nominated by the research team to handle data requests. However, research data are usually expected to be preserved long-term (five years or more), often longer than the time researchers spend at their institutions. This creates a problem: who will be there to respond to data access requests? One of the institutions accepting sensitive/personal data has a workflow in which the initial request is forwarded to the nominated person; if that person is no longer available, the request is directed to the head of faculty (a toy sketch of this routing follows the list below). However, this also creates problems:

  • Contact details for the nominated person need to be kept up to date and researchers leaving the post might not remember to notify the repository managers.
  • The head of faculty might be too busy to respond to requests, and might have insufficient knowledge about the data to manage access requests effectively.
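
Purely as an illustration of the fallback workflow just described, the sketch below routes an access request to the nominated contact and escalates to the head of faculty when that contact can no longer be reached; the contact details and reachability check are placeholders.

```python
"""Route a data access request with a fallback contact.

An illustrative sketch of the workflow described above; the contact
details and the 'is_reachable' check are placeholders.
"""
from dataclasses import dataclass

@dataclass
class Dataset:
    title: str
    nominated_contact: str
    faculty_head: str

def is_reachable(email):
    # Placeholder: a real system might check the institutional directory.
    return email.endswith("@university.example")

def route_access_request(dataset):
    """Return the address the access request should be forwarded to."""
    if is_reachable(dataset.nominated_contact):
        return dataset.nominated_contact
    # Fallback: the nominated researcher has left, so escalate to the
    # head of faculty (who may need briefing notes about the dataset).
    return dataset.faculty_head

ds = Dataset("Cohort study 2016", "pi-left@old.example",
             "head@university.example")
print(route_access_request(ds))
```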

Technical issues and workflows if things go wrong

There are also technical issues associated with sharing personal/sensitive research data. One institution reported that, due to a technical fault in its repository system, restricted research data was released as open access data and downloaded by several users (who had not signed the data access agreement) before the fault was noticed.

Follow-up discussions led to the reflection that a repository can never be 100% sure of the security of personal/sensitive data. Even assuming that technical faults will not happen, repositories can also be subject to hacking attacks. Therefore, when accepting personal/sensitive data for long-term preservation, repository managers should also assess the risks of data being inappropriately released and decide on a suitable risk mitigation strategy. Additionally, institutions should have workflows in place with procedures to be followed should things go wrong and restricted data be inappropriately released.

Other issues

Apart from the topics mentioned above we discussed other issues related to sharing personal/sensitive research data. For example:

  • What workflows do organisations have in place to check that data depositors have the rights to share confidential research data or data generated in collaboration with other third parties (external collaborators, external funding bodies, commercial partners)?
  • How do we balance the checks required to validate that a data depositor has the rights to share against the risk of discouraging depositors from sharing their research via a repository?
  • Or, if research data cannot be safely shared via a repository, do organisations offer the possibility of creating metadata-only records to facilitate data discoverability?
  • What are the implications for DOI creation?

Actions

Our discussions revealed that there are clearly more questions than answers on how to share personal/sensitive data effectively. It is therefore important that we, as a community of practitioners, start developing workflows and procedures to address these problems.

SciDataCon 2016 (11-13 September 2016) has issued a call for session proposals (deadline: 7 March), and we would like to propose a session on sharing of personal/sensitive data. If you have any practice papers that you would like to propose for this session, please fill in the Google form here. Please note that the Google form is for submitting your proposals for the session to us (it is not an official submission form for the conference). We will use your proposed practice papers to form a session proposal for the conference.

Possible topics for practice papers for the session:

  • What are the workflows for sharing commercial and sensitive data via repositories?
  • How is your organisation trying to balance the protection of confidential data with the encouragement of sharing?
  • What safety mechanisms are in place at your organisation to safeguard confidential data shared via your repository?
  • What are the workflows and procedures in place in case confidential/restricted/embargoed data is accidentally released?
  • What procedures are in place to ensure that data depositors have the rights to share confidential research data or data generated in collaboration with other third parties (external collaborators, external funding bodies, commercial partners)?
  • How do organisations balance the checks required to validate that the data depositor has the rights to share against the risk of discouraging depositors from sharing their research via a repository?
  • Other case studies/practice papers on the subject

Published 29 February 2016
Written by Fiona Nielsen, CEO at DNAdigest and Repositive, and Marta Teperek, Research Data Facility Manager at the University of Cambridge
Creative Commons License