Category Archives: Open Research at Cambridge Conference

Engaging Researchers with Good Data Management: Perspectives from Engaged Individuals

We need to recognise good practice, engage researchers early in their careers with research data management and use peers to talk to those who are not ‘onboard’. These were the messages from five attendees at the Engaging Researchers in Good Data Management conference held on the 15th of November.

The Data Champions and Research Support Ambassadors programmes are designed to increase confidence in supporting researchers with research data management and with scholarly communication more broadly, respectively. Thanks to the generous support of the Arcadia Foundation, five places were made available to attend this event. In this blog post the three Data Champions and two Research Support Ambassadors who were awarded the places give us the low-down on what they got out of the conference and how they might put what they heard into practice.

Recordings of the talks from the event can be found on the Cambridge University Library YouTube channel.

Financial recognition is the key

Dr Laurent Gatto, Senior Research Associate, Department of Biochemistry, University of Cambridge and Data Champion

As a researcher who cherishes good and reproducible data analysis, I naturally view good data management as essential. I have been involved in research data management activities for a long time, acting as a local data champion and participating in open research and open data events. I was interested in participating in this conference because it gathered data champions, stewards and the like from various British and European institutions (Cambridge, Lancaster, Delft), and I was curious to see which approaches were implemented and which issues were addressed across institutions. Another aspect of data championship/stewardship I am interested in is the recognition these efforts offer (this post touches on this a bit).

Focusing on the presentations from Lancaster, Cambridge and Delft, it is clear that direct engagement from active researchers is essential to promote healthy data management. There needs to be an enthusiastic researcher, or somebody with some experience in research, to engage with the research community about open data, reproducibility, transparency and security; a blunt top-down approach leads to limited engagement. This is also important given the plurality of what researchers across disciplines consider to be data. Informal settings, ideally driven by researchers or run in collaboration with librarians, and focusing on conversations, use cases and interviews (to quote some of the successful activities cited during the conference), have been the most successful, and have sometimes also led to new collaborations.

Despite the apparent relative success of these various data championing efforts and the support that the data champions get from their local libraries, these activities remain voluntary and come with little academic reward. Being a data champion is certainly an enriching activity for young researchers who value data, but it comes with relatively little credit and without any reward or recognition, suggesting that there is probably room for a professional approach to data stewardship.

With this in mind, I was very interested to hear the approach that is currently in place at TU Delft, where data stewards hold a joint position at the Centre for Research Data and at their respective faculty. This defines research data stewardship as an established and official activity, allows the stewards to pursue a research activity, and, explicitly, links research data to research and researchers.

I wonder whether this approach could be implemented more broadly to provide financial recognition to data stewards/champions, offer incentives (in particular for early-career researchers) to approach research data management professionally and seriously, make data management a more explicit activity that is part of research itself, and move towards a professionalisation of data management posts.

Inspiration and ideas

Angela Talbot, Research Governance Officer, MRC Biostatistics Unit and Data Champion

Tasked with improving and updating best practice in the MRC Biostatistics Unit, I went along to this workshop not really knowing what to expect but hopeful and eager to learn.

Good data management can meet with resistance: while it is viewed as an altruistic and noble thing to do, many researchers worry that making their research open and reproducible opens them up to criticism and to the theft of ideas and future plans. What I wanted to know was how to overcome this.

And boy, did this workshop live up to my expectations! From the insightful opening comments to the thought-provoking closing remarks I was hooked. The whole audience was engaged in a common purpose: to share their successes and their strategies for overcoming the barriers, so that this becomes best practice.

Three successful schemes were talked through: the Data Conversations at Lancaster, the Data Champion scheme at the University of Cambridge and the data stewards at TU Delft. All of these successful schemes had one thing in common: they combine a cross-department/faculty approach with local expertise.

Further excellent examples were provided by the lightning talks and for me, it was certainly helpful to hear of successes in engaging researchers on a departmental level.

The highlight for me was the focus groups – I was involved in Laurent Gatto’s group, discussing how to encourage more good data management by highlighting what is in it for researchers who participate, but I really wish I could have been in them all, as the feedback indicated they had given useful insights and tips.

All in all I came away from the day buzzing with ideas. I spent the next morning jotting down ideas of events and schemes that could work within my own unique department and eager to share what I had learnt. Who knows, maybe next time I’ll be up there sharing my successes!!

We need to speak to the non-converted

Dr Stephen Eglen, Reader in Computational Neuroscience, Department of Applied Mathematics & Theoretical Physics, University of Cambridge and Data Champion

The one-day meeting on Engaging Researchers in Good Data Management served as a good chance to remind all of us about the benefits of managing and sharing data, but also the responsibilities that come with it. On the positive side, I was impressed to see the diversity of approaches led by groups around the UK and beyond. It is heartening to see many universities now with teams to help manage and share data.

However, and more critically, I am concerned that meetings like this tend to focus on showcasing good examples to an audience that is already mostly convinced of the benefits of sharing. Although it is important to build the community and make new contacts with like-minded souls, I think we need to spend as much time engaging with the wider academic community. In particular, it is only when our efforts can be aligned with those of funding agencies and scholarly publishing that we can start to build a system that will give due credit to those who do a good job of managing, and then sharing, their data. I look forward to future meetings where we can have a broader engagement of data managers, researchers, funders and publishers.

I am grateful to the organisers for giving me the opportunity to speak about our code review pilot in Neuroscience. I particularly enjoyed the questions. Perhaps the most intriguing question to report came in the break, when Dr Petra ten Hoopen asked me what happens if, during code review, a mistake is found that invalidates the findings in the paper. To which I answered: (a) the code review is supposed to verify that the code can regenerate a particular finding; (b) this is an interesting question and it would probably depend on the severity of the problem unearthed; (c) we will cross that bridge when we come to it. Dr ten Hoopen noted that this was similar to finding errors in data that were being published alongside papers. These are indeed difficult questions, but I hope that in these relatively early days of data and code sharing, we err on the side of rewarding researchers who share.

Teach RDM early and often

Kirsten Elliott, Library Assistant, Sidney Sussex College, University of Cambridge and Research Support Ambassador

Prior to this conference, my experience with Research Data Management (RDM) was limited to some training through the Office of Scholarly Communication and the Research Support Ambassadors programme. This, however, really sparked my interest, and so I leapt at the opportunity to learn more about RDM by attending this event. Although at times I felt slightly out of my depth, it was fascinating to be surrounded by such experts on the topic.

The introductory remarks from Nicole Janz were a fascinating overview of the reproducibility crisis, and how this relates to RDM, including strategies for what could be done, for example setting reproducing studies as assignments when teaching statistics. This clarified for me the relationship between RDM and open data, and transparency in research.

There were many examples throughout the day of best practice in promoting good RDM, from the “Data Conversations” held at Lancaster University, international efforts from SPARC Europe and even some from Cambridge itself! Common ground across all of them included the necessity of utilising engaged researchers themselves to spread messages to other researchers, the importance of understanding discipline specific issues with data, and an expansive conception of what counts as “data”.

I am based in a college library and predominantly work supporting undergraduate students, particularly first years. In a way this makes it quite a challenge to present RDM practices as many of the issues are most obviously relevant to those undertaking research. However, I think there’s a strong argument for teaching about RDM from very early in the academic career to ingrain good habits, and I will be thinking about how to incorporate RDM into our information literacy training, and signposting students to existing RDM projects in Cambridge.

Use peers to spread the RDM message

Laura Jeffrey, Information Skills Librarian, Wolfson College, University of Cambridge and Research Support Ambassador

This inspirational conference was organised and presented by people who are passionate about communicating the value of open data and replicability in research processes. It was valuable to hear from a number of speakers (including Rosie Higman from the University of Manchester, Marta Busse-Wicher from the University of Cambridge and Marta Teperek from TU Delft) about the changing role of support staff, away from delivering training towards one of coordination. Peers are seen to be far more effective in encouraging deeper engagement, communicating personal rather than prescriptive messages (as evidenced by the Data Conversations at Lancaster University). A member of the audience commented that where attendance at their courses is low, the institution creates videos of researcher-led activities to be delivered at the point of need.

I was struck by two key areas of activity that I could act on with immediate effect:

Inclusivity – Beth Montagu Hellen (Bishop Grosseteste) highlighted the pressing need for open data to be made relevant to all disciplines. Cambridge promotes a deliberately broad definition of data for this reason. Yet more could be done to facilitate this; I’ll be following @OpenHumSocSci to monitor developments. We’re fortunate to have a Data Science Group at Wolfson promoting examples of best practice. However, I’m keen to meet with them to discuss how their activities and the language they use could be made more attractive to all disciplines.

Communication – Significant evidence was presented by Nicole Janz, Stephen Eglen and others that persuading researchers of the benefits of open data leads to higher levels of engagement than compulsion on the grounds of funder requirements. This will have a direct impact on the tone and content of our support. A complementary approach was proposed: targeted campaigns to coincide with international events, in conjunction with frequent, small-scale messages. We’ll be tapping into Love Data Week in 2018, with more regular exposure in email communication and @WolfsonLibrary.

As a result of attending this conference, I’ll be blogging about open data on the Wolfson Information Skills blog and providing pointers to resources on our college LibGuide. I’ll also be working closely with colleagues across the college to timetable face-to-face training sessions.

Published 15 December 2017
Written by Dr Laurent Gatto, Angela Talbot, Dr Stephen Eglen, Kirsten Elliott and Laura Jeffrey
Creative Commons License

Plans for scholarly communication professional development

Well, now there is a plan. The second meeting of the Scholarly Communication Professional Development Group was held on 9 October in the Jisc offices in London. This followed on from the first meeting in June, about which there is a blog post. The attendance list is again at the end of this post.

The group has agreed we need to look at four main areas:

  • Addressing the need for inclusion of scholarly communication in academic library degree courses
  • Mapping scholarly communication competencies against training provision options
  • Creating a self-assessment tool to help individuals decide if scholarly communication is for them
  • Costing out ‘on the job training’ as an option

What are the competencies in scholarly communication?

The group discussed the types of people in scholarly communication, noting that scholarly communication is not a traditional research support role either within research administration or in libraries. Working in scholarly communication requires the ability to present ideas and policies that are not always accepted or embraced by the research community.

The group agreed it would be helpful to identify what a successful scholarly communication person looks like – identifying the nature of the role, the types of skill sets and the attributes of successful practitioners. The group has identified several existing sets of competencies in the broad area of ‘scholarly communication’, including those published by NASIG and COAR.

The group agreed it would be useful to review the NASIG Competencies and see if they map to the UK situation and to ask NASIG about how they are rolling it out across the US.

The end game we are trying to get to is a suite of training products at various levels that, as a community, will make a difference to the roles we are recruiting for. We agreed it would be useful to explore how these frameworks relate to the various existing professional frameworks, such as those of CILIP, ARMA and Vitae.

The approach is to ask people: ‘Do you have a skills gap?’ rather than: ‘Do you (or your staff) need training?’. It would be helpful, then, to develop a self-assessment tool to allow people to judge their own competencies against the NASIG or COAR set (or an adaptation of these). The plan is to map the competencies against training provision options.

Audiences

We have two audiences in terms of professional training in scholarly communication:

  1. New people coming into the profession – the initial training that occurs in library schools.
  2. Those people already in a research support environment who are taking on scholarly communication roles. 

The group also discussed scope. It would be helpful to consider how many people across the UK are affected by the need for support and training.

Another issue is qualifications versus skills – there are people working in administrative roles who have expanded their skills but don’t necessarily have a qualification. Some libraries are looking at weighting past experience more highly than qualifications.

There needs to be a sense of equity if we were to introduce new requirements. While large research intensive institutions can afford professional development, in some places there is one person who has to do the scholarly communication functions as only part of their job – they are isolated and they don’t have funds for training. An option could be that if a training provision is to be ‘compliant’ with this group then it must allow some kind of free online training.

Initial training in library schools

As was discussed the previous time the group met, there is a problem in that library schools do not seem to be preparing graduates adequately for work in scholarly communication. Even the small number of graduates who have had some teaching in this area are not necessarily ready to hit the ground running and still need further development. The group agreed the sector needs to define how we skill library graduates for this detailed and complex area.

One idea that arose in the discussion was that we engage with library schools at their own conferences, perhaps proposing a debate on what they think they are doing to meet this need.

The next conference of the library schools’ Association for Library and Information Science Education is 6-9 February 2018 in Denver. Closer to home, iConference 2018 will be held 25-28 March, jointly hosted by the University of Sheffield’s Information School and the iSchool at Northumbria. However, when we considered the conference options it became clear that this would not necessarily work: the focus of these conferences is academic, not practitioner-oriented or case-study based. This might point to the source of some of the challenges we see in this space.

One of the questions was: what is really different now to the way it was 10-20 years ago? We need to survey people who are one or two years out from their qualifications.

Suggestions to address this issue included:

  • Identify which library schools are running a strand on academic librarianship and what their curriculum is
  • We work with those library schools which are trying to address this area, such as Sheffield, Strathclyde and UCL to try and identify examples of good practice of producing graduates who have the competencies we need
  • Integrate their students into ‘real life’, taking students in for a piece of work so they have experience

Professional Development option 1 – Institutional-based training

In an environment where there is little in the way of training options, ‘on the job’ training becomes the default. But is there a perception that on the job training comes without cost? While the training that happens in this environment is seen as cost neutral, sending someone on a paid-for course could be more effective.

How much does it cost for us to get someone fully skilled using on the job training? There are time costs of both the new recruit and the loss of work time for the staff member doing the training. There is also the cost of the large amount of time spent recruiting staff because we cannot get people who are anywhere near up to speed. 

One action is to gain an understanding of how much it does actually cost to train a staff member up. 

Professional Development option 2 – Mentoring

There is an issue in scholarly communication with new people continuously coming through who need to be brought up to speed. One way of addressing this could be to link people together. UKCoRR is interested in creating some kind of mentoring system. ARMA also has a mentoring network which it is looking to relaunch shortly.

The group discussed whether mentoring is something that could be brokered by an external group, creating an arrangement where someone who is new can go and spend some time with someone else who is doing the same job. However, to do this we would need a better way of connecting people.

This idea ties into the work on institution-based training and the costs associated with it. We are aware that a lot of the sharing and receiving of information is currently done on goodwill, at a real cost.

Professional Development option 3 – Community peer support events

Another way of getting people together is community and peer support, which is already part of this environment and could be very valuable. Between members of the group there are several events being held throughout the year. These range from free community events to paid for conferences. For example, Jisc is looking at running two to three community events each year. They recently trialled a webinar format to see if it is an opportunity to get online discussions going.

The group discussed whether we need more events, what the best way of supporting each other is, and what kinds of remote methods could be used. There is a need to try to document this activity systematically.

Professional Development option 4 – Courses we can run now

The group agreed that while it might be too early for us to look at presenting courses, it would be useful to know who is offering what amongst the member organisations of the group, so that we can start to build a picture of what is covered. Mapping this to the competencies would then help decision making.

For example, UKSG runs free webinars every month, which fulfils a need. Is there a topic we can put on for an hour?

UKSG is planning a course towards the end of next year – a paid, face-to-face seminar outlining the publication process, particularly in the open access environment. This could be useful to publishers as well. It explains what needs to happen in a sequence of events – why it is important to track submission and acceptance dates. It would be pitched at people who are new in the role and at senior managers who are responsible for staffing.

Professional Development option 5 – Private providers

Given the pull on resources for many in this sector, we need to consider promoting and creating accessible training for all. In that context the discussion moved to whether we were prepared to promote private training providers. This is a tricky area because there is such a range under the banner ‘private’ – from freelance trainers, to organisations whose primary activity is training, to organisations who offer training as part of a wider suite of activities. Any training provision needs to consider sustainability: it isn’t always possible to rely on the goodwill of volunteers to deliver staff development and training.

For example, UKSG as an organisation is not profit-making — it is a charity and events are run on a non-profit basis. Jisc is looking at revenue on a non-profit basis to feed into Jisc’s support for the sector. ARMA work on a cost recovery basis – ARMA events are always restricted to members. Many of the member groups engage with private providers and pay them to come along and speak for the day.

We agreed that when we look at developing the competencies framework and identify how someone can achieve these skills we should be linking to all training provision, either through a paid course, online webinar or mentoring.  The group agreed we are not excluding private providers from the discussion. We are looking to get the best provision for the sector.

The topic of our own expertise also came up. Experts working in the field already give talks at many events in work time, which is paid for by their employer – who is in effect subsidising the cost of running the training or event. Can we use our own knowledge base to share this information amongst the community? Perhaps it is not about what you pay, but what you provide to the community.

Opening up the discussion

The group talked about tapping into existing conferences held by member organisations to look specifically at this issue, ‘branded’ under the umbrella of the group. To ensure inclusion it would be good to have a webinar as part of the discussion at each of these conferences, so that people who are not there can attend and contribute. A number of suitable conferences were identified.

We also need to address other groups involved in the scholarly communication process within institutions, such as research managers, researcher developers and researchers themselves.

Next steps

  • Engaging with library schools to discuss the need for inclusion of scholarly communication in their academic library degree courses, possibly looking at examples of good practice
  • Discussion with NASIG about rolling out their scholarly communication competencies
  • Mapping scholarly communication competencies against current training provision options
  • Creating a self-assessment tool to help individuals decide if scholarly communication is for them
  • Costing out ‘on the job training’ to evaluate the impact of this on the existing team

Attendees

  • Helen Blanchett – Jisc
  • Fiona Bradley – RLUK 
  • Sarah Bull – UKSG 
  • Helen Dobson – Manchester University 
  • Anna Grigson representing UKSG
  • Danny Kingsley – Cambridge University
  • Valerie McCutcheon – representing ARMA
  • Ann Rossiter – SCONUL
  • Claire Sewell – Cambridge University
  • Nick Shepherd – representing UKCoRR

Published 27 November 2017
Written by Dr Danny Kingsley
Creative Commons License

It’s hard getting a date (of publication)

As part of Open Access Week 2017, the Office of Scholarly Communication is publishing a series of blog posts on open access and open research. In this post Maria Angelaki describes how challenging it can be to interpret publication dates featured on some publishers’ websites.

More than three weeks a year. That’s how much time we spend doing nothing but determining the publication date of the articles we process in the Open Access team.

To be clear about what we are talking about here: all we need to know for HEFCE compliance is when the final Version of Record was made available on the publisher’s website. In addition, if there is a printed version of the journal, we need to know the issue publication date for our own metadata.

Surely, it can’t be that hard.

Defining publication date

The Policy for open access in the Research Excellence Framework 2021 requires the deposit of authors’ outputs within three months of acceptance. However, the first two years of this policy have allowed deposits as late as three months from the date of publication.

It sounds simple, doesn’t it? But what does “date of publication” mean? According to HEFCE, the date of publication of a journal article is “the earliest date that the final version-of-record is made available on the publisher’s website. This generally means that the ‘early online’ date, rather than the print publication date, should be taken as the date of publication.”

When we create a record in Apollo, the University of Cambridge’s institutional repository, we input the acceptance date, the online publication date and the publication date.

We define the “online publication date” as the earliest online date the article has appeared on the publisher’s website and “publication date” as the date the article appeared in a print issue. These two dates are important since we rely on them to set the correct embargoes and assess compliance with open access requirements.
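As an illustration of how these two dates feed into compliance checking, here is a minimal sketch of the deposit rule described above. This is our own simplification, not an actual tool: the function name is invented, “three months” is approximated as 91 days, and the dates are hypothetical.

```python
from datetime import date, timedelta

THREE_MONTHS = timedelta(days=91)  # rough stand-in for "three months"

def deposit_on_time(deposit, acceptance, online_publication):
    """A deposit counts as timely if made within three months of acceptance,
    or (under the transitional arrangement) within three months of the date
    the Version of Record first appeared online."""
    return (deposit <= acceptance + THREE_MONTHS
            or deposit <= online_publication + THREE_MONTHS)

# Hypothetical article: accepted 23 December 2016, VoR online 12 January 2017
assert deposit_on_time(date(2017, 2, 1), date(2016, 12, 23), date(2017, 1, 12))
assert not deposit_on_time(date(2017, 8, 1), date(2016, 12, 23), date(2017, 1, 12))
```

Of course, if the “available online” date on the publisher’s page actually refers to an accepted manuscript rather than the Version of Record, the third argument is wrong and the check can give the wrong answer.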

The problems can be identified as:

  • Some publishers do not clearly display the “online date” and the “paper issue date”. We will see examples further on.
  • To make things more complicated, some publishers do not always specify which version of the article was published on the “online date”. It can variously mean the author’s accepted manuscript (AAM), a corrected proof, or the Version of Record (VoR), and in the latter case there are sometimes questions as to whether full citation details are included.
  • Lastly, there are cases where the article is first published in a print issue and then published online. Often print publications are only identified as the “Spring issue” or the like.

How can we comply with HEFCE’s deposit timeframes if we do not have a full publication date cited on the publisher’s website? Ideally, it would only take a minute or so for anybody depositing articles in an institutional repository to find the “correct” publication date. But these confusing cases mean the minute inevitably becomes several minutes, and when you are uploading 5,000-odd papers a year this turns into 17 whole days.
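For the curious, a back-of-envelope check of that figure looks like this. The per-paper overhead and working-day length below are assumptions of ours, not measured values; only the 5,000 papers come from the text.

```python
# Rough arithmetic behind the "17 whole days" figure
papers_per_year = 5000          # approximate deposits processed per year
extra_minutes_per_paper = 1.5   # assumed extra time spent hunting for dates
hours_per_working_day = 7.5     # assumed standard working day

extra_hours = papers_per_year * extra_minutes_per_paper / 60   # 125 hours
working_days = extra_hours / hours_per_working_day             # ~16.7 days

print(round(working_days))  # prints 17
```

That is roughly 125 hours, or a little over three working weeks – consistent with the figure quoted at the top of this post.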

Setting rules for consistency

In the face of all of this ambiguity, we have had to devise a system of ‘rules’ to ensure we are consistent. For example:

  • If a publication year is given, but no month or day, we assume that it was 1st January.
  • If a publication year and month are given but no day, we assume that it was 1st of the month.
  • If we have an online date of say, 10th May 2017 and a print issue month of May 2017, we will use the most specific date (10th May 2017) rather than assuming 1st May 2017 (though it is earlier).
  • Unless the publisher specifies that the online version is the accepted manuscript, we regard it as the final VoR, with or without citation details.
  • If we cannot find a date from any other source, we try to check when the pdf featured on the website was created.
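The first three rules above can be sketched as a small helper function. This is a minimal illustration only: the function name and interface are our own, not part of the team’s actual workflow.

```python
from datetime import date

def normalise_publication_date(year, month=None, day=None, online=None):
    """Apply the fallback rules: a missing month or day defaults to 1,
    but a fully specified online date is preferred over a vaguer print date."""
    if online is not None:
        return online                       # rule 3: use the most specific date
    return date(year, month or 1, day or 1)  # rules 1 and 2: default to the 1st

assert normalise_publication_date(2017) == date(2017, 1, 1)     # year only
assert normalise_publication_date(2017, 5) == date(2017, 5, 1)  # year and month
# An online date of 10 May 2017 beats a print issue known only as "May 2017":
assert normalise_publication_date(2017, 5, online=date(2017, 5, 10)) == date(2017, 5, 10)
```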

This last example does start to give a clue to why we have to spend so much time on the date problem.

By way of illustration, we have listed below some examples by publisher of how this affects us. This is not a deliberate attempt to name and shame, but if a publisher is missing from this list, it is not because they are clear and straightforward on this topic. We just ran out of space. To be fair though, we have also listed one publisher as an example to show how simple it is to have a clear and transparent article publication history.

Taylor & Francis – ‘published online’

Publication date of an article online

There are several ways you can read an article. If the article is open access, or if you subscribe, you can download a pdf of the article from the publisher’s website. Otherwise, you read the online version on the website. Both versions of a particular article – the pdf and the online HTML version – are considered below.

Both the pdf and the online version of the article list the article history as:
Received 14 March 2016
Accepted 23 December 2016
Published online 12 January 2017

and also cite the Volume, year of publication and issue.

But does the ‘Published online’ date refer to when the Version of Record was made available online or the first time the Accepted Manuscript was made available online? We can’t distinguish this to provide the date for HEFCE.

Publication date of the printed journal

While we know the volume, year of publication and issue number, we don’t know the exact publication date of the printed journal for our metadata records. If we drill down a bit and visit past volumes of the journal, we can see that the previous complete year (2016) features 12 issues. So we can make an educated guess that the issue number refers to the publication month (in our example it is issue 5, so it is May 2017).

However, we would be wrong. The 12 issues refer to the online publication issues and not the print issues. According to Taylor &amp; Francis’ agents customer service page, they “have a number of journals where the print publication schedule differs to the online”. They have a list of those journals available, and in our case we can see that this particular journal has 12 online issues but 4 paper issues in a year. So when did this actual article appear in print? Who knows.

Implications

Remember the 17 days a year? This is the type of activity that fills the time. Do we really need to do this time-consuming exercise? Some might suggest that we simply contact the publisher and ask, but that too is time-consuming and not always successful.

Elsevier’s Articles in Press

Elsevier’s description of Articles in Press states they are “articles that have been accepted for publication in Elsevier journals but have not yet been assigned to specific issues”. They could be any of an Accepted Manuscript, a Corrected Proof or an Uncorrected Proof. Elsevier have a page that answers questions about ‘grey areas’ and, in a section discussing whether it is permissible for Elsevier to remove an article for some reason, they state they do not remove articles that have been published, but that “…papers made available in our ‘Articles in Press’ (AiP) service do not have the same status as a formally published article…”

This means the same article could be an ‘Article in Press’ in three different stages, none of which are ‘published’.  Even when an article has moved beyond “In Press” mode and has been published in an issue we are not informed which version Elsevier refers to when the “available online” date is featured.

Let’s look at an example. Is the ‘Available online’ date of 13 December 2016 the date the article was available online as an Accepted Manuscript, a Corrected Proof or an Uncorrected Proof? This is very unclear.

So we have a disconnect. The earliest online date is not that of the final published version, as HEFCE’s requirement demands. There is no way of determining when the final published version actually appears online, so we need to wait until the article is allocated an issue and volume before we can determine the date. This could be some considerable time AFTER the work has been finalised. So open access is delayed, we risk non-compliance and we waste huge amounts of time.

Well done, Wiley

Wiley displays all the stages of an article’s publication history, making it easy to distinguish the VoR online publication date – exactly what HEFCE (and we) require.

Article published in an issue

This is an example of when an article is published online and the print issue is published too.

Article published online (awaiting a print issue date)

Wiley states the publication history clearly even when an article is published online but not yet included in a publication issue.

If you have a closer look at the screenshot, Wiley regards as “First published” the VoR online publication date (shown also on the left under Publication History) and not the Accepted Manuscript online date.

In this case, the publisher clearly states which version is referred to when the term “First Published” is used, gives the reader the full history of the article’s “life stages”, and also informs us that the article is not yet included in an issue (circled on the right).

Conclusions

If you have made it this far through the blog post, you are probably working in this area and have some experience of this issue. If you are new to the topic, hopefully the above examples have illustrated how frustrating it can sometimes be to find the correct information in order to comply not only with HEFCE’s timeframe requirements but also with other open access requirements, especially when setting embargoes.

A simple task can become an expensive exercise because we are wasting valuable working hours. We are in the business of supporting the research community in openly sharing research outputs, not in the business of deciphering information on publishers’ websites.

We need clear information in order to effectively deposit an article to our institutional repository and meet whatever requirements need to be met. It is not unreasonable to expect consistency and standards in the display of publication history and dates of articles.

Published 27 October 2017
Written by Maria Angelaki
Creative Commons License