Tag Archives: scholarly communication

Manuscript detectives – submitted, accepted or published?

In the blog post “It’s hard getting a date (of publication)”, Maria Angelaki discussed how a seemingly straightforward task may turn into a complicated and time-consuming affair for our Open Access Team. As it turns out, it isn’t the only one. The process of identifying the version of a manuscript (whether it is the submitted, accepted or published version) can also require observation and deduction skills on par with Sherlock Holmes’.

Unfortunately, it is something we need to do all the time. We need to make sure that the manuscript we're processing isn't the submitted version, as only published or accepted versions are deposited in Apollo. And we need to differentiate between published and accepted manuscripts, as many publishers – including the biggest players Elsevier, Taylor & Francis, Springer Nature and Wiley – only allow self-archiving of accepted manuscripts in institutional repositories, unless the published version has been made Open Access with a Creative Commons licence.

So it’s kind of important to get that right… 

Explaining manuscript versions

Manuscripts (of journal articles, conference papers, book chapters, etc.) come in various shapes and sizes throughout the publication lifecycle. At the outset a manuscript is prepared and submitted for publication in a journal. It then normally goes through one or more rounds of peer-review leading to more or less substantial revisions of the original text, until the editor is satisfied with the revised manuscript and formally accepts it for publication. Following this, the accepted manuscript goes through proofreading, formatting, typesetting and copy-editing by the publisher. The outcome of this is the final published version (also called the version of record). The whole process is illustrated below.

Identifying published versions

So the published version of a manuscript is the version… that is published? Yes and no, as sometimes manuscripts are published online in their accepted version. What we usually mean by published version is the final version of the manuscript, which includes the publisher's copy-editing, typesetting and copyright statement. It also typically shows citation details such as the DOI, volume and page numbers, and downloadable files will almost invariably be in PDF format. Below are two snapshots of published articles, with citation details and copyright information zoomed in. On the left is an article from the journal Applied Linguistics published by Oxford University Press, and on the right an article from the journal Cell Discovery published by Springer Nature.

 

Published versions are usually obvious to the eye and the easiest to recognise. In a way the published version of a manuscript is a bit like love: you may mistake other things for it but when you find it you just know. In order to decide if we can deposit it in our institutional repository, we need to find out whether the final version was made Open Access with a Creative Commons (CC) licence (or in rarer cases with the publisher’s own licence). This isn’t always straightforward, as we will now see.

Published Open Access with a CC licence?

When an article has been published Open Access with a CC licence, a statement usually appears at the bottom of the article on the journal website. However as we want to deposit a PDF file in the repository, we are concerned with the Open Access statement that is within the PDF document itself. Quite a few articles are said to be Open Access/CC BY on their HTML version but not on the PDF. This is problematic as it means we can’t always assume that we can go ahead with the deposit from the webpage – we need to systematically search the PDF for the Open Access statement. We also need to make sure that the CC licence is clearly mentioned, as it’s sometimes omitted even though it was chosen at the time of paying Open Access charges.

The Open Access statement appears in various places in the file depending on the publisher and journal, though usually either at the very end of the article or in the footer of the first page, as in the following examples from Elsevier (left) and Springer Nature (right).

 

A common practice among the Open Access team is to search the file for various terms including "creative", "cc", "open access", "license", "common", and quite often a combination of these. But even this isn't a foolproof method, as the search may return no results even when the terms appear in the document (for instance when the PDF's text layer doesn't extract cleanly). The most common publishers tend to put Open Access statements in consistent places, but others might put them in unusual places, such as in a footnote in the middle of a paper. That means we may have to scroll through a whole 30- or 40-page document to find them – quite a time-consuming process.
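To illustrate, part of this check can be scripted. Below is a minimal sketch (assuming the pypdf library and a hypothetical file name, neither of which is part of our actual workflow) that extracts each page's text layer and flags licence-related terms. It inherits the same weakness described above: if the text layer doesn't extract cleanly, nothing will match.

```python
# Minimal sketch: flag pages of a PDF that mention an open access licence.
# Assumes the pypdf library (pip install pypdf); the file name is hypothetical.
import re
from pypdf import PdfReader

TERMS = ["creative commons", "cc by", "open access", "license", "licence"]

def find_licence_mentions(path):
    reader = PdfReader(path)
    hits = []
    for number, page in enumerate(reader.pages, start=1):
        # extract_text() may return None or garbled text for image-only pages.
        text = (page.extract_text() or "").lower()
        # Collapse whitespace so statements broken across lines still match.
        text = re.sub(r"\s+", " ", text)
        hits.extend((number, term) for term in TERMS if term in text)
    return hits

if __name__ == "__main__":
    for page_number, term in find_licence_mentions("manuscript.pdf"):
        print(f"page {page_number}: found '{term}'")
```

Even with a script like this, a human check of the actual statement – and of the exact CC licence it names – is still needed before deposit.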

Identifying accepted versions

The accepted manuscript is the version that has gone through peer-review. The content should be the same as the final published version, but it shouldn’t include any copy-editing, typesetting or copyright marking from the publisher. The file can be either a PDF or a Word document. The most easily recognisable accepted versions are files that are essentially just plain text, without any layout features, as shown below. The majority of accepted manuscripts look like this.

However sometimes accepted manuscripts may at first glance appear to be published versions. This is because authors may be required to use publisher templates at the submission stage of their paper. But whilst looking like published versions, accepted manuscripts will not show the journal/publisher logo, citation details or copyright statement (or they might show incomplete details, e.g. a copyright statement such as © 20xx *publisher name*). Compare the published version (left) and accepted manuscript (right) of the same paper below.

 

As we can see the accepted manuscript is formatted like the published version, but doesn’t show the journal and publisher logo, the page numbers, issue/volume numbers, DOI or the copyright statement.

So when trying to establish whether a given file is the published or accepted version, looking out for the above is a fairly foolproof method.

Identifying submitted versions

This is where things get rather tricky. Because the difference between an accepted and submitted manuscript lies in the actual content of the paper, it is often impossible to tell them apart based on visual clues. There are usually two ways to find out:

  • Getting confirmation from the author
  • Going through a process of finding and comparing the submission date and acceptance date of the paper (if available), mostly relevant in the case of arXiv files

Getting confirmation from the author of the manuscript is obviously the preferable and time-saving option. Unfortunately many researchers mislabel their files when uploading them to the system, describing their accepted/published version file as submitted (the fact that they do so when submitting the paper to us may partly explain this). So rather than relying on file descriptions, it is better to have an actual statement from the author that the file is the submitted version – although in an ideal world this would never be needed, as everyone would know that only accepted and published versions should be sent to us.

A common form of submitted manuscript we receive is the arXiv file. These are files that have been deposited in arXiv, an online repository of pre-prints widely used by scientists, especially mathematicians and physicists. An example is shown below.

Clicking on the arXiv reference on the left-hand side of the document (circled) leads to the arXiv record page as shown below.

The ‘comments’ and ‘submission history’ sections may give clues as to whether the file is the submitted or accepted manuscript. In the above example the comments indicate that the manuscript was accepted for publication by the MNRAS journal (Monthly Notices of the Royal Astronomical Society). So this arXiv file is probably the accepted manuscript.

The submission history lists the date(s) on which the file (and any subsequent versions of it) was deposited in arXiv. By comparing these dates with the formal acceptance date of the manuscript, which can be found on the journal website (if the paper has been published), we can infer whether the arXiv file is the submitted or accepted version. If the manuscript hasn't been published and there is no way of comparing dates, then in the absence of any other information we assume that the arXiv file is the submitted version.
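The date comparison itself is simple enough to sketch. The example below is purely illustrative – the arXiv version dates and the acceptance date are invented – and it encodes the rule of thumb described above: a version uploaded on or after the acceptance date is probably the accepted manuscript, an earlier one the submitted version, and with no acceptance date to compare against we default to treating the file as submitted.

```python
# Illustrative sketch: infer whether an arXiv version is likely the submitted
# or accepted manuscript by comparing its upload date with the journal's
# formal acceptance date. All dates below are invented for the example.
from datetime import date

arxiv_versions = {
    "v1": date(2017, 3, 2),   # first upload to arXiv
    "v2": date(2017, 9, 15),  # revised upload
}
acceptance_date = date(2017, 8, 30)  # from the journal website, if published

def classify(upload_date, accepted_on):
    if accepted_on is None:
        # No acceptance date available: assume the file is the submitted version.
        return "submitted (assumed)"
    return "accepted (likely)" if upload_date >= accepted_on else "submitted"

for version, uploaded in arxiv_versions.items():
    print(version, uploaded.isoformat(), "->", classify(uploaded, acceptance_date))
```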

Conclusion

Distinguishing between different manuscript versions is by no means straightforward. The fact that even our experienced Open Access Team may still encounter cases where they are unsure which version they are looking at shows how confusing it can be. The process of comparing dates can be time-consuming itself, as not all publishers show acceptance dates for papers (ring a bell?).

Depositing a published (not OA) version instead of an accepted manuscript may infringe publisher copyright. Depositing a submitted version instead of an accepted manuscript may mean that research that hasn't been vetted and scrutinised becomes publicly available through our repository, where it could be mistaken for peer-reviewed work. When processing a manuscript we need to be sure about what version we are dealing with, and ideally we shouldn't need to go out of our way to find out.

Published 27 March 2018
Written by Dr Melodie Garnier
Creative Commons License

Plans for scholarly communication professional development

Well, now there is a plan. The second meeting of the Scholarly Communication Professional Development Group was held on 9 October in the Jisc offices in London. This followed on from the first meeting in June, about which there is a blog post. The attendance list is again at the end of this post.

The group has agreed we need to look at four main areas:

  • Addressing the need for inclusion of scholarly communication in academic library degree courses
  • Mapping scholarly communication competencies against training provision options
  • Creating a self assessment tool to help individuals decide if scholarly communication is for them
  • Costing out ‘on the job training’ as an option

What are the competencies in scholarly communication?

The group discussed the types of people in scholarly communication, noting that scholarly communication is not a traditional research support role either within research administration or in libraries. Working in scholarly communication requires the ability to present ideas and policies that are not always accepted or embraced by the research community.

The group agreed it would be helpful to identify what a successful scholarly communication person looks like – identifying the nature of the role, the types of skill sets and what the successful attributes are. The group has identified several examples of sets of competencies in the broad area of 'scholarly communication', such as those from NASIG and COAR.

The group agreed it would be useful to review the NASIG Competencies and see if they map to the UK situation, and to ask NASIG how they are rolling them out across the US.

The end game we are trying to get to is a suite of training products at various levels that, as a community, will make a difference to the roles we are recruiting for. We agreed it would be useful to explore how these frameworks relate to the various existing professional frameworks, such as those from CILIP, ARMA and Vitae.

The approach is to ask people: 'Do you have a skills gap?' rather than: 'Do you (or your staff) need training?'. It would be helpful, then, to develop a self-assessment tool to allow people to judge their own competencies against the NASIG or COAR set (or an adaptation of these). The plan is to map the competencies against training provision options.

Audiences

We have two audiences in terms of professional training in scholarly communication:

  1. New people coming into the profession – the initial training that occurs in library schools.
  2. Those people already in a research support environment who are taking on scholarly communication roles. 

The group also discussed scope. It would be helpful to consider how many people across the UK are affected by the need for support and training.

Another issue is qualifications versus skills – there are people working in administrative roles who have expanded their skills but don't necessarily have a qualification. Some libraries are looking at weighting past experience more highly than qualifications.

There needs to be a sense of equity if we are to introduce new requirements. While large research-intensive institutions can afford professional development, in some places there is one person who has to do the scholarly communication functions as only part of their job – they are isolated and they don't have funds for training. An option could be that if a training provision is to be 'compliant' with this group then it must include some kind of free online training.

Initial training in library schools

As was discussed the previous time the group met, there is a problem in that library schools do not seem to be preparing graduates adequately for work in scholarly communication. Even the small number of graduates who have had some teaching in this area are not necessarily ready to hit the ground running and still need further development. The group agreed the sector needs to define how we skill library graduates for this detailed and complex area.

One idea that arose in the discussion was the suggestion that we engage with library schools at their own conferences, perhaps proposing a debate on what they think they are doing to meet this need.

The next conference of the library schools' Association for Library and Information Science Education is 6-9 February 2018 in Denver. Closer to home, iConference 2018 will be 25-28 March and will be jointly hosted by the UK's University of Sheffield Information School and the iSchool at Northumbria. However, when we considered the conference options it became clear that this would not necessarily work: the focus of these conferences is academic, not practitioner-oriented or case-study based. This might point to the source of some of the challenges we see in this space.

One of the questions was: what is really different now to the way it was 10-20 years ago? We need to survey people who are one or two years out from their qualifications.

Suggestions to address this issue included:

  • Identify which library schools are running a strand on academic librarianship and what their curriculum is
  • Work with those library schools which are trying to address this area, such as Sheffield, Strathclyde and UCL, to identify examples of good practice in producing graduates who have the competencies we need
  • Integrate their students into ‘real life’, taking students in for a piece of work so they have experience

Professional Development option 1 – Institutional-based training

In an environment where there is little in the way of training options, 'on the job' training becomes the default. But is there a perception that on the job training comes without cost? While the amount of training that happens in this environment is seen as cost neutral, it could be that sending someone on a paid-for course is more effective.

How much does it cost for us to get someone fully skilled using on the job training? There are time costs of both the new recruit and the loss of work time for the staff member doing the training. There is also the cost of the large amount of time spent recruiting staff because we cannot get people who are anywhere near up to speed. 

One action is to gain an understanding of how much it does actually cost to train a staff member up. 

Professional Development option 2 – Mentoring

There is an issue in scholarly communication with new people continuously coming through who need to be brought up to speed. One way of addressing this could be by linking people together. UKCoRR are interested in creating some kind of mentoring system. ARMA also has a mentoring network which they are looking to relaunch shortly.

The group discussed whether mentoring is something that could be brokered by an external group, creating an arrangement where someone who is new can go and spend some time with someone else who is doing the same job. However, to do this we would need a better way of connecting with people.

This idea ties into the work on institutional-based training and the cost associated with it. We are aware that a lot of the sharing and receiving of information is currently done on goodwill, at a real cost.

Professional Development option 3 – Community peer support events

Another way of getting people together is community and peer support, which is already part of this environment and could be very valuable. Between members of the group there are several events being held throughout the year, ranging from free community events to paid-for conferences. For example, Jisc is looking at running two to three community events each year. They recently trialled a webinar format to see if it is an opportunity to get online discussions going.

The group discussed whether we need more events, the best way of supporting each other, and what kind of remote methods could be used. There is a need to try to document this activity systematically.

Professional Development option 4 – Courses we can run now

The group agreed that while it might be too early for us to look at presenting courses, it would be useful to have an idea of who is offering what amongst the member organisations of the group, so that we can start to build a picture of what is covered. If we were then to map this to the competencies, it would help decision making.

For example, UKSG run free webinars every month, which fulfils a need. Is there a topic we can put on for an hour?

UKSG is planning a course towards the end of next year – a paid, face-to-face seminar outlining the publication process, particularly from the open access perspective. This could be useful to publishers as well. It explains what needs to happen in a sequence of events – why it is important to track submission and acceptance dates. It would be pitched at people who are new in the role and at senior managers who are responsible for staffing.

Professional Development option 5 – Private providers

Given the pull on resources for many in this sector, we need to consider promoting and creating accessible training for all. So in that context the discussion moved to whether we were prepared to promote private training providers. This is a tricky area because there is such a range under the banner 'private' – from freelance trainers, to organisations that train as their primary activity, to organisations that offer training as part of a wider suite of activities. Any training provision needs to look at sustainability; it isn't always possible to rely on the goodwill of volunteers to deliver staff development and training.

For example, UKSG as an organisation is not profit-making — it is a charity and events are run on a non-profit basis. Jisc is looking at revenue on a non-profit basis to feed into Jisc’s support for the sector. ARMA work on a cost recovery basis – ARMA events are always restricted to members. Many of the member groups engage with private providers and pay them to come along and speak for the day.

We agreed that when we look at developing the competencies framework and identify how someone can achieve these skills we should be linking to all training provision, either through a paid course, online webinar or mentoring.  The group agreed we are not excluding private providers from the discussion. We are looking to get the best provision for the sector.

However, the topic of our own expertise came up. Experts working in the field already give talks at many events on work time, paid for by their employer – who is in effect subsidising the cost of running the training or event. Can we use our own knowledge base to share this information amongst the community? Perhaps it is not about what you pay, it is about what you provide to the community.

Opening up the discussion

The group talked about tapping into existing conferences held by member organisations of the group to look specifically at this issue, 'branded' under the umbrella of the group. To ensure inclusion it would be good to have a webinar as part of the discussion at each of these conferences, so people who are not there can attend and contribute. A number of suitable conferences were identified.

We also need to address other groups involved in the scholarly communication process within institutions, such as research managers, researcher developers and researchers themselves.

Next steps

  • Engaging with library schools to discuss the need for inclusion of scholarly communication in their academic library degree courses, possibly looking at examples of good practice
  • Discussion with NASIG about rolling out their scholarly communication competencies
  • Mapping scholarly communication competencies against current training provision options
  • Creating a self assessment tool to help individuals decide if scholarly communication is for them
  • Costing out ‘on the job training’ to evaluate the impact of this on the existing team

Attendees

  • Helen Blanchett – Jisc
  • Fiona Bradley – RLUK 
  • Sarah Bull – UKSG 
  • Helen Dobson – Manchester University 
  • Anna Grigson – representing UKSG
  • Danny Kingsley – Cambridge University
  • Valerie McCutcheon – representing ARMA
  • Ann Rossiter – SCONUL
  • Claire Sewell – Cambridge University
  • Nick Shepherd – representing UKCoRR

Published 27 November 2017
Written by Dr Danny Kingsley
Creative Commons License

Summer camp – the scholarly communication way

Growing up, I got the impression from a diet of B-grade movies that American summer camps are places where teenagers undertake a series of slapstick events in the wilderness. That may indeed be the case sometimes, but at the University of California San Diego campus recently, a group of decidedly older people bunked in together for a completely different type of summer camp.

The inaugural FORCE11 Scholarly Communications Institute (FSCI) was held in the first week of August, bringing together librarians, researchers and administrators from around the world. The event was planned as a week-long intensive summer school on improving research communication. The activities were spread all over the campus, although not, unfortunately, in the mother of all spaceships for a library.

The event hashtag was #FSCI, and the specific hashtag for the course I ran with Sarah Shreeves from the University of Miami, "Building an Open and Information Rich Institution", was #FSCIAM3. This blog post is a brief run-down of what we covered in the course.

Our course

We had a wonderful group of people, primarily from the library sector, and from around the world (although many were working in American universities).

From the delivery perspective this was an intense experience, requiring 14 hours of delivery plus documentation and follow-up each day. It was further complicated by the fact that Sarah and I met in person for the first time half an hour before delivery on the Monday.

Working within open and FAIR principles, we have made all of our resources and information available, and links to all the Google documents are included in this blog post. The shared Google Drive has links to everything. These presentations will be uploaded to the FSCI Zenodo site when it is available. In addition, the group created a Zotero page which collects together relevant links and resources as they arose in discussion.

Monday – Problem definition

Using an established process the group worked together to define the problems we were looking to address in scholarly communication:

  • OA takes time and money – and the tools are annoying.
  • We need to reduce complexity – make it easy administratively
  • It is important to recognise difference – one size does not fit all, there are cultural and country norms in publishing and prestige
  • Motivation – what are the incentives? How can we demonstrate benefit?
  • There is a need for advocacy and training of various stakeholders including within library
  • We can demonstrate the repository as a free way of publishing with impact tracking – for both the author and the institution.
  • Whose responsibility is this?

The slides from the first day (including the workings of the group) are available.

Tuesday – Stakeholder mapping

On the second day we discussed the different stakeholders in this space, both within institutions and external to them. Each table created a pile of post-it notes, which were then classified on a large grid on the wall against 'interest' versus 'influence'. We then discussed which stakeholders we needed to work with, and whether it is possible to move a stakeholder from one quadrant into another. We also discussed the value in using some stakeholders to reach others.

A second exercise we ran was ‘responding to objections’ – where we gave the group a few minutes to create objections that different stakeholders may have to aspects of scholarly communication. These were then randomised and the group had only a couple of minutes to develop an ‘elevator pitch’ to respond to that objection. The slides from day 2 incorporate the comments, objections and counter arguments.

 

Wednesday – Communication

We started the day with a 'gathering evidence' exercise: a series of questions was allocated to the tables to discuss, with a view to identifying the kind of information held in an institution that might help answer them. Examples of the questions we asked the group to consider are: "How do we better understand and communicate with the range of disciplines on campus?" (with a goal of creating advocacy materials that support the range of disciplinary needs of the institution), or "Who is doing collaborative research with others on campus and with others outside of the university? Is there interdisciplinary research?" (with a goal of creating a map of collaborations on campus).

We moved to an exercise to demonstrate the need for clear communication. People worked in pairs, and each had a pile of building bricks from which they were asked to build a shape. They then had five minutes to write instructions describing their shape. After this, the instructions were swapped and each partner tried to reproduce the other's shape from the instructions. The results were surprising – fewer than 50% of shapes were reproduced. However, looking at some of the instructions, things became clearer. Note the description 'cute kitty' in these instructions.

 

The final session on day three was a risk assessment exercise where we put up the proposal ‘that we will make all digitised older theses open access without obtaining permission from the authors’. The tables were asked to come up with potential risks that could arise from this proposal, and then asked to map these onto a grid that considered the likelihood and severity of each risk.

The group then discussed what could be done to mitigate the risks they had identified, and whether each risk could be moved within the grid as a result. Again, all discussions are captured in the slides.

Thursday – Governance

On the Thursday we considered matters of governance. Dominic Tate from Edinburgh talked the group through the management structure at his institution, and how they have managed to create a strong decision-making governance structure.

Using a system of mapping organisational structure to the decision structure, the group identified a goal they would like to achieve at their workplace and then considered which aspects of it are Strategic, Tactical and Operational. They then identified the person, people or group that would need to agree at each of these stages to achieve the end goal, and whether this was something that could be managed within the immediate organisation or would involve the wider institution. We also discussed whether policies would need to be changed or created, and the level of consultation needed. The slides describe the process.

At the end of this day we broke into two groups for an unconference. One group discussed the UK Scholarly Communication Licence, the other continued on the governance discussion by identifying stakeholders and working out how to approach them.

Friday – The future

On the last day we discussed the best way to share stories with the relevant stakeholders – what is the best way to present the information? How do you get it to the person?

We then looked to the future, first by considering the big disruptor technologies of the last 20 years. We asked people to share their work experiences from before these technologies existed, to give us an idea of how much things could change in the future with the next big disruptors. We then asked individuals to identify future issues that they will need to address at their institution, which they sorted at the table level before we did a group consolidation to identify what the issues will be.

Each group chose one of these issues/futures and, in a mini overview of the work we had done throughout the week, undertook a stakeholder assessment – who would they have to engage to make this happen? They also identified the governance structures in place, and the type of information they would want in place to make decisions about moving in this area. Some of the discussions are captured in the slides.

Assessment of the course

When developing the course we articulated what we hoped the participants would get out of the week. These included the ability to:

  • Think strategically and comprehensively about openness and their institution
  • Articulate the ‘why’ of openness for a variety of stakeholders within an institution
  • Articulate how information related to research and outputs flows through an institution and understand challenges to this flow of information
  • Understand the practicalities of delivering open access to research outputs and research data management within an institution
  • Consider the technology, expertise, and resources required to support open research

So how did we do? Well, according to the feedback at the beginning and end of the week, we certainly hit all the targets the participants identified.

The responses at the beginning of the week were:

  

And the feedback at the end of the week was:

           

Interestingly, the Governance session was the least popular session we ran, but it rated extremely highly among the areas the participants self-identified as learning about.

 

Several people went out of their way to tell Sarah and me that 'this was the best training/workshop I have ever done', which is very high praise.

On the Friday afternoon all of the participants for FSCI got back together to provide feedback about what happened in their courses. These ranged from an explanation of what people did, to participants describing what they knew, to poems. There was no expressionist dance unfortunately (perhaps next year). Sarah and I chose to describe our week in pictures.

Wrap of the week

While it was slightly disorienting spending a week in student accommodation, overall this was a valuable and rewarding experience – if an extremely intense one. Our group of just over 120 people was only one of several 'camps' happening at the same time, including electronic music and programming groups. We all converged on the dining hall at each meal, a big hodgepodge of people.

The largest and most intrusive group was the teenagers at the San Diego District Police camp. This, we discovered, is a paramilitary organisation, which went some way to explaining the line-ups at 6am and 9pm, the groups shouting their responses in unison, and the instructors wandering around with guns on their hips.

On a much more peaceful note, San Diego is where Dr Seuss lived, and looking at the vegetation and landscape it is easy to see where his inspiration originated.

    

Published 22 August 2017
Written by Dr Danny Kingsley 
Creative Commons License