
Tales of Discovery: stories inspired by Cambridge research

Five research papers and five traditional stories were combined during Cambridge Science Festival in March 2018 to make Tales of Discovery.

The session was aimed at families, to show them that there is a world of research stored on Apollo, the University’s repository, and available to the general public – and it’s all cool stuff.

It was also aimed at researchers, to get them thinking about new ways to make their research available to the general public – including uploading their research onto the Apollo repository.

At the end of each story the audience were challenged to interpret the stories and research in their own way.

Here’s what happened during the morning.

Labour Pains: Scenes of Birth and Becoming in Old Norse Legendary Literature

The research

Kate Olley’s article looks at the drama of childbirth as depicted in Old Norse legendary literature. This article made a great beginning to the session, because it looks into the power of story to give an insight into the past. Childbirth stories are fascinating and informative because they are such important moments – ‘moments of crisis’ not just for an individual but for a whole society. They show so much about a culture, from the details of everyday life, to a picture of a society’s values and structure. Also, unlike stories of great battles and adventures, they put women and everyday life at the centre of the story.

The story

I retold one of the stories from Kate’s article, ‘Hrolf and the elvish woman’ from the saga of Hrolf the Walker. An elvish woman summons Hrolf, a king who has fallen on hard times, to help her daughter, who is under a curse: the daughter has been in labour for 19 days, but cannot give birth unless she is touched by human hands.

Kate pointed out some things the story shows us: the extreme danger of childbirth in those times, and the way a birth changes everybody’s role. The woman becomes a mother, but the fortunes of Hrolf the midwife are also changed for ever.

The challenge

We talked about how people still tell childbirth stories, and they often have the same mythic resonances as old Icelandic saga. Is there a story you tell your children about when they were a baby? (or a story that your parents tell you?)

Revolutionising Computing Infrastructure for Citizen Empowerment

The research

[Image: ‘Internet dragon’]

Noa Zilberman explains that almost every aspect of our lives today is being digitally monitored: from our social network activity, through our online shopping habits, to our financial records. Can new technology enable us to choose who holds this data? Her research, based on highly technical computer engineering, addresses a social issue that Noa feels passionate about. I chose a story that was a metaphor for her research, with a hero taking on the might of a huge and greedy dragon.

The story

[Image: ‘Dragon’]

I based the story on the epic account of Beowulf fighting the dragon, which reflected Noa’s passion and how important she felt the issues raised by her research are to society in general. But, as is the way with story, more links emerged during the telling. The flickering flames of the dragon’s cave reflected the heat emitted by internet server farms. The ease with which a thief can steal gold from the hoard, and the potential harm this can do, proved highly topical. When the hero asks the blacksmith to make a shield of metal to protect him from the dragon’s breath, Noa produced her secret weapon: a programmable board, not more than six inches long, which enables data to be moved more efficiently by individual computers.

The challenge

It can be hard to visualise what ‘the internet’ really is. What might an ‘internet dragon’ look like? Can you draw one?

The provenance, date and significance of a Cook-voyage Polynesian sculpture

The research

Trisha Biers’ paper sheds light on the shifting sands of anthropological investigation. It has a particular Cambridge link: she uncovers the secrets of a wooden carving brought back from Captain Cook’s voyage to the Pacific in the 18th century. The mysterious carving – of two figures and a dog – is now the logo of the Museum of Archaeology and Anthropology.

The story

[Image: Two men and pig]

As I searched for a Polynesian story about two men and a dog, I discovered many of the same factors that Trisha highlights. Stories travel across the Pacific Ocean just as commerce, people, and artworks do, making it hard to pinpoint the source of the story.

The story I chose, about the deity/hero Maui turning his annoying brother-in-law, Irawaru, into the first dog, fits only partially – just like the many theories about the carving. Stories about Maui are known all over Polynesia: but the trickster Maui from New Zealand, where this story comes from, is different from the godlike Maui of Tahiti, the carving’s likely provenance. Like the carving, the stories of Maui have travelled to the Western world, in films like Moana, as well as to Cambridge. Stories, which can’t be carbon-dated like the carving, shift and change just like the dog Irawaru.

The challenge

Not knowing the true story can set our imaginations free! I asked the audience to draw or write their own story about two men and a dog.

Treated Incidence of Psychotic Disorders in the Multinational EU-GEI Study

The research

Hannah Jongsma’s research looks at the risk of developing a psychotic disorder, which for a long time was thought to be due to genetics. She finds that it is influenced by many factors – both genetic and environmental – for example, the risk is higher among young men and ethnic minorities.

The story

I paired the research with a sinister little tale from Grimm, Bearskin, about an outsider who is rejected by society because of his wild appearance – he wears the unwashed skin of a ferocious bear and lives like a wild man. It touched on the issues of Hannah’s research at many points. The hero is a rootless and penniless young man far from home – a situation identified as high-risk in Hannah’s study. His encounter with a wild bear with whom he swaps coats is the stuff of hallucination. As with psychosis, in Hannah’s view, the problem is partly one of how society views the outsider. And, as in Hannah’s study, being accepted by a family and given emotional support is a protection against psychosis. The remarkable thing about this wonder-tale, so far removed from reality, was how it opened up a wide-ranging conversation about the research. Its far-fetched images helped us explore the issues of real-life research. Hannah was surprised that her research could be re-envisioned and presented in such a different way.

The challenge

Using the ideas from Hannah’s paper, suggest an alternative ending to the story.

Determining the Molecular Pathology of Inherited Retinal Disease

The research

[Image: ‘DNA helix’]

Crina Samarghitean shows how bioinformatics tools help researchers find new genes and help doctors reach a diagnosis in difficult disorders. Her article looks at better treatment and quality of life for patients with primary immunodeficiencies, and focuses on inherited retinal disease, which is a common cause of visual impairment.

The story

The story of the telescope, the carpet and the lemon turned out to be a celebration of the possibilities of medical research with bioinformatics. Three brothers search for the perfect gift to win the heart of the princess, and find that these three magical objects allow them to save her life. This piece of research was the first one I tried to find a story for, and it seemed to be the hardest to translate into non-specialist language, until Crina said ‘I see the research as a quest for treasure: someone who has looked everywhere for a cure for their illness comes to this data-bank, and it’s like a treasure chest with the answer to their problem.’

The challenge

Crina is already committed to the idea that the arts can be used to interpret science. She has made artworks inspired by the gene sequences she has been working on. The challenge was to make pictures inspired by Crina’s paintings and models.

Published 10 April 2018
Written by Marion Leeper
Creative Commons License

Perspectives on the Open future

‘More cash, more clarity and don’t make this compulsory’ is the take-home message from a recent workshop held with Cambridge researchers on the question of Open Research.

The recent session, called “An Open Future? How Cambridge is Responding to Challenges in the Open Landscape”, was held with a group of new Cambridge lecturers at a seminar organized by Pathways in Higher Education Practice. This event offered us an opportunity to go beyond the usual information we provide in our training workshops*.

This session provided a unique opportunity to speak with researchers from various disciplines, further along in their careers, who already had a basic knowledge of Open Access and Research Data sharing requirements. This meant we were able to have an informed discussion rather than deliver a lecture, and we wanted to hear what they thought about Open Research.

(* The OSC is often asked to provide training on all things Open Research. Generally our training is focused on PhD students and early career researchers. We create PowerPoint slides that explain the benefits of Open Access, the necessity of a good Data Management Plan, or how to promote your research through social media (all of which are freely available here). We try to make these sessions as interactive as possible.)

Quiz Time

The session started by laying out how the current academic publishing model works. Basically, researchers submit their latest findings to a journal for FREE, peer reviewers review the paper for FREE, editors oversee the journal for FREE and the publishers format the article then turn around and charge libraries exorbitant subscription fees (yep, that about sums it up). This got a good laugh from the audience.

So our first activity was a short quiz. We were interested to know if researchers knew how much things cost. We asked them a set of questions:

  1. How much do you think we pay in subscription costs every year?
  2. What’s the average APC?
  3. How many papers with at least one Cambridge author were made gold OA in 2016?

There was a lot of debate among the groups. Some of the answers were wildly overestimated (one researcher suggested £50 million a year for subscriptions), others were quite low.

What are people sharing?

For our next activity, we wanted to know what they were already sharing and what tools they were using to share. We presented each table with a Venn diagram – with circles for ‘Publication’, ‘Data’ and ‘Other’ – and a bunch of post-its.

Unsurprisingly, the ‘Publication’ circle had the most post-its. Answers included tools such as arXiv, ResearchGate and Academia.edu, as well as personal websites and Facebook. There were also mentions of Cambridge Open Access and the Departmental Libraries. Interestingly, a few noted that they made their work available to researchers through personal contact, such as email requests.

There were a few post-its in the ‘Data’ circle describing what tools they used to deposit, such as university repositories and Zenodo.

Post-its in the ‘Other’ category mostly mentioned sharing code and software through GitHub, although one lecturer noted free workshops they offered. There was only one post-it that made it into the centre, and that was for “webpage”. For the future, it would be interesting to know which discipline the researchers were from when they were posting, because this theme came up quite a few times during the discussions.

When are people prepared to share?

The second activity involved lots of sticky dots and large pieces of paper. The participants were asked if they were comfortable sharing different aspects of their research at different stages in the research lifecycle. Each sheet was laid out as a grid of research outputs set against stages of the research lifecycle.

All of the researchers were asked to stick dots in the grid. The results were interesting. Most researchers were happy to share the published version of their paper, but a large number were uncomfortable sharing their pre-print or submitted version. There were only two dots in the “yes” square for sharing pre-prints. During the discussion it became apparent that this was probably down to the culture of the discipline: one physics researcher said sharing pre-prints was part of the process, whereas one of the lecturers from English disliked having more than one version of her paper available to read. Book chapters had similar results.

Data and Data Management Plans were all over the place. There were quite a few dots in the ‘Not sure’ squares. Most were happy to share data at the time of publication or at the end of the project. For the Data Management Plans it was evenly split between ‘yes’ to sharing at the end of the project and ‘not sure’. No one wanted to share their DMP at the start of the project. There was some confusion among researchers (mostly from the humanities) who felt they didn’t have any data and therefore had nothing to share.

The majority of the researchers were unenthusiastic about sharing their Grant Applications or Grey literature at any stage. For Grant Applications the overall feeling was that if the grant was successful then researchers didn’t want to share their methodology. If the grant was unsuccessful, they were reluctant to share their failures or they planned to submit to another granting agency. Most lecturers in the room agreed that they were fine sharing an abstract of their grant awards (which many funders post on their website).

As for grey literature, which we defined as working papers or opinion papers, no one wanted to share anything that could be considered unfinished or not well thought out. One member of the law faculty said that if they had produced any grey literature worth sharing, then they would publish it in a journal. Moreover, it could be detrimental to their career if they shared anything that wasn’t well researched and presented.

More money please

To finish up the session, we asked researchers what more the University could be doing to promote Open Research. Not surprisingly, most people were resistant to any University mandate telling them what to do. In addition, they were strongly against any Open Research requirements being tied in with HR practices like promotions. The researchers did, however, support discipline-specific requirements for Open Research.

Clearer instructions from the University and from funders about what is required of researchers were also desired. Having a myriad of policies is quite confusing and burdensome for researchers who already feel pressured to publish. In the end, most said that if the University would pay, then they would be happy to share their published work.

Published 4 April 2018
Written by Katie Hughes
Creative Commons License

Manuscript detectives – submitted, accepted or published?

In the blog post “It’s hard getting a date (of publication)”, Maria Angelaki discussed how a seemingly straightforward task may turn into a complicated and time-consuming affair for our Open Access Team. As it turns out, it isn’t the only one. The process of identifying the version of a manuscript (whether it is the submitted, accepted or published version) can also require observation and deduction skills on par with Sherlock Holmes’.

Unfortunately, it is something we need to do all the time. We need to make sure that the manuscript we’re processing isn’t the submitted version, as only published or accepted versions are deposited in Apollo. And we need to differentiate between published and accepted manuscripts, as many publishers – including the biggest players Elsevier, Taylor & Francis, Springer Nature and Wiley – only allow self-archiving of accepted manuscripts in institutional repositories, unless the published version has been made Open Access with a Creative Commons licence.

So it’s kind of important to get that right… 

Explaining manuscript versions

Manuscripts (of journal articles, conference papers, book chapters, etc.) come in various shapes and sizes throughout the publication lifecycle. At the outset, a manuscript is prepared and submitted for publication in a journal. It then normally goes through one or more rounds of peer review, leading to more or less substantial revisions of the original text, until the editor is satisfied with the revised manuscript and formally accepts it for publication. Following this, the accepted manuscript goes through proofreading, formatting, typesetting and copy-editing by the publisher. The final published version (also called the version of record) is the outcome of this process. The whole process is illustrated below.
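To make the distinction concrete, the three versions and the deposit rules mentioned above can be expressed as a small decision procedure. Here is a minimal sketch in Python, under the assumptions described in this post (submitted versions are never deposited, accepted manuscripts normally can be, and published versions only if they have been made Open Access with a Creative Commons licence); the names are hypothetical and not part of our actual workflow.

    from enum import Enum

    class Version(Enum):
        SUBMITTED = "submitted"   # as first sent to the journal, before peer review
        ACCEPTED = "accepted"     # after peer review, before publisher copy-editing
        PUBLISHED = "published"   # the version of record, with publisher formatting

    def can_deposit(version: Version, published_oa_with_cc_licence: bool = False) -> bool:
        """Rough rule of thumb for whether a version can go into the repository."""
        if version is Version.SUBMITTED:
            return False                       # submitted versions are never deposited
        if version is Version.ACCEPTED:
            return True                        # accepted manuscripts can normally be self-archived
        return published_oa_with_cc_licence    # published versions need an OA/CC licence

    print(can_deposit(Version.PUBLISHED))      # False: version of record, not made OA
    print(can_deposit(Version.ACCEPTED))       # True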

Identifying published versions

So the published version of a manuscript is the version… that is published? Yes and no, as sometimes manuscripts are published online in their accepted version. What we usually mean by published version is the final version of the manuscript which includes the publisher’s copy-editing, typesetting and copyright statement. It also typically shows citation details such as the DOI, volume and page numbers, and downloadable files will almost invariably be in a PDF format. Below are two snapshots of published articles, with citation details and copyright information zoomed in. On the left is an article from the journal Applied Linguistics published by Oxford University Press and on the right an article from the journal Cell Discovery published by Springer Nature (click to enlarge any of the images).

 

Published versions are usually obvious to the eye and the easiest to recognise. In a way the published version of a manuscript is a bit like love: you may mistake other things for it but when you find it you just know. In order to decide if we can deposit it in our institutional repository, we need to find out whether the final version was made Open Access with a Creative Commons (CC) licence (or in rarer cases with the publisher’s own licence). This isn’t always straightforward, as we will now see.

Published Open Access with a CC licence?

When an article has been published Open Access with a CC licence, a statement usually appears at the bottom of the article on the journal website. However as we want to deposit a PDF file in the repository, we are concerned with the Open Access statement that is within the PDF document itself. Quite a few articles are said to be Open Access/CC BY on their HTML version but not on the PDF. This is problematic as it means we can’t always assume that we can go ahead with the deposit from the webpage – we need to systematically search the PDF for the Open Access statement. We also need to make sure that the CC licence is clearly mentioned, as it’s sometimes omitted even though it was chosen at the time of paying Open Access charges.

The Open Access statement will appear at various places on the file depending on the publisher and journal, though usually either at the very end of the article or in the footer of the first page as in the following examples from Elsevier (left) and Springer Nature (right).

 

A common practice among the Open Access team is to search the file for various terms including “creative”, “cc”, “open access”, “license”, “common” and quite often a combination of these. But even this isn’t a foolproof method as the search may retrieve no result despite the search terms appearing within the document. The most common publishers tend to put Open Access statements in consistent places, but others might put them in unusual places such as in a footnote in the middle of a paper. That means we may have to scroll through a whole 30- or 40-page document to find them – quite a time-consuming process.
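As a rough illustration of that search, the text of a PDF can be extracted and scanned for the usual licence wording programmatically. The sketch below uses the third-party pypdf library (one of several possible tools, not necessarily what the team uses); the file name and the list of terms are only examples, and, as noted above, an empty result does not prove the statement is absent, since text extraction is imperfect.

    from pypdf import PdfReader   # third-party library: pip install pypdf

    # Terms typically searched for, as mentioned above.
    TERMS = ["creative", "cc by", "open access", "license", "licence", "common"]

    def find_licence_terms(pdf_path: str) -> list[str]:
        """Return which of the search terms appear in the PDF's extracted text."""
        reader = PdfReader(pdf_path)
        text = " ".join((page.extract_text() or "") for page in reader.pages).lower()
        return [term for term in TERMS if term in text]

    print(find_licence_terms("manuscript.pdf"))   # e.g. ['open access', 'licence']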

Identifying accepted versions

The accepted manuscript is the version that has gone through peer-review. The content should be the same as the final published version, but it shouldn’t include any copy-editing, typesetting or copyright marking from the publisher. The file can be either a PDF or a Word document. The most easily recognisable accepted versions are files that are essentially just plain text, without any layout features, as shown below. The majority of accepted manuscripts look like this.

However sometimes accepted manuscripts may at first glance appear to be published versions. This is because authors may be required to use publisher templates at the submission stage of their paper. But whilst looking like published versions, accepted manuscripts will not show the journal/publisher logo, citation details or copyright statement (or they might show incomplete details, e.g. a copyright statement such as © 20xx *publisher name*). Compare the published version (left) and accepted manuscript (right) of the same paper below.

 

As we can see the accepted manuscript is formatted like the published version, but doesn’t show the journal and publisher logo, the page numbers, issue/volume numbers, DOI or the copyright statement.

So when trying to establish whether a given file is the published or accepted version, looking out for the above is a fairly foolproof method.
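For illustration only, a few of these visual clues can also be checked automatically once the text of the first page has been extracted (for example with pypdf, as in the earlier sketch). The heuristic below is hypothetical rather than a description of our actual workflow: it looks for a DOI, a copyright line and volume or page details, features an accepted manuscript typically lacks or shows only in incomplete form.

    import re

    def looks_like_published_version(first_page_text: str) -> bool:
        """Heuristic sketch: does the first page carry typical publisher markings?"""
        text = first_page_text.lower()
        has_doi = bool(re.search(r"\b10\.\d{4,9}/\S+", text))                   # a DOI
        has_copyright = "©" in first_page_text or "copyright" in text           # copyright statement
        has_citation = bool(re.search(r"\b(vol\.|volume|pp\.)\s*\d+", text))    # volume/page details
        # Two or more of these strongly suggests the published version.
        return sum([has_doi, has_copyright, has_citation]) >= 2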

Identifying submitted versions

This is where things get rather tricky. Because the difference between an accepted and submitted manuscript lies in the actual content of the paper, it is often impossible to tell them apart based on visual clues. There are usually two ways to find out:

  • Getting confirmation from the author
  • Going through a process of finding and comparing the submission date and acceptance date of the paper (if available), mostly relevant in the case of arXiv files

Getting confirmation from the author of the manuscript is obviously the preferable and time-saving option. Unfortunately, many researchers mislabel their files when uploading them to the system, describing their accepted/published version file as submitted (the fact that they do so when submitting the paper to us may partly explain this). So rather than relying on file descriptions, it is better to have an actual statement from the author that the file is the submitted version – although in an ideal world this would never happen, as everyone would know that only accepted and published versions should be sent to us.

A common form of submitted manuscript we receive is the arXiv file. These are files that have been deposited in arXiv, an online repository of pre-prints that is widely used by scientists, especially mathematicians and physicists. An example is shown below.

Clicking on the arXiv reference on the left-hand side of the document (circled) leads to the arXiv record page as shown below.

The ‘comments’ and ‘submission history’ sections may give clues as to whether the file is the submitted or accepted manuscript. In the above example the comments indicate that the manuscript was accepted for publication by the MNRAS journal (Monthly Notices of the Royal Astronomical Society). So this arXiv file is probably the accepted manuscript.

The submission history lists the date(s) on which the file (and possible subsequent versions of it) was/were deposited in arXiv. By comparing these dates with the formal acceptance date of the manuscript which can be found on the journal website (if published), we can infer whether the arXiv file is the submitted or accepted version. If the manuscript hasn’t been published and there is no way of comparing dates, in the absence of any other information, we assume that the arXiv file is the submitted version.
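This date comparison can also be scripted against arXiv’s public API, which returns an Atom feed containing the date of the first submission and the date of the latest version. The sketch below is only an illustration (the identifier is a placeholder); the dates it returns would then be set against the acceptance date shown on the journal website.

    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def arxiv_dates(arxiv_id: str) -> tuple[str, str]:
        """Return (first submitted, last updated) dates for an arXiv identifier."""
        url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
        with urllib.request.urlopen(url) as response:
            entry = ET.parse(response).getroot().find(ATOM + "entry")
        first = entry.find(ATOM + "published").text    # date the first version was deposited
        latest = entry.find(ATOM + "updated").text     # date of the most recent version
        return first, latest

    # Placeholder identifier; compare the returned dates with the journal's acceptance date.
    print(arxiv_dates("1501.00001"))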

Conclusion

Distinguishing between different manuscript versions is by no means straightforward. The fact that even our experienced Open Access Team may still encounter cases where they are unsure which version they are looking at shows how confusing it can be. The process of comparing dates can be time-consuming itself, as not all publishers show acceptance dates for papers (ring a bell?).

Depositing a published (not OA) version instead of an accepted manuscript may infringe publisher copyright. Depositing a submitted version instead of an accepted manuscript may mean that research that hasn’t been vetted and scrutinised becomes publicly available through our repository and could be mistaken for peer-reviewed work. When processing a manuscript we need to be sure about what version we are dealing with, and ideally we shouldn’t need to go out of our way to find out.

Published 27 March 2018
Written by Dr Melodie Garnier
Creative Commons License