The case for Open Research: the authorship problem

This is the second in a blog series about why we need to move towards Open Research. The first post about the mis-measurement problem considered issues with assessment. We now turn our attention to problems with authorship. Note that as before this is a topic of research in itself – and there is a rich vein of literature to be mined here for the interested observer.

Hyperauthorship

In May last year a high energy physics paper was published with over 5,000 authors. Of the article’s 33 pages, the research itself occupied nine, with the remainder devoted to listing the authors. This paper caused something of a storm of protest about ‘hyperauthorship’ (a term coined in 2001 by Blaise Cronin).

Nature published a news story on it, which was followed a week later by similar stories decrying the problem. The Independent published a story with the angle that many people are just coasting along without contributing. The Conversation’s take on the story looked at the challenge of effectively rewarding researchers. The Times Higher Education was a bit slower off the mark, publishing a story in August questioning whether mass authorship was destroying the credibility of papers.

This paper was featured in a keynote talk given at this year’s FORCE2016 conference. Associate Professor Cassidy Sugimoto from the School of Informatics and Computing, Indiana University Bloomington spoke about ‘Structural Disruptions in the Reward System of Science’ (video here). She noted that authorship is the coin of the realm – the pivot point of the whole scientific system – and that this has resulted in growth in the number of authors listed on papers.

Sugimoto asked: What does ‘authorship’ mean when there are more authors than words in a document? This type of mass authorship raises concerns about fraud and attribution. Who is responsible if something goes wrong?

The authorship ‘proxy for credit’ problem

Of course not all of those 5,000 people actually contributed to the writing of the article – the activity we would normally associate with the word ‘authorship’. Scientific authorship does not follow the logic of literary authorship because of the nature of what is being written about.

In 1998 Biagioli (who has literally written the book on Scientific Authorship or at least edited it) in a paper called ‘The Instability of Authorship: Credit and Responsibility in Contemporary Biomedicine’ said that “the kind of credit held by a scientific author cannot be exchanged for money because nature (or claims about it) cannot be a form of private property, but belongs in the public domain”.

Facts cannot be copyrighted. The inability to write for direct financial remuneration in academia has implications for responsibility (addressed further down), but first let’s look at the issue of academic credit.

When we say ‘author’, what do we mean in this context? Often people are named as ‘authors’ on a paper because their inclusion will help get the paper accepted, or as a token of thanks for providing the grant funding for the work. These practices are referred to as ‘gift authorship’, where co-authorship is awarded to a person who has not contributed significantly to the study.

In an attempt to stop some of the more questionable practices above, the International Committee of Medical Journal Editors (ICMJE) has defined what it means to be an author, stating that authorship should be based on:

  • a substantial contribution
  • drafting the work
  • giving final approval and
  • agreeing to be accountable for the integrity of the work.

The problem, as we keep seeing, is that authorship on a publication is the only thing that counts for reward. This means that ‘authorship’ is used as a proxy for crediting people’s contribution to the study.

Identifying contributions

Listing all of the people who had something to do with a research project as ‘authors’ on the final publication fails to credit the different kinds of labour involved in the research. In an attempt to address this, PLOS asks that the contribution of each person named on a paper be specified on the article, with its guidelines suggesting categories such as Data Curation, Methodology, Software, Formal Analysis and Supervision (amongst many).
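To make this concrete, a contributorship statement of this kind can be thought of as structured data rather than a flat author list. Here is a minimal sketch of that idea in Python; the names and role assignments are invented for illustration, and only a handful of the suggested categories are used.

    # A hypothetical contributorship record using a few of the suggested
    # role categories in place of a flat author list. The names and role
    # assignments here are invented for illustration only.
    contributions = {
        "A. Researcher": ["Conceptualization", "Methodology", "Writing"],
        "B. Analyst": ["Formal Analysis", "Software"],
        "C. Curator": ["Data Curation"],
        "D. Supervisor": ["Supervision", "Funding Acquisition"],
    }

    # Credit can then be queried per task rather than per 'author':
    writers = [name for name, roles in contributions.items() if "Writing" in roles]
    print(writers)  # -> ['A. Researcher']

The point of such a structure is that each kind of labour becomes individually creditable, rather than everything collapsing into the single word ‘author’.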

Sugimoto has conducted research into what these statements reveal about the division of scientific labour. In an analysis of PLOS data on contributorship, her team showed that in most disciplines the labour was distributed. This means that often the person doing the experiment is not the person writing up the work. (I should note that I was rather taken aback by this when it arose in interviews I conducted for my PhD.)

It is not particularly surprising that in the Arts, Humanities and Social Sciences the listed ‘author’ is most often the person who wrote the paper. However, in Clinical Medicine, Biomedicine or Biology very few authors are associated with the task of writing. (As an aside, the analysis found women are disproportionately likely to be doing the experimentation, while men are more likely to be authoring, conceiving the experimentation or obtaining resources.)

So, would it not be better if, rather than placing all the emphasis on authorship of journal articles in high impact journals, we were able to reward people for their different contributions to the research?

And while everyone takes credit, not all people take responsibility.

Authorship – taking responsibility

It is not just the inability to copyright ‘facts of nature’ that makes copyright unusual in academia. The academic reward system works on the ‘academic gift principle’ – academics provide the writing, the editing and the peer review for free and do not expect payment. The ‘reward’ is academic esteem.

This arrangement can seem very odd to an outsider who is used to the idea of work for hire. But there are broader implications than what is perceived to be ‘fair’ – and these relate to accountability. It is much more difficult to sue a researcher for making incorrect statements than it is to sue a person who writes for money (like a journalist).

Let us take a short meander into the world of academic fraud. Possibly the biggest, and certainly a highly contentious, case was that of Andrew Wakefield and the discredited (and retracted) claim that the MMR vaccine was associated with autism in children. This has been discussed at great length elsewhere – the latest study debunking the claim was published last year. Partly because of the way science is credited and copyright is handled, there were minimal repercussions for Wakefield. He is barred from practising medicine in the UK, but enjoys a career on the talkback circuit in the US. Recently a film about the MMR claims, directed by Wakefield, was briefly shown at the Tribeca film festival before protests saw it removed from the programme.

Another high profile case is that of Diederik Stapel, a Dutch social psychologist who fabricated his data wholesale over many years. Despite several doctoral students’ work being based on this data, and over 70 articles having to be retracted, no charges were laid. The only consequence he faced was being stripped of his professorship.

Sometimes the consequences of fraud are tragic. A Japanese stem cell researcher, Haruko Obokata, who fabricated her results, was stripped of her PhD. No criminal charges were laid, but her supervisor committed suicide and the funding for the centre she was working in was cut. The work had been published in Nature, which then retracted it and wrote editorials about the situation.

The question of scientific accountability is so urgent that there was a call last year, in this paper, to criminalise scientific misconduct. Indeed things do seem to be changing, slowly, and there have been some high profile cases where scientific fraud has resulted in criminal charges being laid. A former University of Queensland academic is currently facing fraud-related charges over his fabricated results from a study into Parkinson’s disease and multiple sclerosis. This time last year, Dong-Pyou Han, a former biomedical scientist at Iowa State University in Ames, was sentenced to 57 months for fabricating and falsifying data in HIV vaccine trials. Han was also fined US$7.2 million. In both cases, though, the issue is the misuse of grant funding rather than the publication of false results.

The combination of great ‘reward’ from publication in high profile journals and little repercussion (other than having that ‘esteem’ taken away) has proven to be too great a temptation for some.

Conclusion

The need to publish in high impact journals has caused serious authorship issues, resulting in huge numbers of authors on some papers because authorship is the only way to allocate credit. And there is very little in the way we reward researchers that allows us to hold them responsible when something goes wrong – in some cases resulting in serious fraud.

The next instalment in this series will look at ‘reproducibility, retractions and retrospective hypotheses’.

Published 12 July 2016
Written by Dr Danny Kingsley
Creative Commons License

What is ‘research impact’ in an interconnected world?

Perhaps we should start this discussion with a definition of ‘impact’. The term is used by many different groups for different purposes and, much to the chagrin of many researchers, it is increasingly a factor in the Higher Education Funding Council for England’s (HEFCE) Research Excellence Framework. HEFCE defined impact as:

‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’.

So we are talking about research that effects change beyond the ivory tower. What follows is a discussion about strengthening the chances of increasing the impact of research.

Is publishing communicating research?

Publishing a paper is not a good way of communicating work. There is some evidence that much published work is not read by anyone other than the reviewers. During an investigation of claims that huge numbers of papers were never cited, Dahlia Remler found that:

  • Medicine – 12% of articles are never cited
  • Humanities – 82% of articles are never cited (note, however, that the most prestigious humanities research is published in books, and many books are rarely cited too)
  • Natural Sciences – 27% of articles are never cited
  • Social Sciences – 32% of articles are never cited

Hirsch’s 2005 paper, ‘An index to quantify an individual’s scientific research output’, proposed the h-index, defined as the largest number h such that a researcher has h papers each cited at least h times. So an h-index of 5 means the author has at least 5 papers with at least 5 citations each. Hirsch suggested this as a way to characterise the scientific output of researchers, noting that after 20 years of scientific activity an h-index of 20 marks a ‘successful scientist’. When you think about it, 20 citing papers does not represent many people who found the work useful. And that ignores those people who are not ‘successful’ scientists but who are, regardless, continuing to publish.
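To make the definition concrete, here is a minimal sketch of the calculation in Python (the citation counts are invented for illustration):

    def h_index(citations):
        """Largest h such that at least h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank  # this paper still supports an h of this size
            else:
                break
        return h

    # Seven papers, five of which have at least 5 citations each:
    print(h_index([10, 8, 6, 5, 5, 2, 1]))  # -> 5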

Making the work open access is not necessarily enough

Open access is the term used for making the contents of research papers publicly available – either by publishing them in an open access journal or by placing a copy of the work in a subject or institutional repository. There is more information about open access here.

I am a passionate supporter of open access. It breaks down cost barriers to people around the world, allowing a much greater exposure of publicly funded research. There is also considerable evidence showing that making work open access increases citations.

But is making the work open access enough? Is a 9.5MB PDF downloadable onto a telephone, or over a dial-up connection? If the download fails at 90% you get nothing. Some publishing endeavours have recognised this as an issue, such as the Journal of Humanitarian Engineering (JHE), which won the Australian Open Access Support Group‘s 2013 Open Access Champion award for its approach to accessibility.

Language issues

The primary issue, however, is understandability. Scientific and academic papers have become increasingly impenetrable as time has progressed. It is hard to believe now that at the turn of the last century scientific articles had the same readability as the New York Times.

‘This bad writing is highly educated’ is a killer sentence from Michael Billig’s well researched and written book ‘Learn to Write Badly: How to Succeed in the Social Sciences‘. The phenomenon is not restricted to the social sciences: specialisation and the need to pull together with other members of one’s ‘tribe‘ mean that academics increasingly write in jargon and specialised language that bears little resemblance to the vernacular.

There are increasing arguments for communicating science to the public being part of formal training. In a previous role I was involved in such a program through the Australian National Centre for the Public Awareness of Science. Certainly the opportunities for PhD students to share their work more openly have never been more plentiful. There are many three minute thesis competitions around the world. Earlier this year the British Library held a #ShareMyThesis competition where entrants were first asked to tweet why their PhD research is/was important using the hashtag #ShareMyThesis. The eight shortlisted entrants were then asked to write a short article (up to 600 words) elaborating on their tweet and explaining, in an engaging and jargon-free way, why their PhD research is/was important.

Explaining work in understandable language is not ‘dumbing it down’. It is simply translating it into a different language. And students are not restricted to the written word. In November the eighth winner of the annual ‘Dance your PhD‘ competition, sponsored by Science, Highwire Press and the AAAS, will be announced.

Other benefits

There is a flow-on effect from communicating research in understandable language. In September, the Times Higher Education published an article ‘Top tips for finding a permanent academic job‘, whose advice can be summarised as ‘communicate more’.

The Thinkable.org group’s aim is to widen the reach and impact of research projects using short videos (three minutes or less). The goal of the video is to engage a wide audience with the research. The Thinkable Open Innovation Award is a research grant that is open to researchers in any field around the world and awarded openly, by allowing Thinkable researchers and members to vote on their favourite idea. The winner receives $5000 to help fund their research. This is the specific antithesis of the usual research grant process, where grants “are either restricted by geography or field, and selected via hidden panels behind closed doors”.

But the benefit is more than the prize money. This entry from a young University of Manchester biomedical PhD student did not win, but thousands of people engaged with her work in just a few weeks of voting.

Right. Got the message. So what do I need to do?

Researcher Mike Taylor pulled together a list of 20 things a researcher needs to do when they publish a paper. On top of putting a copy of the paper in an institutional or subject repository, suggestions include using general social media platforms such as Twitter and blogs, and uploading to academic platforms.

The 101 Innovations in Scholarly Communication research project, run from the University of Utrecht, is attempting to determine scholarly use of communication tools. Through a worldwide survey of researchers, it is analysing the tools researchers use in the different phases of the research lifecycle: Discovery, Analysis, Writing, Publication, Outreach and Assessment. Cambridge scholars can use a dedicated link to the survey.

There is a plethora of scholarly peer networks, which all work in slightly different ways and have slightly different foci. You can display your research in your Google Scholar or CarbonMade profile. You can collate the research you are finding in Mendeley or Zotero. You can also create an environment for academic discourse or job searching with Academia.edu, ResearchGate and LinkedIn. Other systems include Publons, a tool for registering peer reviewing activity.

Publishing platforms include blogging (as evidenced here), Slideshare, Twitter, figshare and Buzzfeed. Remember, this is not about broadcasting. Successful communicators interact.

Managing an online presence

Kelli Marshall from DePaul University asks ‘how might academics—particularly those without tenure, published books, or established freelance gigs—avoid having their digital identities taken over by the negative or the uncharacteristic?’

She notes that as an academic or would-be academic, you need to take control of your public persona and then take steps to build and maintain it. If you do not have a clear online presence, you are allowing Google, Yahoo, and Bing to create your identity for you. There is a risk that the strongest ‘voices’ will be ones from websites such as Rate My Professors.

Digital footprint

Many researchers belong to an institution, a discipline and a profession. If these change, the online identities associated with them will also change. What is your long term strategy? One thing to consider is obtaining a persistent unique identifier such as an ORCID iD, which is linked to you and not your institution.
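As an aside, an ORCID iD is not just a random string: ORCID documents that the final character is a check digit computed over the preceding fifteen digits using the ISO 7064 MOD 11-2 scheme. Here is a minimal sketch of that validation in Python, using the sample iD from ORCID’s own documentation:

    def orcid_check_digit(base_digits):
        """ISO 7064 MOD 11-2 check digit for the first 15 digits of an
        ORCID iD (hyphens removed), as documented by ORCID."""
        total = 0
        for d in base_digits:
            total = (total + int(d)) * 2
        result = (12 - total % 11) % 11
        return "X" if result == 10 else str(result)

    # 0000-0002-1825-0097 is the sample iD used in ORCID's documentation.
    orcid = "0000-0002-1825-0097".replace("-", "")
    print(orcid_check_digit(orcid[:15]) == orcid[15])  # -> True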

When you leave an institution, you not only lose access to the subscriptions the library has paid for, you also lose your email address. This can be a serious challenge when your presence on academic social media sites like Academia.edu and ResearchGate is linked to that email address. What about content in a specific institutional repository? Brian Kelly discussed these issues at a recent conference.

We seem to have drifted a long way from impact?

The thing is that if it can be measured, it will be. And digital activity is fairly easily measured; there are systems in place now to look at this kind of activity. Altmetrics.org moves beyond the traditional internal academic measures of peer review, the Journal Impact Factor (JIF) and the h-index. There are many issues with the JIF, not least that it measures the vessel, not the contents. For these reasons there are now arguments, such as the San Francisco Declaration on Research Assessment (DORA), calling for the JIF to be scrapped as a means of assessing a researcher’s performance. Altmetrics.org measures the article itself, not where it is published, and it measures the activity around articles beyond academic borders – where the impact is actually occurring.

So if you are serious about being a successful academic who wants to have high impact, managing your online presence is indeed a necessary ongoing commitment.

NOTE: On 26 September, Dr Danny Kingsley spoke on this topic at the Cambridge University Alumni Festival. The slides are available on Slideshare. The Twitter discussion is here.

Published 25 September 2015
Written by Dr Danny Kingsley
Creative Commons License