
Rights retention built into Cambridge Self-Archiving Policy

We’re delighted to announce that the University of Cambridge has a new Self-Archiving Policy, which took effect on 1 April 2023. The policy gives researchers a route to make the accepted version of their papers open access without embargo under a licence of their choosing (subject to funder requirements). We believe that researchers should have more control over what happens to their own work, and we are determined to do what we can to help them achieve that.

This policy has been developed after a year-long rights retention pilot in which more than 400 researchers voluntarily participated. The pilot helped us understand the implications of this approach across a wide range of disciplines so we could make an informed decision. We are also not alone in introducing a policy like this – Harvard has been doing it since 2008, cOAlition S have been a catalyst for development of similar policies, and we owe a debt of gratitude to the University of Edinburgh for sharing their approach with us. 

Some of the issues that cropped up during the pilot were outlined by Samuel Moore, our Scholarly Communications Specialist, in an earlier post on the Unlocking Research blog. The patterns we saw at that stage continued throughout the year-long pilot – there was no issue for most articles, but some publishers caused confusion through misinformation or by presenting conflicting licences for researchers to sign. We do recognise that there are costs involved in high-quality publishing, and we are willing to cover reasonable costs (while noting our concerns around inequities in scholarly publishing). The fact is that some publishers are trying to charge the sector multiple times for the same content – subscription fees, OA fees, other admin fees – all while receiving free content courtesy of researchers who are usually funded by the taxpayer and charity funders.

Many researchers and funders are understandably becoming firmer in their convictions that publicly funded research should be openly and publicly available. We are fortunate that at Cambridge we are in a position to support this through our support for diamond publishing initiatives (in which the costs of publishing are absorbed for example by universities and no fees are charged to the reader or the author), through read and publish agreements negotiated on behalf of the UK higher education sector and through payment of costs associated with publishing in fully open access venues. Rights retention gives researchers a back-up plan for when other routes are not available to them, e.g. when a journal moves unexpectedly out of a read and publish agreement or a publisher does not offer any publishing route that meets their funder requirements. 

This is not the end goal: we have work to do to reach an equitable approach to global scholarly publishing, and we can learn a lot, especially from how South America approaches these issues. We welcome opportunities to work together with others around the world to create a more sustainable and equitable future for scholarly communications.

Read more about the new Cambridge Self-Archiving Policy on the Cambridge Open Access website.

Open Research in the Humanities: The Future of Scholarly Communication

Authors: Emma Gilby, Matthias Ammon, Rachel Leow and Sam Moore

This is the second of a series of blog posts, presenting the reflections of the Working Group on Open Research in the Humanities.  Read the opening post here. The working group aimed to reframe open research in a way that was more meaningful to humanities disciplines, and their work will inform the University of Cambridge approach to open research.  This post considers the future of scholarly communication from a humanities perspective. 

PILLAR ONE: THE FUTURE OF SCHOLARLY COMMUNICATION 

This first pillar deals with ‘open access’ narrowly understood: the future of the publication landscape, and the question of the sustainability and viability of different publication models in an open access world.  

Opportunities 

The open access initiative in general values a wide range of contributions to academic life. The arts and humanities thrive on long-term, multi-scale, conversational, collaborative, interdisciplinary projects; all cultural work can be so defined. Any move towards research diversity therefore works in the favour of the arts and humanities.  

Open Research aims first at opening out ‘traditional’ research content, such as that published in journals and monographs. Thus it aims also to demystify the existing publication process. In general, it prioritizes the wide dissemination of public-facing research. Further, it allows us to envisage new forms of publication, such as the use of dynamic images and data visualisation as already undertaken in investigative journalism.1 Other examples of new Open Access formats include semi-public peer-to-peer review and the opportunity for readers to highlight passages and contribute to a crowd-sourced index of terms.2

Support required 

In the short term, A&H colleagues require institutional support to understand and get to grips with the current routes to open access within academic publishing, which present various advantages and challenges. For more detail, see Plan S and the History Journal Landscape, a Royal Historical Society Guidance Paper: https://royalhistsoc.org/policy/publication-open-access/plan-s-and-history-journals/

Current routes to OA in scholarly publishing include:  

  1. Paying directly for article or book processing charges levied by publishers. This is easy if one’s research falls among the very small percentage of A&H research that is funded by the research councils, who allow for such fees, but otherwise challenging.
  2. Taking advantage of a ‘read and publish’ deal set up between a publisher and an institution. This is easy if one is at the right institution at the right time, but otherwise challenging. There is also confusion amongst colleagues about what happens when these time-limited, transitional deals expire: will publishers revert to simple processing charges (see above)? Or will all published material by then be fully OA (see below)?
  3. The self-deposit in an OA institutional repository of a manuscript that has been accepted for publication and peer reviewed but not edited or typeset by the publisher in any way. This is easy with the right systems in place, but problematic because it neglects the import of the editing process in A&H research. Without undergoing this process, ‘accepted manuscripts’ are very vulnerable to errors, especially in the case of the very many scholars who regularly work in languages that are not their first, or of early career scholars who are less familiar with critical processes and how to evidence them, or of colleagues with various kinds of disabilities such as dyslexia. Other issues also abound with the deposit of manuscripts in repositories. In cases where scholars receive an acceptance that is subject to improvement, the final ‘date of acceptance’ is ambiguous for legal purposes. And in cases where the work in question uses copyrighted material, further legal issues emerge about when and how it may be possible to circulate it. In all these senses, then, many A&H colleagues simply dislike the thought of their ‘accepted manuscript’ circulating. In the case of institutional repositories, there seems to be a direct and obvious tension between the goals of open research and quality control.
  4. Publishing with a fully OA journal or academic publisher that does not require a processing charge. This is obviously the most straightforward and therefore best route to OA, but it raises the fundamental question of how such work is conducted and funded. The notion of the ‘scholar-led’ press, established and monitored by scholars themselves, presupposes that academics can somehow fit the work of the professional editor, copy editor, translator or typesetter into their spare time. In addition, many OA journals rely on charitable donations, and fundraising is also a skilled business: will universities’ development directors and offices be diverted to do the work of seeking these charitable donations? Is it possible for existing publishing houses and presses to construct a sustainable business model that allows for free and open publishing, while overlaying their own professional services onto the scholarly work provided by academics? Can already successful enterprises such as Open Book Publishers in Cambridge3 be ‘scaled up’? The members of the working group have not seen any impact assessments or pilot studies considering which of the current forms of scholarly communication will simply die out in the absence of subscription and royalty income; we would like to see evidence-based impact assessments as a matter of priority. In general, it is unclear whether even the largest and most prestigious scholarly societies will survive the loss of income that would result from a move to OA. As one member of our group put it, ‘the research is not open if it is dead’.

Many questions remain, above and beyond those already evoked:  

  • The situation with respect to the goal of publishing of all academic monographs freely and openly remains extremely fluid, and all the enquiries we were able to make in the working group confirmed that this is an area of great uncertainty. Academic books require considerable up-front investment by publishers, and it is vital that this labour and expertise is properly supported in an open access model. How to ensure that open access books do not entail a race to the bottom in terms of editorial and production standards? 
  • Researchers and publishers will also have to think carefully about content such as book reviews, notices, short discussion pieces, author interviews and so on: content that is useful to the discipline, but peripheral to the article form and that would not generally appear in a repository, for example.   
  • The place of UK debates in the global publishing industry is unclear. Like all scholarly publishing, A&H publishing is international in nature and most journals and presses will draw from as wide an international field as possible. How will the editor of a UK-based journal, responding to the OA requirements of UK decision-making bodies, deal with international authors who are not subject to the same requirements or set of priorities? How will an international editor deal with UK academics?4 These questions come up repeatedly in conversations with colleagues.
  • Scholarly societies in the arts and humanities do not charge a fortune for their journals, and also offer conferences, communities and support (financial and otherwise) for early-career scholars. To analyse the costs and benefits of access to their publications, it will be necessary to look across cost centres within any given institution. To offer a worked example of library costs: ‘the bundled UK cost for 2020 of the RHS’s Transactions and its Camden book series is £205 (this is a maximum figure, excluding all discounts). In the financial year 1 July 2018-30 June 2019, RHS awarded (for example) £2,781.56 to support ECR researchers at York University and £3,177.16 to support ECR researchers at Oxford.’5 So it would be useful to see studies of the rate of institutional return on investment in publications by university libraries.
  • Concerns about licensing were already well documented and summarized by Peter Mandler in 2014: ‘For one thing, we do not have full ownership of our texts ourselves – we use others’ words and images, often by permission. For another, we have our own norms of how best to incorporate one work within another – e.g. by quotation – which derivative use denies. Most important is our moral right (long acknowledged in law and ethics) to protect the integrity of our work. By all means read and disseminate our work free of charge, but do not change it as you are doing so – write your own work.’6  
  • Concerns about distortions allowed by CC BY in the reuse of oral history interviews and other sensitive/polemical content are important for many A&H colleagues as they are for our colleagues in the social sciences. 
  • Evidence of predatory publishers simply reusing content from repositories is starting to emerge, seemingly justifying concerns about CC BY as opposed to CC BY-NC-ND or CC BY-ND.7

Footnotes

1See for instance a project on the takeover of real estate by the Church of Scientology in Clearwater, Florida: https://projects.tampabay.com/projects/2019/investigations/scientology-clearwater-real-estate, or a series of investigative articles on the post-9/11 burgeoning of the US intelligence services collected here: https://www.washingtonpost.com/people/william-m-arkin/

2Matthew Gold & Lauren Klein, eds. Debates in the Digital Humanities (2012), https://dhdebates.gc.cuny.edu

3 ‘We are a nonprofit independent publisher with no institutional backing. Open Book relies on sales and donations to continue publishing high-quality and free to read titles. We gratefully acknowledge the generous support of The Polonsky Foundation, the Thriplow Charitable Trust, the Jessica E. Smith and Kevin R. Brine Charitable Trust, The Progress Foundation and the Dutch Research Council (NWO).’ https://www.openbookpublishers.com

4 See the following testimony: ‘The bi-lingual, topic-specific journal I edit…draws articles from authors across the world and is published in Switzerland. Hence, specific OA requirements pertaining to UK-based authors will be considered in setting OA policy but will probably not be a determining factor. Hence, if strict requirements are introduced around OA in relation to UK funders, this may serve to reduce the possibility for UK-based authors to submit articles to my journal. This would obviously be an issue for the journal but would also be one for UK academics, as it would result in a more limited range of potential publication outlets.’ Margot Finn, Plan S and the History Journal Landscape, A Royal Historical Society Guidance Paper, pp. 47-8.

5 Plan S and the History Journal Landscape, A Royal Historical Society Guidance Paper, p. 69, n. 110. 

6 Peter Mandler, ‘Open Access: a Perspective from the Humanities’, Insights 27 (2), 2014, http://doi.org/10.1629/2048-7754.89 

7 Guy Lavender, Jane Secker and Chris Morrison, ‘What happens when you find your open access PhD thesis for sale on Amazon?’, 8th July 2021, https://blogs.lse.ac.uk/impactofsocialsciences/2021/07/08/what-happens-when-you-find-your-open-access-phd-thesis-for-sale-on-amazon/

‘Be nice to each other’ – the second Researcher to Reader conference

‘Aaaaaaaaaaargh!’ was Mark Carden’s summary of the second annual Researcher to Reader conference, along with a plea that the different players show respect to one another. My take-home messages were slightly different:

  • Publishers should embrace the values of researchers and librarians and become more open, collaborative, experimental and disinterested.
  • Academic leaders and institutions should do their bit in combating the metrics focus.
  • Big Deals don’t save libraries money; what helps libraries is the ability to cancel journals.
  • The argument that green OA leads to subscription cancellations is only viable in a utopian, almost fully green world.
  • There are serious issues in the supply chain of getting books to readers.
  • And copyright arrangements in academia do not help scholarship or protect authors*.

The programme for the conference included a mix of presentations, debates and workshops. The Twitter hashtag is #r2rconf.

As is inevitable in the current climate, particularly at a conference where there were quite a few Americans, the shadow of Trump was cast over the proceedings. There was much mention of the political upheaval and the place research and science has in this.

[*please see Kent Anderson’s comment at the bottom of this blog]

In the publishing corner

Time for publishers to rise to the challenge

The conference opened with an impassioned speech by Mark Allin, the President and CEO of John Wiley & Sons, who started with the statement this was “not a time for retreat, but a time for outreach and collaboration and to be bold”.

The talk was not what was expected from a large commercial publisher. Allin asked: “How can publishers act as advocates for truth and knowledge in the current political climate?” He mentioned that ProQuest has launched a displaced researchers programme in reaction to world events, saying, “it’s a start but we can play a bigger role”.

Allin asked what publishers can do to ensure research is being accessed. Referencing “The Content Trap” by Bharat Anand, Allin argued that the media industry will not survive by bottling up content and controlling its distribution; it will only succeed by connecting users. Publishers therefore need to re-engineer workflows to make them seamless and frictionless: “We should be making sure that … we are offering access to all those who want it.”

Allin raised the issue of access, noting that ResearchGate has more usage than any single publisher. He made the point that “customers don’t care if it is the version of record, and don’t care about our arcane copyright laws”. This is why people use SciHub: ease of access. Publishers should not give up protecting copyright, he said, but must realise its limitations and provide easy access.

Researchers are the centre of gravity – we need to help them spend more time researching and less time publishing, he said. There is a lesson here, he noted: suppliers should use “the divine discontent of the customer as their north star”. He used the example of Amazon to suggest that people working in scholarly communication need to use technology much better to connect up. “We need to experiment more, do more, fail more, be more interconnected,” he said; publishing needs open source and open standards, which are required for transformational impact on scholarly publishing – “the Uber equivalent”.

His suggestion for addressing the challenges of these sharing platforms is to “try and make your experience better than downloading from a pirate site”, and that this would be a better response than taking the legal route and issuing takedown notices.  He asked: “Should we give up? No, but we need to recognise there are limits. We need to do more to enable access.”

Allin summed up the situation: publishing may have gone online, but how much has the internet really changed scholarly communication practices? The page is still the unit of publishing, even in digital workflows. It shouldn’t be; we should have a ‘digital first’ workflow. The question isn’t ‘what should the workflow look like?’ but ‘why hasn’t it improved?’, he said, noting that innovation is always slowed by social norms, not technology. Publishers should embrace the values of researchers and librarians and become more open, collaborative, experimental and disinterested.

So what do publishers do?

Publishers “provide quality and stability”, according to Kent Anderson (no relation to Rick Anderson), speaking on the second day in his presentation about ‘how to cook up better results in communicating research’. Anderson is the CEO of RedLink, a company that provides publishers and libraries with analytics and usage information, and the founder of the blog The Scholarly Kitchen.

Anderson made the argument that “publishing is more than pushing a button”, by expanding on his blog on ‘96 things publishers do’. This talk differed from Allin’s because it focused on the contribution of publishers.

Anderson talked about the peer review process, noting that rejections help academics because usually they are about mismatch. He said that articles do better in the second journal they’re submitted to.

During a discussion about submission fees, Anderson noted that these “can cover the costs of peer review of rejected papers but authors hate them because they see peer review as free”. His comment that a $250 journal submission charge with one journal is justified by the fact that the target market (orthopaedic surgeons) ‘are rich’ received (rather unsurprisingly) some response from the audience via Twitter.

Anderson also made the accusation that open access publishers take lower-quality articles when money gets tight. This caused something of a backlash in the Twitter discussion, with requests for a citation for this statement and for examples of publishers (other than scam publishers) lowering standards to bring in more APC income. [ADDENDUM: Kent Anderson says below that this was not an ‘accusation’ but an ‘observation’. The Twitter challenge for ‘citation please?’ holds.]

There were a couple of good points made by Anderson. He argued that one of the value adds that publishers do is training editors. This is supported by a small survey we undertook with the research community at Cambridge last year which revealed that 30% of the editors who responded felt they needed more training.

The library corner

The green threat

There is good reason to expect that green OA will make people and libraries cancel their subscriptions – at least in the utopian future described by Rick Anderson (no relation to Kent Anderson), Associate Dean at the University of Utah, in his talk “The Forbidden Forecast: Thinking about open access and library subscriptions”.

Anderson started by asking why, if we’re in a library funding crisis, we aren’t seeing sustained levels of unsubscription. He then explained that Big Deals don’t save libraries money. They lower the cost per article, but this is a value measure, not a cost measure. What the Big Deal did was make cancellations more difficult: most libraries have cancelled every journal they can without Faculty ‘burning down the library’, in order to preserve the Big Deal. This explains the persistence of subscriptions over time. The library is forced to redirect money away from other resources (books) and into the serials budget, and the reason libraries can get away with this is that books are not used much.
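A hypothetical worked example makes the value-versus-cost distinction concrete. The figures below are entirely illustrative, not from Anderson's talk:

```python
# Illustrative, made-up figures: a Big Deal lowers the cost *per article*
# (a value measure) even while the library's total outlay (the cost measure) rises.

# Before the Big Deal: 300 individually subscribed journals.
individual_spend = 300 * 1_200          # £1,200 average per title = £360,000
individual_downloads = 150_000

# After the Big Deal: access to 2,000 titles for a single bundled fee.
big_deal_spend = 450_000
big_deal_downloads = 600_000

cost_per_article_before = individual_spend / individual_downloads  # £2.40
cost_per_article_after = big_deal_spend / big_deal_downloads       # £0.75

# Cost per article falls, so the deal looks like good "value" ...
assert cost_per_article_after < cost_per_article_before
# ... but the library is actually spending more than before.
assert big_deal_spend > individual_spend
```

The per-article figure improves even as the budget line worsens, which is why cost per article alone cannot tell a library whether a Big Deal saves it money.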

The wolf seems to be well and truly upon us. There have been lots of cancellations and reduction of library budgets in the USA (a claim supported by a long list of examples). The number of cancellations grows as the money being siphoned off book budgets runs out.

Anderson noted that the emergence of new gold OA journals doesn’t help libraries; it does nothing to relieve the journal emergency. They just add to the list of costs because each is a unique set of content. What does help libraries is the ability to cancel journals. Professor Syun Tutiya, Librarian Emeritus at Chiba University, noted in a separate session that if Japan were to flip from a fully subscription model to APCs it would cost about the same, so that would solve the problem.

Anderson said that there is an argument that “there is no evidence that green OA cancels journals” (I should note that I am well and truly in this camp, see my argument). Anderson’s counter is that this simply says the future hasn’t happened yet: the implicit claim is that because green OA has not caused cancellations so far, it won’t in the future.

Library money is taxpayers’ money – it is not always going to flow. There is much greater scrutiny of journal big deals as budgets shrink.

Anderson argued that green open access provides inconsistent and delayed access to copies which aren’t always the version of record, and that this has protected subscriptions. He noted that green OA is dependent on subscription journals, which is “ironic given that it also undermines them”. You can’t make something completely and freely available without undermining the commercial model for that thing, Anderson argued.

So, Anderson said, given that green OA exists and has for years without affecting subscriptions, what would need to happen for this to occur? He described two subscription scenarios. In the low-cancellation scenario (the current situation), green open access is provided sporadically and unreliably; access is delayed by a year or so, and the versions available for free are somewhat inferior.

The high-cancellation scenario is one of high uptake of green OA, driven by funder requirements, with the available version close to the final one. Anderson argued that “OA advocates” prefer this scenario but “have not thought through the process”. If the cost of finding which articles have OA versions is low enough and the free versions are good enough, he said, subscriptions will be cancelled. The black-and-white version of Anderson’s future is: “If green OA works then subscriptions fail, and the reverse is true”.

Not surprisingly, I disagreed with Anderson’s argument on several points. To start, a certain percentage of the work would need to be available before a subscription could be cancelled. Professor Syun Tutiya, Librarian Emeritus at Chiba University, noted in a different discussion that in Japan only 6.9% of material is available green OA in repositories, and argued that institutional repositories are good for lots of things but not OA. Certainly in the UK, with the strongest open access policies in the world, we are not capturing anything like the full output. And the UK itself produces only 6% of the world’s research output, so we are certainly a very long way from this scenario.

In addition, according to work undertaken by Michael Jubb in 2015, most green open access material is available in places other than institutional repositories, such as ResearchGate and SciHub. Do librarians really feel comfortable cancelling subscriptions on the basis of something being available on a proprietary platform or an illegal site?

The researcher perspective

Stephen Curry, Professor of Structural Biology, Imperial College London, spoke about “Zen and the Art of Research Assessment”. He started by asking why people become researchers and gave several reasons: to understand the world, change the world, earn a living and be remembered. He then asked how they do it. The answer is to publish in high impact journals and bring in grant money. But this means it is easy to lose sight of the original motivations, which are easier to achieve if we are in an open world.

In discussing “The Metric Tide”, the 2015 report which looked into the assessment of research, Curry noted that metrics and league tables aren’t without value. They do help to rank football teams, for example. But university league tables are less useful: they aggregate many things and so are too crude, even though they incorporate valuable information.

Are we as smart as we think we are, he asked, if we subject ourselves to such crude metrics of achievement? The limitations of research metrics have been talked about a lot, but they need to be better known. Often they are too precise: was Caltech really better than the University of Oxford last year but worse this year?

But numbers can be seductive. Researchers want to focus on research without pressure from metrics, yet many early career researchers and PhD students are increasingly fretting about the publications hierarchy. Curry asked: “On your death bed, will you be worrying about your H-index?”

There is a greater pressure to publish rather than pressure to do good science. We should all take responsibility to change this culture. Assessing research based on outputs is creating perverse incentives. It’s the content of each paper that matters, not the name of the journal.

In terms of solutions, Curry suggested it would be better to place higher education institutions in 5% brackets rather than ranking them 1 to n in the league tables. He called for academic leaders and institutions to do their bit in combating the metrics focus, and for much wider adoption of the Declaration on Research Assessment (DORA). Curry’s own institution, Imperial College London, has done so recently.
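A minimal sketch of how the bracketing suggestion could work in practice. The function and the 5% width are my own illustration of the idea, not something specified in the talk:

```python
import math

def bracket(rank: int, total: int, width: float = 0.05) -> int:
    """Return the percentile bracket (1 = top 5%) for a 1-indexed league-table rank."""
    return math.ceil(rank / (total * width))

# With 100 institutions, ranks 1-5 all land in bracket 1: the spurious
# precision of "1st vs 3rd" disappears, while large differences stay visible.
total = 100
assert bracket(1, total) == bracket(3, total) == 1
assert bracket(6, total) == 2
assert bracket(100, total) == 20
```

The point of the bucketing is that small year-to-year movements within a bracket no longer register as meaningful changes in standing.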

Curry argued that ‘indicators’ would be a more appropriate term than ‘metrics’ in research assessment, because we’re looking at proxies. The term ‘metrics’ implies you know what you are measuring. Certainly metrics can inform, but they cannot replace judgement. Users and providers must be transparent.

Another solution is preprints, which shift attention from container to content because readers use the abstract rather than the journal name to decide which papers to read. Note that this idea is starting to become more mainstream, with the NIH’s announcement towards the end of last year, “Including Preprints and Interim Research Products in NIH Applications and Reports”.

Copyright discussion

I sat on a panel to discuss copyright with a funder – Mark Thorley, Head of Science Information, Natural Environment Research Council – a lawyer – Alexander Ross, Partner, Wiggin LLP – and a publisher – Dr Robert Harington, Associate Executive Director, American Mathematical Society.

My argument** was that selling or giving the copyright to a third party that has a purely commercial interest and did not contribute to the creation of the work does not protect originators. That was the case in the Kookaburra song example. It is also the case in academic publishing. The copyright transfer form or publisher agreement that authors sign usually means that the authors retain their moral rights to be named as the authors of the work, but sign away the rights to make any money from it.

I argued that publishers don’t need to hold the copyright to ensure commercial viability. They just need first exclusive publishing rights. We really need to sit down and look at how copyright is being used in the academic sphere – who does it protect? Not the originators of the work.

Judging by the mood in the room, the debate could have gone on for considerably longer. There is still a lot of meat on that bone. (**See the end of this blog for details of my argument).

The intermediary corner

The problem of getting books to readers

There are serious issues in the supply chain of getting books to readers, according to Dr Michael Jubb, Independent Consultant and Richard Fisher from Something Understood Scholarly Communication.

The problems are multi-pronged. For a start, discoverability of books is “disastrous” due to completely different metadata standards in the supply chain: ONIX is used for the retail trade and MARC is the standard for libraries. Neither carries detailed information about authors, the contents of chapters and sections, or reviews and comments.

There are also a multitude of channels for getting books to libraries. There has been involvement in the past few years of several different kinds of intermediaries – metadata suppliers, sales agents, wholesalers, aggregators, distributors etc – who are holding digital versions of books that can be supplied through the different type of book platforms. Libraries have some titles on multiple platforms but others only available on one platform.

There are also huge challenges around discoverability and e-commerce systems, which are “too bitty”. The most important change that has happened in books has been Amazon; however, publisher e-commerce “has a long way to go before it is anything like as good as Amazon”.

Fisher also reminded the group that there are far more books published each year than there are journals – it’s a more complex world. He noted that about 215 [NOTE: amended from original 250 in response to Richard Fisher’s comment below] different imprints were used by British historians in the last REF. Many of these publishers are very small with very small margins.

Jubb and Fisher both emphasised readers’ strong preference for print, which implies that much more work is needed on the ebook user experience. There are ‘huge tensions’ between reader preference (print) and the drive for e-book acquisition models at libraries.

The situation is probably best summed up in the statement that “no-one in the industry has a good handle on what works best”.

Providing efficient access management

Current access control is not functional in the world we live in today. If you ask users to jump through hoops to get access off campus, your whole system defeats its own purpose. That was the central argument of Tasha Mellins-Cohen, Director of Product Development at HighWire Press, when she spoke about the need to improve access control.

Mellins-Cohen started with the comment “You have one identity but lots of identifiers”, and noted that having multiple institutional affiliations causes problems. She described the authentication process needed to give access to an article via a library – which, as an aside, clearly shows why researchers often prefer to use Sci-Hub.

She described an initiative called CASA (Campus Activated Subscriber Access), which records devices that have accessed content on campus through authenticated IP ranges and then allows the same device access off campus without using a proxy. This is designed to use more modern authentication. There will be “more information coming out about CASA in the next few months”.

Mellins-Cohen noted that tagging something as ‘free’ in the metadata improves Google indexing – publishers need to do more of this at article level. This prompted a call for publishers to make information about sharing more accessible to authors through How Can I Share It?

Mellins-Cohen expressed some concern that some of the ideas coming out of RA21 (Resource Access for the 21st Century), an STM project exploring alternatives to IP authentication, will raise barriers to access for researchers.

Summary

It is always interesting to have the mix of publishers, intermediaries, librarians and others in the scholarly communication supply chain together at a conference such as this. Conversations between stakeholders across that divide are rare. In his summary of the event, Mark Carden noted the tension in the scholarly communication world, saying that we do need a lively debate but also need to show respect for one another.

So while the keynote started promisingly, and said all the things we would like to hear from the publishing industry, the reality is that we are not there yet. And this underlines the whole problem. This interweb thingy didn’t happen last week. What has actually happened to update the publishing industry in the last 20 years? Very little, it seems. However, it is not all bad news. Things to watch out for in the near future include plans for micro-payments for individual access to articles, according to Mark Allin, and the highly promising Campus Activated Subscriber Access system.

Danny Kingsley attended the Researcher to Reader conference thanks to the support of the Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.

Published 27 February 2017
Written by Dr Danny Kingsley

Copyright case study

In my presentation, I spoke about the children’s campfire song “Kookaburra sits in the old gum tree”, which was written by Melbourne schoolteacher Marion Sinclair in 1932 and first aired in public two years later as part of a Girl Guides jamboree in Frankston. Sinclair had to be prompted to register the song with APRA (Australasian Performing Right Association). That was in 1975; the song had already been around for 40 years, but she had never expressed any great interest in ownership of it.

In 1981 the Men at Work song “Down Under” made No. 1 in Australia. The song then topped the UK, Canadian, Irish, Danish and New Zealand charts in 1982 and hit No. 1 in the US in January 1983, selling two million copies in the US alone. When Australia won the America’s Cup in 1983, “Down Under” was played constantly. It seems extremely unlikely that Marion Sinclair did not hear the song. (At the conference, three people self-identified as never having heard it when a sample was played.)

Marion Sinclair died in 1988 and the song passed to her estate. Norman Lurie, managing director of Larrikin Music Publishing, bought the publishing rights from the estate in 1990 for just $6,100. He started tracking down all the sheet music that had been printed around the world, because Kookaburra had been used in books for people learning the flute and recorder.

In 2007, the TV show Spicks and Specks had a children’s music themed episode in which the panel were played “Down Under” and asked which Australian nursery rhyme the flute riff was based on. They eventually picked Kookaburra, and all appeared genuinely surprised when the link between the songs was pointed out.

Two years later Larrikin Music filed a lawsuit, initially seeking 60% of the profits from “Down Under”. The court ruled against Men at Work in February 2010; they appealed, and eventually lost. The judge ordered Men at Work’s recording company, EMI Songs Australia, and songwriters Colin Hay and Ron Strykert to pay 5% of royalties earned from the song since 2002 and from its future earnings.

In the end, Larrikin won around $100,000, although legal fees on both sides were estimated at upwards of $4.5 million, with royalties for the song frozen during the case.

Gregory Ham, the band’s flautist, played the riff. He did not write “Down Under”, and was devastated by the high-profile court case and his role in proceedings. He reportedly fell back into alcohol abuse and was quoted as saying: “I’m terribly disappointed that’s the way I’m going to be remembered — for copying something.” Ham died of a heart attack in April 2012 at his Carlton North home, aged 58, with friends saying the lawsuit had been haunting him.

This case, I argued, exemplifies everything that is wrong with copyright.