
Chasing cash cows in a swamp? Perspectives on Plan S from Australia and the USA

Plan S was born in Europe, yet from the very start it aspired to accelerate conversations around open access on a global scale. After all, if free access to research outputs is good in one place, it will be good everywhere, right? Well, it turns out that things may not be that simple.

In this Open Access Week, we look East and West to find out how Plan S is being received across the globe. Dr Danny Kingsley explores how reliance on foreign students has trapped Australian universities in a ‘Faustian bargain’ with publishers and reduced the scope for change. Micah Vandegrift reports on the type of conversations that Plan S has inspired in the USA, as well as the potential political barriers, sounding a note of cautious optimism.

The uptake of Plan S or equivalent principles in countries beyond Europe is crucial to the overall success of the movement. Publishers are using the fact that uptake currently has limited geographic scope to stall change, arguing that they cannot alter their model to suit the requirements of a relatively small percentage of authors. The number of supporting funders is still small and concentrated in Europe, with a few US players. China initially looked set to join in and thus change the game, but since the end of 2018 we have seen little progress on that front. Has Plan S been successful in shaping conversations around the world?

Hearing from our colleagues in other countries highlights some of the promises and challenges Plan S is facing in making an impact outside Europe. Learning about those raises a number of interesting points for how we advocate for open access at home too.

Dr Danny Kingsley: Australia

Sydney Opera House. ‘Plan S has not really caused much of a ripple Down Under.’

Rankings are a natural enemy of openness

When first approached by the Office of Scholarly Communication to write a piece about Plan S in Australia, my initial response was it would be very short. That is, Plan S has not really caused much of a ripple Down Under. Those in the know – people working in scholarly communication and some senior members of research institutions – are aware and watching closely. But as far as opening up a general discussion amongst the academic community, this simply hasn’t happened.

Over the past six months I have been trying to understand where some of the problems lie when it comes to openness in Australia. It is more fundamental than the usual concerns researchers have about Open Access, and goes to the heart of how universities work here.

Where the money flows

First, a quick run-down on how research funding to universities works in Australia. There are only two government funders – the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC). In 2017-2018 they granted about $943 million and $758 million respectively across all research organisations. As a comparison, the Wellcome Trust awarded grants in the range of £10m – £50m in Australia in 2017-18. For those interested, there is a full breakdown of sources of research funding.

The funder policies on Open Access and Research Data Management are fairly weak overall. The NHMRC policy requires that any peer-reviewed publication be made available in a repository within 12 months of publication, and “strongly encourages researchers to consider the reuse value of their data and to take reasonable steps to share research data and associated metadata arising from NHMRC supported research”. The ARC policy requires the metadata of research outputs to be available in a repository within 3 months of publication and the work to be OA within 12 months of publication. But the policy specifically states: “For the purposes of this policy, Research Outputs do not include research data and research data outputs.”

Resourcing limitations mean these policies are not monitored, and there are no sanctions for non-compliance. This renders them basically ineffective, given the findings of a study last year that identified what policies need in order to ensure compliance.

But these policies simply reflect a general lack of policy in Australia, partly due to the revolving door that the Prime Ministership has been over the past five years. So, on face value, the lack of engagement with discussions around Plan S simply reflects this lassitude.

But I am wondering if there might be something deeper at play here.

Cash cow

Australian universities are heavily reliant financially on overseas students, with international student numbers several multiples greater than at comparable universities worldwide. Overseas student numbers have doubled since 2008, with 398,563 students enrolled in 2018. At the University of Sydney, for instance, fees from Chinese students made up one fifth of annual revenue, at $500 million in 2017. Taken across the country, these figures significantly outweigh public research funding.

While this dependence has been labelled as highly risky from a financial perspective, it is also causing serious issues elsewhere in the sector including concerns about eroding educational standards. But it is also causing a perversion in the way research is managed.

The role of the ranking

University rankings are extremely important in the recruitment of overseas students. The vast majority of Australian university websites list some interpretation of their rankings. Monash University and the University of Western Australia both note they are in the “top 100 universities in the world”. Other universities are more specific, naming their place, like UNSW at 43rd in the world and University of Queensland listing no fewer than five rankings, trumped by Queensland University of Technology with six rankings listed.

Chasing rankings comes at a price. In some instances, increasing a University’s position in the rankings is a specific strategy, with the University of Canberra a recent success story.

There is incredible pressure on researchers in Australia to perform. This can take the form of reward, with many universities offering financial incentives for publication in ‘top’ journals. This is fairly widespread, with some universities having this position on the public record. For example, Griffith University’s Research and Innovation Plan 2017-2020 includes: “Maintain a Nature and Science publication incentive scheme”. Publication in these two journals comprises 20% of the score in the Academic Ranking of World Universities.

Other institutions take a more draconian position. Murdoch University’s proposed ‘academic career framework’ identifies specific numbers of articles researchers are expected to publish in top journals per year. Not surprisingly, this approach has been heavily criticised for its “extremely narrow view of academic career success”.

Australia’s Chief Scientist has recently been arguing for a different way of assessing our researchers, concerned that the current system is fuelling bad science. With the exception of some groundswell activity, this is as close as anyone gets to using the ‘reproducibility’ word here in Australia, possibly owing to nervousness in the sector after government interference in the allocation of research grants in 2018. There is certainly nothing comparable to the debate in the UK or the US on this issue.

The Open Access challenge

But what has all of this to do with Open Access or Plan S? Well, everything actually.

For a start, signing up to the Declaration on Research Assessment (DORA) or the Leiden Manifesto is one of the principles of Plan S, with the Wellcome Trust stating that it will not fund research at institutions that have not signed up. Only a handful of Australian research organisations have signed DORA, none of which are universities. Given that many Australian institutions are not only judging researchers on their publication record but in some cases prescribing the journals in which they are allowed to publish, it would be extremely difficult for these institutions to become signatories to DORA or the Leiden Manifesto.

But the main problem for the open agenda is the total reliance on specific metrics that deliver ranking numbers – metrics which enfold Australian universities into a Faustian bargain with the large commercial publishers.

Australian universities are not engaging with Plan S because they cannot afford to. And while the Australian funders remain silent on the topic (literally – a search for Plan S on each website comes up empty), there is little incentive to worry about it.

If anything, this situation further underlines the need to shift the academic reward system away from the single measure of publication of novel results in high impact journals.  Given how deeply ingrained that measure is in Australia it will be interesting to see where we are at this time next year.

Micah Vandegrift: USA

A meandering river in the USA. Plan S has sparked conversations in the USA, but progress is slow.

A shot heard around the world

A little more than a year ago, open access had its “shot heard around the world” moment. Plan S expanded out from Europe, encompassing angst and excitement, prompting think-pieces from thought leaders, policy briefs from the wonks, and general malaise from lots of stakeholders. The European open agenda is, by design or by accident, shaping the horizon, and Plan S continues to be a marker of that progression. I had the unique opportunity to be on the ground in Europe for most of the fallout last fall, and now, with the benefit of time and geographic remove, I am observing the aftereffects, especially in how U.S.-based research communities are responding in kind.

Ripples and tides

The greatest surprise is that Plan S seems to be the thing that is getting people from all corners out to debate the issues. The tidal wave of Plan S seems to have crashed on our shores with something for everyone – publishers, libraries, researchers, and funders. Librarianship tends to pivot around shifts in the publishing landscape, finding crevices to leverage our expertise and chances to show off that knowledge to researchers, and I expected Plan S to offer that as well. The weird thing, though, is that the responses have been uneven, distributed, and displaced. For example, I was invited, along with Rick Anderson of Scholarly Kitchen fame, to debate the Plan in front of 200+ managing and technical editors as the plenary session at their conference. On the flipside, Dr. Kelvin Droegemeier, announced as Director of the White House Office of Science and Technology Policy in January 2019 (after a vacancy since something happened in November 2016), flippantly addressed Plan S in an interview, saying simply: “we won’t ever tell people where to publish.” Bizarrely, a research policy affecting labs and scholars from Norway to Portugal has given me a chance to meet and chat with publisher colleagues more than ever before, yet has not opened any new doors for communicating the finer points of licensing with faculty on my campus.

A slow-flowing river

Following the current into the near future, I believe that three tributaries will come together. Funders will continue to exert their influence, supplanting publishers as drivers of the conversation; disciplines will adopt discipline-specific means of scholarly sharing (see the rise of pre-prints [PDF]); and policy makers will attempt to legislate cautious action toward a global research marketplace. However… in the U.S. context there are two barriers that could dam the flow. First, uncertainty in our political climate and an America-first foreign policy agenda are boiling up concern about “undue foreign influence,” and I fear that isolationism will compel a counter-narrative to the open and public sharing of research worldwide. Secondly, America is a god-damn huge country, and developing a coherent national framework for openness seems a fool’s errand. However, what sometimes appears to be a bog can actually be a river barely inching along. If Plan S was a splash, Plan Open U.S. will be a steady drip, creating geologic formations of systemic change toward a more open research ecosystem.

Conclusions

We read Danny and Micah’s contributions with great interest. They raised several questions about Plan S, which we hope to discuss with Micah after today’s talk.

  1. What can we do to increase engagement of our local academic communities with the open access agenda?
  2. Is it possible to uncouple decisions about research practice from financial or political/ideological considerations?
  3. How can government funders find a balance between dictating open research mandates and respecting the academic freedom of researchers?
  4. Can institutions measure research accurately without creating perverse incentives?
  5. Is there any country in the world where the mention of politicians does not trigger an immediate eye-roll?

Published 24 October 2019

Written by Dr Danny Kingsley (Scholarly Communication Consultant) and Micah Vandegrift (Open Knowledge Librarian at NC State University Libraries).

Compiled by Dr Beatrice Gini

The content of this blog is licensed under CC BY 4.0.

‘Be nice to each other’ – the second Researcher to Reader conference

Aaaaaaaaaaargh! was Mark Carden’s summary of the second annual Researcher to Reader conference, along with a plea that the different players show respect to one another. My take home messages were slightly different:

  • Publishers should embrace values of researchers & librarians and become more open, collaborative, experimental and disinterested.
  • Academic leaders and institutions should do their bit in combating the metrics focus.
  • Big Deals don’t save libraries money; what helps libraries is the ability to cancel journals.
  • The ‘green OA = subscription cancellations’ scenario is only viable in a utopian, almost fully green world.
  • There are serious issues in the supply chain of getting books to readers.
  • And copyright arrangements in academia do not help scholarship or protect authors*.

The programme for the conference included a mix of presentations, debates and workshops. The Twitter hashtag is #r2rconf.

As is inevitable in the current climate, particularly at a conference where there were quite a few Americans, the shadow of Trump was cast over the proceedings. There was much mention of the political upheaval and the place research and science has in this.

[*please see Kent Anderson’s comment at the bottom of this blog]

In the publishing corner

Time for publishers to rise to the challenge

The conference opened with an impassioned speech by Mark Allin, the President and CEO of John Wiley & Sons, who started with the statement this was “not a time for retreat, but a time for outreach and collaboration and to be bold”.

The talk was not what you would expect from a large commercial publisher. Allin asked: “How can publishers act as advocates for truth and knowledge in the current political climate?” He mentioned that ProQuest has launched a displaced researchers programme in reaction to world events, saying, “it’s a start but we can play a bigger role”.

Allin asked what publishers can do to ensure research is being accessed. Referencing “The Content Trap” by Bharat Anand, Allin said: “We won’t survive as a media industry by taking content, putting it in a bottle and controlling its distribution. We will only succeed if we connect the users.” So publishers need to re-engineer workflows, making them seamless and frictionless: “We should be making sure that … we are offering access to all those who want it.”

Allin raised the issue of access, noting that ResearchGate has more usage than any single publisher. He made the point that “customers don’t care if it is the version of record, and don’t care about our arcane copyright laws”. This, he said, is why people use SciHub: ease of access. Publishers should not give up protecting copyright, but must realise its limitations and provide easy access.

Researchers are the centre of gravity – we need to help them spend more time researching and less time publishing, he said. There is a lesson here, he noted: suppliers should use “the divine discontent of the customer as their north star”. He used the example of Amazon to suggest that people working in scholarly communication need to use technology much better to connect up. “We need to experiment more, do more, fail more, be more interconnected,” he said, adding that “publishing needs open source and open standards”, which are required for transformational impact on scholarly publishing – “the Uber equivalent”.

His suggestion for addressing the challenges of these sharing platforms is to “try and make your experience better than downloading from a pirate site”, and that this would be a better response than taking the legal route and issuing takedown notices.  He asked: “Should we give up? No, but we need to recognise there are limits. We need to do more to enable access.”

Allin summed up the situation by observing that publishing may have gone online, but how much has the internet really changed scholarly communication practices? The page is still a unit of publishing, even in digital workflows. It shouldn’t be; we should have a ‘digital first’ workflow. The question isn’t ‘what should the workflow look like?’ but ‘why hasn’t it improved?’, he said, noting that innovation is always slowed by social norms, not technology. Publishers should embrace the values of researchers and librarians and become more open, collaborative, experimental and disinterested.

So what do publishers do?

Publishers “provide quality and stability”, according to Kent Anderson (no relation to Rick Anderson), speaking on the second day in his presentation about ‘how to cook up better results in communicating research’. Anderson is the CEO of Redlink, a company that provides publishers and libraries with analytics and usage information, and the founder of the blog The Scholarly Kitchen.

Anderson made the argument that “publishing is more than pushing a button”, by expanding on his blog on ‘96 things publishers do’. This talk differed from Allin’s because it focused on the contribution of publishers.

Anderson talked about the peer review process, noting that rejections help academics because usually they are about mismatch. He said that articles do better in the second journal they’re submitted to.

During a discussion about submission fees, Anderson noted that these “can cover the costs of peer review of rejected papers but authors hate them because they see peer review as free”. His comment that a $250 journal submission charge with one journal is justified by the fact that the target market (orthopaedic surgeons) ‘are rich’ received (rather unsurprisingly) some response from the audience via Twitter.

Anderson also made the accusation that open access publishers take lower quality articles when money gets tight. This caused something of a backlash in the Twitter discussion, with requests for a citation for this statement and for examples of publishers – scam publishers excepted – lowering standards to bring in more APC income. [ADDENDUM: Kent Anderson below says that this was not an ‘accusation’ but an ‘observation’. The Twitter challenge for ‘citation please?’ holds.]

Anderson made a couple of good points. He argued that one of the value-adds publishers provide is training editors. This is supported by a small survey we undertook with the research community at Cambridge last year, which revealed that 30% of the editors who responded felt they needed more training.

The library corner

The green threat

There is good reason to expect that green OA will make people and libraries cancel their subscriptions – at least in the utopian future described by Rick Anderson (no relation to Kent Anderson), Associate Dean at the University of Utah, in his talk “The Forbidden Forecast: Thinking about open access and library subscriptions”.

Anderson started by asking why, if we’re in a library funding crisis, we aren’t seeing sustained levels of unsubscription. He then explained that Big Deals don’t save libraries money. They lower the cost per article, but this is a value measure, not a cost measure. What the Big Deal did was make cancellations more difficult: most libraries have cancelled every journal they can without Faculty ‘burning down the library’, in order to preserve the Big Deal. This explains the persistence of subscriptions over time. The library is forced to redirect money away from other resources (books) and into the serials budget, and libraries can get away with this because books are not heavily used.
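The cost-per-article point bears a quick illustration. The numbers below are entirely hypothetical, but they show how a bundle can push cost per article down sharply while the total bill still rises – which is exactly why cost per article is a value measure, not a cost measure:

```python
# Hypothetical illustration: a Big Deal can lower cost *per article*
# while the library's total subscription bill still grows.

def cost_per_article(total_cost: float, articles: int) -> float:
    """Cost per article: a value measure, not a cost measure."""
    return total_cost / articles

# Year 1: a la carte subscriptions to a few hundred journals.
year1_cost, year1_articles = 1_000_000, 40_000
# Year 2: a Big Deal bundles far more journals at a higher total price.
year2_cost, year2_articles = 1_200_000, 200_000

cpa1 = cost_per_article(year1_cost, year1_articles)  # 25.0 per article
cpa2 = cost_per_article(year2_cost, year2_articles)  # 6.0 per article

assert cpa2 < cpa1               # cost per article fell sharply...
assert year2_cost > year1_cost   # ...yet actual spending rose by 20%
```

The library ‘saves’ nothing here; it just receives more content per dollar, which is precisely the value framing that makes cancellation look irrational even as budgets shrink.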

The wolf seems to be well and truly upon us. There have been lots of cancellations and reduction of library budgets in the USA (a claim supported by a long list of examples). The number of cancellations grows as the money being siphoned off book budgets runs out.

Anderson noted that the emergence of new gold OA journals doesn’t help libraries; it does nothing to relieve the journal emergency. They just add to the list of costs, because each is a unique set of content. What does help libraries is the ability to cancel journals. Professor Syun Tutiya, Librarian Emeritus at Chiba University, noted in a separate session that if Japan were to flip from a fully subscription model to APCs it would cost about the same, which would solve the problem.

Anderson said that there is an argument that “there is no evidence that green OA cancels journals” (I should note that I am well and truly in this camp; see my argument). Anderson’s response was that this amounts to saying the future hasn’t happened yet: the implicit argument is that because green OA has not caused cancellations so far, it won’t do so in the future.

Library money is taxpayers’ money – it is not always going to flow. There is much greater scrutiny of journal big deals as budgets shrink.

Anderson argued that green open access provides inconsistent and delayed access to copies which aren’t always the version of record, and this has protected subscriptions. He noted that Green OA is dependent on subscription journals, which is “ironic given that it also undermines them”. You can’t make something completely & freely available without undermining the commercial model for that thing, Anderson argued.

So, Anderson said, given that green OA exists, has existed for years, and has not had any impact on subscriptions, what would need to happen for cancellations to occur? He described two subscription scenarios. In the low cancellation scenario (the current situation), green open access is provided sporadically and unreliably; access is delayed by a year or so, and the versions available for free are somewhat inferior.

In the high cancellation scenario, there is high uptake of green OA because of funder requirements, and the available version is close to the final one. Anderson argued that the “OA advocates” prefer this scenario but “have not thought through the process”. If the cost of finding which journals have OA versions is low enough and the free versions are good enough, he said, subscriptions will be cancelled. The black and white version of Anderson’s future is: “If green OA works then subscriptions fail, and the reverse is true”.

Not surprisingly, I disagreed with Anderson’s argument, on several grounds. To start, a certain percentage of the work would need to be available before a subscription could be cancelled. Professor Syun Tutiya noted in a different discussion that in Japan only 6.9% of material is available Green OA in repositories, and argued that institutional repositories are good for lots of things but not OA. Certainly in the UK, with the strongest open access policies in the world, we are not capturing anything like the full output. And the UK itself produces only 6% of the world’s research output, so we are a very long way from this scenario.

In addition, according to work undertaken by Michael Jubb in 2015, most green Open Access material is available in places other than institutional repositories, such as ResearchGate and SciHub. Do librarians really feel comfortable cancelling subscriptions on the basis of something being available on a proprietary or illegal platform?

The researcher perspective

Stephen Curry, Professor of Structural Biology, Imperial College London, spoke about “Zen and the Art of Research Assessment”. He started by asking why people become researchers and gave several reasons: to understand the world, change the world, earn a living and be remembered. He then asked how they do it. The answer is to publish in high impact journals and bring in grant money. But this means it is easy to lose sight of the original motivations, which are easier to achieve if we are in an open world.

In discussing “The Metric Tide”, the report published in 2015 which looked into the assessment of research, Curry noted that metrics and league tables aren’t without value. They do help to rank football teams, for example. But university league tables are less useful: they aggregate many things and so are too crude, even though they incorporate valuable information.

Are we as smart as we think we are, he asked, if we subject ourselves to such crude metrics of achievement? The limitations of research metrics have been talked about a lot, but they need to be better known. Often they are too precise. For example, was Caltech really better than the University of Oxford last year but worse this year?

But numbers can be seductive. Researchers want to focus on research without pressure from metrics, however many Early Career Researchers and PhD students are increasingly fretting about publications hierarchy. Curry asked “On your death bed will you be worrying about your H-Index?”

There is a greater pressure to publish rather than pressure to do good science. We should all take responsibility to change this culture. Assessing research based on outputs is creating perverse incentives. It’s the content of each paper that matters, not the name of the journal.

In terms of solutions, Curry suggested it would be better to put higher education institutions in 5% brackets rather than ranking them 1–n in the league tables. He called for academic leaders and institutions to do their bit in combating the metrics focus, and for much wider adoption of the Declaration on Research Assessment (DORA). Curry’s own institution, Imperial College London, has signed it recently.

Curry argued that ‘indicators’ would be a more appropriate term than ‘metrics’ in research assessment, because we are looking at proxies; the term ‘metrics’ implies you know what you are measuring. Metrics can certainly inform, but they cannot replace judgement, and users and providers must be transparent.

Another solution is preprints, which shift attention from container to content because readers use the abstract, not the journal name, to decide which papers to read. Note that this idea is starting to become more mainstream, with the NIH announcing towards the end of last year “Including Preprints and Interim Research Products in NIH Applications and Reports”.

Copyright discussion

I sat on a panel to discuss copyright with a funder – Mark Thorley, Head of Science Information, Natural Environment Research Council; a lawyer – Alexander Ross, Partner, Wiggin LLP; and a publisher – Dr Robert Harington, Associate Executive Director, American Mathematical Society.

My argument** was that selling or giving copyright to a third party that has a purely commercial interest and did not contribute to the creation of the work does not protect originators. That was the case in the Kookaburra song example, and it is also the case in academic publishing. The copyright transfer form/publisher agreement that authors sign usually means that the authors retain their moral rights to be named as the authors of the work, but sign away the rights to make any money out of it.

I argued that publishers don’t need to hold copyright to ensure commercial viability; they just need first exclusive publishing rights. We really need to sit down and look at how copyright is being used in the academic sphere – who does it protect? Not the originators of the work.

Judging by the mood in the room, the debate could have gone on for considerably longer. There is still a lot of meat on that bone. (**See the end of this blog for details of my argument).

The intermediary corner

The problem of getting books to readers

There are serious issues in the supply chain of getting books to readers, according to Dr Michael Jubb, Independent Consultant and Richard Fisher from Something Understood Scholarly Communication.

The problems are multi-pronged. For a start, discoverability of books is “disastrous” due to completely different metadata standards in the supply chain: ONIX is used for the retail trade and MARC is the standard for libraries. Neither has detailed information about authors, the contents of chapters and sections, or reviews and comments.

There are also a multitude of channels for getting books to libraries. The past few years have seen the involvement of several different kinds of intermediaries – metadata suppliers, sales agents, wholesalers, aggregators, distributors etc – who hold digital versions of books that can be supplied through the different types of book platforms. Libraries have some titles on multiple platforms but others available on only one.

There are also huge challenges around discoverability and e-commerce systems, which are “too bitty”. The most important change in books has been Amazon; publisher e-commerce, however, “has a long way to go before it is anything like as good as Amazon”.

Fisher also reminded the group that there are far more books published each year than there are journals – it’s a more complex world. He noted that about 215 [NOTE: amended from original 250 in response to Richard Fisher’s comment below] different imprints were used by British historians in the last REF. Many of these publishers are very small with very small margins.

Jubb and Fisher both emphasised readers’ strong preference for print, which implies that much more work is needed on the ebook user experience. There are ‘huge tensions’ between reader preference (print) and the drive towards e-book acquisition models at libraries.

The situation is probably best summed up in the statement that “no-one in the industry has a good handle on what works best”.

Providing efficient access management

Current access control is not fit for the world we live in today: if you ask users to jump through hoops to get access off campus, the whole system defeats its purpose. That was the central argument of Tasha Mellins-Cohen, Director of Product Development at HighWire Press, speaking about the need to improve access control.

Mellins-Cohen started with the comment “You have one identity but lots of identifiers”, noting that multiple institutional affiliations cause problems. She described the process needed to give access to an article from a library in terms of authentication – which, as an aside, clearly shows why researchers often prefer to use SciHub.

She described an initiative called CASA (Campus Activated Subscriber-Access), which records devices that have accessed content on campus through authenticated IP ranges and then allows access off campus on the same device without using a proxy. This is designed to use more modern authentication. There will be “more information coming out about CASA in the next few months”.
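The core idea is simple enough to sketch. The following is a toy illustration of the general pattern, not the real CASA implementation (whose details had not been published at the time): the IP range, device identifiers and in-memory store are all hypothetical.

```python
# Toy sketch of a CASA-style access pattern: a device seen on an
# authenticated campus IP range is remembered, and later granted
# access off campus without a proxy. All values are hypothetical.
import ipaddress

CAMPUS_RANGES = [ipaddress.ip_network("192.0.2.0/24")]  # hypothetical campus range
_known_devices: set = set()  # devices "activated" on campus

def on_campus(ip: str) -> bool:
    """True if the request comes from an authenticated campus range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CAMPUS_RANGES)

def request_access(device_id: str, ip: str) -> bool:
    """Grant access on campus (remembering the device), or off campus
    only if the device was previously seen on campus."""
    if on_campus(ip):
        _known_devices.add(device_id)  # activate the device
        return True
    return device_id in _known_devices

assert request_access("tablet-1", "192.0.2.10")       # on campus: granted, remembered
assert request_access("tablet-1", "203.0.113.5")      # same device off campus: granted
assert not request_access("laptop-9", "203.0.113.5")  # unknown device off campus: denied
```

A real system would of course use signed tokens with expiry rather than a bare set of device identifiers, but the sketch shows why no proxy is needed once a device has been activated on campus.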

Mellins-Cohen noted that tagging something as ‘free’ in the metadata improves Google indexing, and publishers need to do more of this at article level. This comment prompted a call for publishers to make information about sharing more accessible to authors through How Can I Share It?

Mellins-Cohen expressed some concern that some of the ideas coming out of RA21 (Resource Access for the 21st Century), an STM project to explore alternatives to IP authentication, will raise barriers to access for researchers.

Summary

It is always interesting to have the mix of publishers, intermediaries, librarians and others in the scholarly communication supply chain together at a conference such as this; it is rare for the different stakeholders to talk across the divide. In his summary of the event, Mark Carden noted the tension in the scholarly communication world, saying that we do need a lively debate, but we also need to show respect for one another.

So while the keynote started promisingly, and said all the things we would like to hear from the publishing industry, the reality is that we are not there yet. And this underlines the whole problem. This interweb thingy didn’t happen last week – what has actually happened to update the publishing industry in the last 20 years? Very little, it seems. However, it is not all bad news. Things to watch out for in the near future include plans for micro-payments for individual access to articles, according to Mark Allin, and the highly promising Campus Activated Subscriber Access system.

Danny Kingsley attended the Researcher to Reader conference thanks to the support of the Arcadia Fund, a charitable fund of Lisbet Rausing and Peter Baldwin.

Published 27 February 2017
Written by Dr Danny Kingsley
Creative Commons License

Copyright case study

In my presentation, I spoke about the children’s campfire song “Kookaburra sits in the old gum tree”, which was written by Melbourne schoolteacher Marion Sinclair in 1932 and first aired in public two years later as part of a Girl Guides jamboree in Frankston. Sinclair had to be prompted to go to APRA (Australasian Performing Right Association) to register the song. That was in 1975; the song had already been around for over 40 years, but she had never expressed any great interest in ownership of it.

In 1981 the Men at Work song “Down Under” made No. 1 in Australia. The song then topped the UK, Canada, Ireland, Denmark and New Zealand charts in 1982, and hit No. 1 in the US in January 1983. It sold two million copies in the US alone. When Australia won the America’s Cup in 1983, Down Under was played constantly. It seems extremely unlikely that Marion Sinclair did not hear this song. (At the conference, three people self-identified as never having heard the song when a sample of it was played.)

Marion Sinclair died in 1988 and the rights to the song passed to her estate. Norman Lurie, managing director of Larrikin Music Publishing, bought the publishing rights from the estate in 1990 for just $6,100. He then started tracking down all the chart music that had been printed around the world, because Kookaburra had been used in books for people learning flute and recorder.

In 2007 the TV show Spicks and Specks had a children’s music themed episode in which the panel were played “Down Under” and asked which Australian nursery rhyme the flute riff was based on. They eventually picked Kookaburra, all apparently genuinely surprised when the link between the two songs was pointed out.

Two years later Larrikin Music filed a lawsuit, initially seeking 60% of Down Under’s profits. Larrikin won in February 2010; Men at Work appealed, and eventually lost. The judge ordered Men at Work’s recording company, EMI Songs Australia, and songwriters Colin Hay and Ron Strykert to pay 5% of royalties earned from the song since 2002, and of its future earnings.

In the end, Larrikin won around $100,000, although legal fees on both sides have been estimated at upwards of $4.5 million, with royalties for the song frozen during the case.

Gregory Ham, the flautist in the band, played the riff. He did not write Down Under, and was devastated by the high-profile court case and his role in the proceedings. He reportedly fell back into alcohol abuse and was quoted as saying: “I’m terribly disappointed that’s the way I’m going to be remembered — for copying something.” Ham died of a heart attack in April 2012 at his Carlton North home, aged 58, with friends saying the lawsuit had been haunting him.

This case, I argued, exemplifies everything that is wrong with copyright.