Open Research in the Humanities: Research Evaluation 

Authors: Emma Gilby, Matthias Ammon, Rachel Leow and Sam Moore

This is the sixth and final post in a series presenting the reflections of the Working Group on Open Research in the Humanities. Read the opening post here. The working group aimed to reframe open research in a way that is more meaningful to humanities disciplines, and its work will inform the University of Cambridge approach to open research. This post discusses opportunities and challenges for research evaluation in the arts and humanities. The direction of travel in the Open Research discussion is away from any straightforward use of metrics in research evaluation. This shift works strongly in favour of the arts and humanities.

Opportunities 

Researchers in the arts and humanities have never used metrics in the way their STEM colleagues do. This is partly because of the slower pace of publishing and the in-depth editorial process (18–24 months from submission to publication might be considered standard), and partly because ‘citation indices’ are less relevant when one’s contribution is to be part of a broad, ongoing cultural conversation rather than to generate data from scratch (see above, on CORE data). So the diversification of research evaluation enshrined in DORA (https://sfdora.org), and the questioning of the uncritical use of metrics and altmetrics by administrators, grant funders and promotion committees, are positive developments. They allow for a general move away from established academic platforms and formats, as discussed above.

Support required 

Some pressing questions about research evaluation remain, which might account for the perceived hostility in some quarters towards a move away from established academic platforms. Who is doing the work of reading and assessing these multiple new formats? How do we evaluate success? Is success measured in terms of ‘reach’ – the number of Twitter followers or blog readers, for instance? This would take us down the route of clickbait and skew evaluation towards already-popular, English-language material; this is a particular danger with processes designed to evaluate web traffic.

What guidance is available for established academics looking to credit colleagues for their social media contributions in particular? An anxiety often expressed is that ‘there is a lot of rubbish on the internet’. How do we sift through it? In general, research evaluation takes time and effort, and there is a sense that this work needs to be properly measured and quantified. For example, if an evaluator spends 20 minutes per CV on 60 CVs, that is 20 hours of work before one even gets to reading and evaluating the actual outputs. In the context of a busy teaching term, such additional labour is barely possible and contributes to a general sense of stress within the profession. Looking for a kind of shorthand that allows swift and accurate evaluation of a wide range of (possibly unfamiliar) formats is therefore the pragmatic approach.

The discussion of narrative CVs in the DORA context implies an amalgamation of the traditional CV and the cover letter. In our institution, as no doubt in others, it would be useful to have some HR guidance here on what appointment panels should ask for (e.g. no cover letter, but a paragraph each on a candidate’s three main research achievements?).

There was a feeling in the working group that Cambridge is perhaps behind other universities that make ‘open research’ a category for assessment in itself, and that guide their employment panels and candidates accordingly. Indeed, Cambridge’s traditional division of the criteria for promotion into the discrete categories of ‘research’, ‘teaching’ and ‘general contribution’ seems actively to work against the whole idea of ‘open research’. It suggests, unhelpfully, that ‘service to the community’, ‘teaching’ and ‘research’ do not overlap. This division seems to have survived the recent overhaul of the academic promotions exercise.
