
Data Diversity Podcast (#4) – Dr Stefania Merlo (1/2) 

Welcome back to the fourth instalment of Data Diversity, the podcast where we speak to Cambridge University Data Champions about their relationship with research data and highlight their unique data experiences and the idiosyncrasies of their journeys as researchers. In this edition, we speak to Data Champion Dr Stefania Merlo from the McDonald Institute for Archaeological Research, the Remote Sensing Digital Data Coordinator and project manager of the Mapping Africa’s Endangered Archaeological Sites and Monuments (MAEASaM) project and coordinator of the Metsemegologolo project. This is the first of a two-part series. In this first post, Stefania shares her experiences of working with research data and outputs that form part of heritage collections, and how her thinking about research data and the role of the academic researcher has changed over the course of her projects. She also shares her thoughts on what funders can do to ensure that research participants, and the data they provide to researchers, can speak for themselves.



I’ve been thinking for a while about the etymology of the word data. Datum in Latin means ‘given’. Whereas when we are collecting data, we always say we’re “taking measurements”. Upon reflection, it has made me come to a realisation that we should approach data more as something that is given to us and we hold responsibility for, and something that is not ours, both in terms of ownership, but also because data can speak for itself and tell a story without our intervention – Dr Stefania Merlo


Data stories (whose story is it, anyway?) 

LO: How do you use data to tell the story that you want to tell? To put it another way, as an archaeologist, what is the story you want to tell and how do you use data to tell that story?

SM: I am currently working on two quite different projects. One is Mapping Africa’s Endangered Archaeological Sites and Monuments (funded by Arcadia), which aims to create an Open Access database of information on endangered archaeological sites and monuments in Africa. In the project, we define “endangered” very broadly because ultimately, all sites are endangered. We’re doing this with a number of collaborators, and the objective is to create a database that is mainly going to be used by national authorities for heritage management. There’s a little bit less storytelling there; it has more to do with intellectual property: who are the custodians of the sites and the custodians of the data? A lot of questions are asked about Open Access, which is something that the funders of the project have requested, but something that our stakeholders have a lot of issues with. The issues surround where the digital data will be stored, because currently it is stored temporarily in Cambridge. Ideally, all our stakeholders would like to see it stored on a server on the African continent at the least, if not actually in their own countries. There are a lot of questions around this.

The other project stems out of the work I’ve been doing in Southern Africa for almost the past 20 years, and is about asking: how do you articulate knowledge of the African past that is not represented in history textbooks? This is a history that is rarely taught at university and is rarely discussed. How do you avail knowledge to publics that are not academic publics? That’s where the idea came from of creating a multimedia archive and a platform where digital representations of archaeological, archival, historical, and ethnographic data could be used to put together stories that are not the mainstream stories. It is a work in progress. The datasets that we deal with are very diverse because they are needed to tell a history in a place, and in periods, for which we don’t have written sources.

It’s so mesmerizing and so different from what we do in contexts where history is written. It gives us the opportunity to put together so many diverse types of sources: from oral histories to missionary accounts (with all the issues around colonial reports and representations of others as they were perceived at the time), to information on the past environment, combined with archaeological data. We have a collective of colleagues who work in universities and museums. Each performs different bits and pieces of research, and we are trying to see how we would put together these types of datasets. How much do we curate them to avail them to other audiences? We’ve used the concept of data curation very heavily, and we use it purposefully, because there is an impression of the objectivity of data, and we know, especially as social scientists, that this just doesn’t exist.

I’ve been thinking for a while about the etymology of the word data. Datum in Latin means ‘given’. Whereas when we are collecting data, we always say we’re taking measurements. Upon reflection, it has made me come to a realisation that we should approach data more as something that is given to us and we hold responsibility for, and something that is not ours, both in terms of ownership, but also because data can speak for itself and tell a story without our intervention. That’s the kind of thinking surrounding data that we’ve been going through with the project. If data are given, our work is an act of restitution, and we should also acknowledge that we are curating the data: we are picking and choosing what we’re putting together, and in which format and framework. We are intervening a lot in the way these different records are represented so that they can be used by others to tell stories that are perhaps of more relevance to them.

So there’s a lot of work in this project that we’re doing about representation. We are explaining – not justifying, but explaining – the choices that we have made in putting together information that we think could be useful to re-create histories and tell stories. The project will benefit us because we are telling our own stories using digital storytelling, and in particular story mapping, but it could also become useful for others as a resource for telling their own stories. It’s still a work in progress because we also work in low-resourced environments. The ways in which people can access digital repositories and then use online resources are very different in Botswana and in South Africa, which are the two countries I mainly work in on this project. We also dedicate time to thinking about how useful the digital platform will be for the audiences we would like to engage.

The intended output is an archive that can be used in a digital storytelling platform. We have tried to narrow down our target audience to secondary school and early university students of history (and archaeology). We hope that the platform will eventually be used more widely, but we realised that we had to identify an audience in order to prepare the materials. We have also realised that we need to give guidance on how to use such a platform, so in the past year we have worked with museums and learnt from museum education departments about using the museum as a space for teaching and learning, where some of these materials could become useful. Teachers and museum practitioners don’t have a lot of time to create their own teaching and learning materials, so we’re trying to engage with practitioners and teachers in a way that doesn’t overburden them. For these reasons, more of the intervention of pre-packaging some of these curations needs to come from our side, but we’re trying to do it in collaboration with them so that it’s not something that is solely produced by us academics. We want this to be something that is negotiated.

As archaeologists and historians, we have expertise in a particular part of African history that the communities that live in that space may not know about and cannot know, because they were never told. They may have learned about the history of these spaces from their families and their communities, but they have learned only certain parts of the history of that land, whereas we can go much deeper into the past. So the question becomes: how do you fill the gaps in knowledge without imposing your own worldview? It needs to be negotiated, but it’s a very difficult process to establish. There is a lot of trial and error, and we still don’t have an answer.

Negotiating communities and funders 

LO: Have you ever had to navigate funders’ policies and stakeholder demands?  

SM: These kinds of projects need to be long and they need continuous funding, but they have outputs that are not always valued by funding bodies. This brings to the fore what funding bodies are interested in – is it solely data production, as it is called, and then the writing up of certain academic content? Or can we start to acknowledge that there are other ways of creating and sharing knowledge? As we know, there has been a drive, especially among UK funding bodies, to acknowledge that there are different ways in which information and knowledge are produced and shared. There are alternative ways of knowledge production, from artistic ones to creative ones and everything in between, but it is still so difficult to account for the types of knowledge production that these projects may have. When I’m reporting on projects, I still find it cumbersome and difficult to represent these types of knowledge production. There’s so much more that you need to do to justify an alternative output compared to a traditional one. I think things need to change to make it easier, not harder, for researchers who produce alternative forms of knowledge to justify their outputs compared with the mainstream.

One thing I would say is that there’s a lot we’ve learned with the (Mapping Africa’s Endangered Archaeological Sites and Monuments) project, because there we engage directly with the custodians of the sites and of the analogue data. When they realise that the funders of the project expect to have this data openly accessible, then the questions come and the pushback comes, and it’s a pushback on a variety of different levels. The consequence is that we still haven’t been able to finalise our agreements with the custodians of the data. They trust us, so they have told us that in the interim we can have the data as a project, but we haven’t been able to come to an agreement on what is going to happen to the data at the end of the project. In fact, the agreement at the moment is that the data are not going to go into a completely Open Access sphere. The negotiation now is about what they would be willing to make public, and what advantages they would have, as custodians of the data, in making part, or all, of these data public.

This has created a disjuncture: I’m sure the funders thought they were doing good by mandating that the data be Open Access, but perhaps they didn’t consider that in other parts of the world, Open Access may not be desirable, or wanted, or acceptable, for a variety of very valid reasons. It’s a knot that we still haven’t untangled, and it makes me wonder: when funders ask for Open Access, have they really thought about work outside of UK contexts, with communities outside of the UK context? Have they considered these communities’ rights to data and their right to say, “we don’t want our data to be shared”? There’s a lot of work that has happened in North America in particular, because indigenous communities are the ones that put forward the concept of C.A.R.E. (Collective benefit, Authority to control, Responsibility, Ethics), but in the UK we are still very much discussing F.A.I.R. (Findable, Accessible, Interoperable, Reusable) and not C.A.R.E. I think the funders may have started thinking about it, but we’re not quite there. There is still this impression that Open Data and Open Access are a universal good, without having considered that this may not be the case. It puts researchers who don’t work in the UK or the Global North in an awkward position. This is definitely something that we are still grappling with very heavily. My hope is that this work is going to help highlight that when it comes to Open Access, there are no universals. We should revisit these policies in light of the fact that we are interacting with communities globally, not only those in some countries of the world. Who is Open Access for? Who does it benefit? Who wants it and who doesn’t want it, and for what reasons? These are questions that we need to keep asking ourselves.

LO: Have you been in a position where you had to push back on funders or Open Access requirements before? 

SM: Not necessarily a pushback, but our funders have funded a number of similar projects in South Asia, in Mongolia, in Nepal and in the MENA region, and we have come together as a collective to discuss issues around the ethics and the sustainability of the projects. We have engaged with representatives of our funders, trying to explain that what they wanted initially, which is full Open Access, may not be practicable. In fact, there has already been a change in the terminology used by the funders: from Open Access, they changed the concept to Public Access, and they have come back to us to say that they can change their contractual terms to be more nuanced and acknowledge the fact that we are in negotiation with national stakeholders and other stakeholders about what should happen to the data. Some of this has been articulated in various meetings, but some of it was trial and error on our side. In other words, with our new proposal for renewal of funding, which was approved, we simply included these nuances in the proposal and in our commitments, and they were accepted. So in the course of the past four years, through lobbying by the funded projects, we have been able to bring nuance to the way in which the funders themselves think about Open Access.


Stay tuned for part two of this conversation where Stefania will share some of the challenges of managing research data that are located in different countries!



Towards enriched open scholarly information: integrating DSpace and OpenAlex

We are pleased to announce that, thanks to the support of the Vietsch Foundation, we will be developing an integration between DSpace repositories and OpenAlex. We are partnering with 4Science, a certified platinum DSpace provider, to deliver this project that will integrate two key systems within the global scholarly ecosystem, the DSpace repository (https://www.dspace.org/) and OpenAlex (https://openalex.org/), a free and open catalogue of the world’s scholarly research system.

Using OpenAlex’s open API (Application Programming Interface), this integration will allow for the quick import of relevant research and scholarly (meta)data into DSpace repositories, helping institutions to improve the quality and completeness of their records of research outputs and streamlining researcher publication and reporting workflows by providing accurate and relevant information in automated ways. This integration will also save time for researchers and librarians, allowing them to dedicate more of their time to research-oriented tasks.
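To give a concrete, purely illustrative sense of what such an automated import could look like, the sketch below queries the public OpenAlex works endpoint for a single record by DOI and maps a few of its fields onto DSpace-style Dublin Core metadata. The helper names (fetch_work_by_doi, to_dublin_core) and the exact field crosswalk are assumptions made for this example, not the design of the integration being built.

```python
# Illustrative sketch only: not the DSpace-OpenAlex integration itself, just an
# example of reading the open OpenAlex API and shaping the result into
# DSpace-style Dublin Core fields. The crosswalk below is an assumption.
import requests

OPENALEX_WORKS = "https://api.openalex.org/works/"


def fetch_work_by_doi(doi: str, mailto: str = "repository@example.org") -> dict:
    """Fetch one OpenAlex work record by DOI (mailto opts into the polite pool)."""
    response = requests.get(
        f"{OPENALEX_WORKS}https://doi.org/{doi}",
        params={"mailto": mailto},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


def to_dublin_core(work: dict) -> dict:
    """Map a handful of OpenAlex fields to Dublin Core-style metadata keys."""
    return {
        "dc.title": work.get("title"),
        "dc.contributor.author": [
            authorship["author"]["display_name"]
            for authorship in work.get("authorships", [])
        ],
        "dc.date.issued": work.get("publication_date"),
        "dc.identifier.doi": work.get("doi"),
        "dc.identifier.uri": (work.get("open_access") or {}).get("oa_url"),
    }


if __name__ == "__main__":
    # Example DOI taken from the OpenAlex documentation.
    record = fetch_work_by_doi("10.7717/peerj.4375")
    print(to_dublin_core(record))
```

In the finished integration the lookup would, of course, sit behind DSpace’s own import tooling rather than a standalone script, but the underlying API call would be of this shape.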

Reusing the data in OpenAlex will help institutions to improve the quality and completeness of data in institutional repositories and strengthen the wider open access network by increasing the number of versions and access points to content. Moreover, the availability of multiple (open access) copies of the materials can provide a more effective strategy for long-term preservation.

The research and scholarly publishing environment is changing rapidly and there is an increasing expectation that research findings will be shared, both among funders and policy makers, and the wider research and public community. We strongly believe that institutional research repositories and scholarly platforms play a critical role in supporting these open research practices by preserving and disseminating research findings and supporting materials produced by institutions. The solution developed in this project will greatly contribute to increasing and enhancing the availability of open and accurate information about research outputs in the wider scholarly ecosystem.

Project information at the Vietsch Foundation site: https://www.vietsch-foundation.org/projects/

Data Diversity Podcast #3 – Dr Nick H. Wise (3/4)

Welcome back to the penultimate post featuring Dr Nick H. Wise, Research Associate in Architectural Fluid Mechanics at the Department of Engineering, University of Cambridge. If you have been with us for the previous two posts, you will know that besides being a scientist and an engineer, Nick has made his name as a scientific sleuth who, according to a 2022 article on the blog Retraction Watch, is responsible for more than 850 retractions, leading Times Higher Education to dub him a research fraudbuster. Since then, through his X account @Nickwizzo, he has continued his investigations, tracking cases of fraud and, in some cases, naming and shaming the charlatans. In this four-part series, we will learn from Nick about some of the shady activities that taint the scientific publishing industry today.

In part three, we learn from Nick how researchers try to generate more citations from a single piece of research through a trick called ‘salami slicing’, and about the blurred line between illegality and desperately trying to cope with the unrealistic expectations of academia (to the point of engaging in fraud). Below are some excerpts from the conversation, which can be listened to in full here.


Citation count was once a proxy for quality and now it is citation count regardless of quality. People are only looking at the citation count, and not the actual quality. Actually assessing quality takes a lot more effort. 


‘Salami slicing’ and the Game of Citations

LO: What do you think is better for science? A slower, more thoughtful process of publishing and everything in between? Or more information, more research, but then things like fraud slip through and occur more frequently?

NW: I don’t think there’s necessarily more research. Another phenomenon that paper mills take advantage of is salami slicing. Imagine you have completed a research project. You could write this up as one thirty-page paper or two twenty-page papers. You could write those comprehensive papers, or you could try to put out multiple ten-page papers in which only some minor parameters change. I see this happening in nanofluids research because it is an area of research close to mine. A nanofluid is simply a base liquid – it might be water, it might be ethanol – into which you mix very small, nanoscale particles of some other material, such as gold, silver, or iron oxide. In this mixture of liquid and particles, you want to investigate the fluid flow and describe it with some differential equations. You can use computers to solve the differential equations and then plot some results about velocity profiles, heat transfer coefficients, etcetera. Now, you could write a paper for a given situation where you say: I’m not going to specify the liquid, but here is a general treatment in terms of the density and viscosity of the liquid. If you want to apply this to your own research, you plug in the density and viscosity of your liquid, and likewise for the particles. I’m not going to specify which particles are used, because all that changes is their density and their heat transfer properties. So that’s one way you could do it.

Another way to do it is to say: I’m going to write a paper about water and gold particles; that’s one paper. Then you can write another paper which has water and silver particles, and then you can write one with ethanol and iron oxide, and there are so many varieties. You can also vary the geometry that this flow is going around, and you can add in an electric field and a magnetic field, etcetera. You can build it up in this n-factorial way: there are thirty possible liquids multiplied by a hundred possible particles, multiplied by however many geometric configurations. You can see that this is what they are doing. Rather than writing a few quite general, comprehensive papers, they are writing hundreds of very specific papers, which enables them to produce more papers, sell more authorships, and put in more citations. But this overwhelms the system: there are still only so many peer reviewers and so many editors. And this phenomenon happens in lots of fields; they find a topic where there are just these variables that let them keep writing almost the same paper. Yet each paper is original – it has not been done before. It is incredibly derivative, but that is not necessarily a barrier to publication.

LO: What I’m getting from this is that this is part of the whole system, and the issue at hand is definitely enabled by certain motivations, like getting more citations. You can take one big piece of salami and publish it as one book, or you can slice the salami thirty ways. And if they are in a position to slice the salami, they say why not, I suppose, right? A game is there to be played.

NW: Right, they are playing the game that is in front of them. And again, there are people who do this who are not from a paper mill. They just want to maximize the number of citations and publications. The question is, why are they doing this? Why do they want to maximize their publications? Because they want a promotion, or they want a tenured job. There are also countries where you get a cash reward for publishing a paper in a good journal, so the more papers you publish, the more money you get paid. Your government might have told all the universities that they need to increase their ranking in the World University rankings. How do you do that? By increasing your research output and the citations you get. That is another driver. These drivers come from all sorts of places, but there is always an emphasis on numbers. Citation count was once a proxy for quality and now it is citation count regardless of quality. People are only looking at the citation count and not the actual quality. Assessing quality takes a lot more effort.

LO: Citations used to be a proxy for quality, but that is not the case anymore. Still, it implies the quality of the research, or so you would hope.

NW: You would hope, but only because there is an assumption that the only reason something has a lot of citations is that it is good quality. Citations are also easier to count. Quality is much harder to account for, and that incentivizes people to do things like cite their colleagues. Again, you could still track it if people from the same university were citing each other. But then you get bigger-scale things, with middlemen who organize people from across the world to cite each other, or who just do it for cash. If you are publishing and producing papers to order, each one of those papers has a reference section, which is real estate. You can throw in some genuine references which are relevant to the paper, but you can also throw in some irrelevant references that someone paid you to include. You can also pay someone to include references that are actually relevant to a topic.

LO: If it is relevant to a topic, it is almost like merely encouraging someone to be aware of certain work as opposed to a scam, which sounds like a gray area.

NW: Well, I would say that as soon as someone is paying money, then it starts to be illegitimate. But I mean if someone emails you and says “I’ve just published this paper, I think you might be interested, it’s in your research field: maybe read it or maybe you do cite it”, it’s different from someone emailing you to say “I’ll pay you £50 if you cite my paper” and you do. Then I would say that you have crossed a line. So, it does get very gray. Then there are these organized paper mills who are doing this as a business and that is where I think it becomes quite clear that it is probably not legitimate.

Facebook (authorship) marketplace

NW: You could go on Facebook and there are people selling authorship of their paper as a one off. There are PhD students in some country with no research funding who say “it costs $2500 for the article processing charge for me to publish where I would like to publish, I do not have $2500 so if you pay the $2500, you can be first author on the paper” and that is the only way they can get their paper published. They’re not doing this as a business, they’re just doing this once for this one paper. And you get people responding. Quite often professors or more established academics with access to budgets are the ones who will say yes. And the only thing that the person has done is to provide the funding for the publication.

The minimum thing that one is supposed to have done to be considered an author is to have either written the draft or reviewed and edited the paper. You might have also done data analysis or conceptualization. I think we would agree that if all this person does is just pay the fee for publication, then that is not acceptable. But what if they read the paper and then made a couple of comments? Now they have reviewed and edited it, and so now they have done review, editing and funding. There are many big labs around the world that have some very senior scientist whose name is on every single paper that comes out of the lab. And what have they done? Well, they provided all the funding, and they have reviewed the paper. I bet there are some who have barely glanced at the paper. But let’s say that they have reviewed the paper, and they provided the funding for the publication. Is that what makes it different to the person on Facebook who has found some random professor from another country to pay for their publication? Where is the difference? I don’t think it is an easy line to draw. In this way, the move to Open Access publishing requiring large fees for publication has also driven quite a bit of this phenomenon.

LO: It also seems like you have developed a bit of empathy. Maybe you’ve looked at so many cases and you see that it’s not always clear.

NW: Absolutely. Again, if you have the people running a paper mill, or if you have some professor who is being bribed and waving through dozens of papers, I don’t have much empathy for them. But the Master’s or PhD student who has been told that they have to publish papers to get their PhD or even a Master’s, who has this demand placed on them, or who has produced a paper but needs all this money to get it published – I don’t blame them for what they’re doing. It’s the situation they’ve been placed in. It is the system that they are part of. I have a lot of empathy for them.


Look out for the final post coming next week, where we get Nick’s take on what he thinks should be the repercussions for engaging in fraud, and we get a parting tip from Nick on what researchers should do when performing a literature search on papers in their field.