
Mapping the world through data – The November 2023 Data Champion Forum 

The November Data Champion forum was a geography and geospatial data themed edition of the bi-monthly gathering, this time hosted by the Physiology department. As usual, the Data Champions in attendance were treated to two presentations. Up first was Martin Lucas-Smith from the Department of Geography, who introduced the audience to the OpenStreetMap (OSM) project, a global community mapping project built on crowdsourcing. Just as Wikipedia does for textual information, OSM produces a worldwide map created by everyday people who map the world themselves. The resulting maps vary in focus: the transport map shows public transport routes such as railways, buses and trams worldwide, while the humanitarian map is an initiative dedicated to humanitarian action through open mapping. Martin is personally involved in a project called CycleStreets which, as the name implies, uses open mapping of bicycle infrastructure. The Department of Geography uses OSM as a background for its Cambridge Air Photos website. Projects like these, Martin highlighted, demonstrate how communities form around open data.

CycleStreets: Martin at the November 2023 Data Champion Forum

In his presentation, Martin explained the mechanics of OSM, such as its data structure, how the maps are edited, and how the data can be used in systems like routing engines. Editing the maps, and the decision-making processes behind how a path is represented visually on the map, is where the OSM community comes into play. The data in OSM consists primarily of geometric points (called ‘Nodes’) and lines (called ‘Ways’), coupled with tags that hold metadata as key-value pairs, but the norms for how this information is defined can only come about through consensus within the OSM community. This is perhaps different from the more formal database structures that might be employed within corporate efforts such as Google’s. Because of its widespread crowdsourced nature, OSM tends to be more detailed than other maps for less well-served communities such as people cycling or walking, and its metadata is richer, as the maps are created by people who are intimately familiar with the areas they are mapping. A map by users, for users.
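To make the Node/Way/tag model concrete, here is a minimal sketch that queries OSM through the public Overpass API and prints the tags attached to nearby cycleways. The Overpass service was not part of the talk, so treat the endpoint, the query and the coordinates as illustrative assumptions rather than a description of Martin’s own tooling.

```python
# A minimal sketch (illustrative only): querying OpenStreetMap via the public
# Overpass API to show the Node/Way/tag model in practice.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Overpass QL: fetch ways tagged as cycleways within ~500 m of central Cambridge.
query = """
[out:json][timeout:25];
way["highway"="cycleway"](around:500,52.2053,0.1218);
out geom;
"""

response = requests.post(OVERPASS_URL, data={"data": query}, timeout=30)
response.raise_for_status()

for element in response.json()["elements"]:
    # Each Way is an ordered list of Nodes; its tags are community-agreed key-value metadata.
    tags = element.get("tags", {})
    start = element["geometry"][0]  # coordinates returned because of 'out geom'
    print(element["type"], element["id"],
          tags.get("name", "<unnamed>"),
          f"starts at ({start['lat']:.5f}, {start['lon']:.5f})")
```

Routing engines and renderers consume exactly this kind of tagged geometry, which is why the community’s consensus on tagging conventions matters so much.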

Next up was Dr Rachel Sippy, a Research Associate in the Department of Genetics, who presented on how geospatial data factors into epidemiological research. In her work, the questions of ‘who’, ‘when’, and ‘where’ a disease outbreak occurred are important, and it is the ‘where’ that gives her research a geographical focus. Maps, however, are often not detailed enough to provide information about an outbreak of disease among a population or community: a map can only mark out the incident site, the place, whereas the spatial context of that place, which she denotes as space, is equally important in understanding disease outbreaks.

Of ‘Space’ and ‘Place’: Rachel at the November 2023 Data Champion forum

It can be difficult, however, to understand what a researcher is measuring and what types of data can be used to measure space and/or place. Spatial data, as Rachel pointed out, can be difficult to work with, and the researcher has to decide whether spatial data is a burden or fundamental to understanding a disease outbreak in a particular setting. Rachel discussed several aspects of spatial data that she has considered in her research, such as visualisation techniques, data sources and methods of analysis. Each comes with its own set of challenges, and researchers have to navigate them to decide how best to tell the fundamental story that answers the research question. This essentially comes down to an act of curation of spatial data; as Rachel pointed out, quoting Mark Monmonier, “not only is it easy to lie with maps, it’s essential”. In doing so, researchers working with spatial data have to navigate the political and cultural hierarchies that are explicitly and implicitly inherent to places, and any ethical considerations relating to both the human and non-human (animal) inhabitants of those geographical locations. Ultimately, how data owners choose to model spatial data will affect the analysis, and with it, its utility for public health.
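As a small illustration of how such modelling choices play out in practice, the sketch below aggregates the same set of case points to two different sets of administrative boundaries; the file names, column names and boundary levels are assumptions for illustration and not drawn from Rachel’s own work.

```python
# A minimal sketch (hypothetical files and columns): the same outbreak case points
# aggregated to two different spatial units, showing how the choice of unit shapes
# the rates a public health analysis would report.
import geopandas as gpd
import pandas as pd

cases = pd.read_csv("cases.csv")  # assumed columns: id, lon, lat
cases = gpd.GeoDataFrame(
    cases,
    geometry=gpd.points_from_xy(cases.lon, cases.lat),
    crs="EPSG:4326",
)

for boundaries_file in ("districts.geojson", "wards.geojson"):  # two candidate 'places'
    areas = gpd.read_file(boundaries_file).to_crs(cases.crs)
    # Inner spatial join: cases falling outside every boundary are dropped.
    joined = gpd.sjoin(cases, areas, predicate="within")
    counts = joined.groupby("index_right").size()
    areas["cases"] = counts.reindex(areas.index, fill_value=0)
    # A 'population' column is assumed to exist in each boundary file.
    areas["rate_per_1000"] = 1000 * areas["cases"] / areas["population"]
    print(boundaries_file, areas["rate_per_1000"].describe(), sep="\n")
```

Running the same points through coarser or finer units can tell quite different stories about where the outbreak is concentrated, which is the curation decision the paragraph above describes.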

After lunch, Martin and Rachel held a combined Q&A session, and a discussion emerged around the topic of subjectivity. Rachel was asked about mapping and subjectivity, since her description of place, which included the socio-cultural meanings and personal preferences of a place’s inhabitants, could be considered subjective. Rachel agreed and referred back to her presentation, where she mentioned that these aspects of mapping can get fuzzy, as researchers have to deal with matters relating to identity, political affiliations and personal opinions, such as how safe an individual may feel in a particular place. Martin added that in the OSM project the data must be as objective as possible, yet the maps themselves are subjective views of that objective data.

Rachel and Martin answering questions from the Data Champions at the November 2023 forum

Martin also pointed out that maps are contested spaces, because spaces can be political in nature. Rachel added that maps sometimes fail to represent the contested nature of her field sites, something she only learned through time in the field. In this way, context is very important for “real mapping”. As an example, Martin discussed his “UK collision data” map, created outside the University, which shows where road collisions have happened, giving the example of one of central Cambridge’s busiest streets, Mill Road: without contextual information such as what time the collisions occurred, what vehicles were involved, and the environmental conditions at the time of each accident, a collision map may not be that valuable. Finally, it was asked whether ethnographic research could provide useful data for the act of mapping, and the speakers agreed that it could.

US requirements for public access to research

Niamh Tumelty, Head of Open Research Services, Cambridge University Libraries

Yesterday it was announced that the White House Office of Science and Technology Policy has updated US policy guidance to make the results of taxpayer-supported research immediately available to the American public at no cost:
https://www.whitehouse.gov/ostp/news-updates/2022/08/25/ostp-issues-guidance-to-make-federally-funded-research-freely-available-without-delay/

Federal agencies have been asked to update their public access policies to make publications and supporting data publicly accessible without an embargo. This applies to all federal agencies (the previous policy only applied to those with more than $100 million in annual research and development expenditure) and allows flexibility for the agencies to decide on some of the details while encouraging alignment of approaches. It applies to all peer-reviewed research articles in journals, with the potential to also cover peer-reviewed book chapters, editorials and peer-reviewed conference proceedings.

The emphasis on “measures to reduce inequities of, and access to, federally funded research and data” is particularly important in light of the serious risk that we simply move from a broken system with built-in inequities around access to information to a new broken system with built-in inequities around whose voices can be heard. Active engagement will be needed to ensure that the agencies take these issues into account and do not contribute to these inequities.

While there will be a time lag in the development, updating and implementation of agency policies, and we don’t yet have the fine print around licences etc., this will bring requirements for US researchers more closely into line with what many of our researchers already need to do as a result of, for example, UKRI and Wellcome Trust policies. Closer alignment should help address some of the collaborator issues that have arisen following the recent cOAlition S policy updates – though of course a lot will depend on the detail of what each agency puts in place. Researchers in receipt of US federal funding need to engage now if they would like to influence the approach taken by those who fund their work.

There continues to be a very real question around sustainable business models from both publisher and institutional perspectives, alongside the other big questions around whether current approaches to scholarly publishing are serving the needs of researchers adequately. It is essential that this doesn’t just become an additional cost for researchers or institutions, as many of those who have commented in the past 24 hours fear. Many alternatives to the APC and transitional agreement/big deal approaches have been proposed, from diamond approaches through to completely reimagined approaches to publishing (e.g. Octopus).

There will be mixed feelings about this. While there is likely to be little sympathy for the publishers with the widest profit margins, this move is sure to push more of the smaller publishers, including many (but not all!) learned societies, to think differently. We need to ensure that we understand what researchers most value about these publishers and how to preserve those aspects in whatever comes in future – I am reminded of the thought-provoking comments from our recent working group on open research in the humanities on this topic.

These are big conversations that were already underway and will now take on greater urgency. The greatest challenge of all remains how to change the research culture such that researchers can have confidence in sharing their work and expertise in ways that maximise access while also aligning with their (differing!) values and priorities.

Open Research in the Humanities: CORE Data

Authors: Emma Gilby, Matthias Ammon, Rachel Leow and Sam Moore

This is the third in a series of blog posts presenting the reflections of the Working Group on Open Research in the Humanities. Read the opening post at this link. The working group aimed to reframe open research in a way that was more meaningful to humanities disciplines, and its work will inform the University of Cambridge’s approach to open research. This post reflects on the concept of FAIR data and proposes an alternative way of thinking about data in the humanities.

As a rule, data in the arts and humanities is collected, organised, recontextualised and explained (CORE). We are therefore putting forward this acronym as an alternative to FAIR data (findable, accessible, interoperable, reusable). Our data is collected rather than generated; organised and recontextualised in order to further a cultural conversation about discoveries, methods and debates; and explained as part of the analytical process. Any view of scholarly comms as being solely about the distribution of and access to FAIR data (‘from my bench to yours’) will seem less relevant to A&H academics. Similarly, the goal of reproducibility of data – in the sense in which this often appears in the sciences and social sciences, where it refers to the results of a study being perfectly replicable when the study is repeated – is, if anything, contrary to the aim of CORE data: that this data should be built upon and thereby modified through the process of further recontextualisation. Our CORE data, then, understood as information used for reference and analysis, is made up of texts, music, pictures, fabrics, objects, installations, performances, etc. Sometimes this information does not belong to us, but is owned by another person, institution or community, in which case it is not ours to make public.

Opportunities

The A&H tend to bring information together in new ways to further discussion about socio-cultural developments across the globe. Available digital data is only the tip of the iceberg when it comes to the material that is worked with.[1] Arts and humanities scholars, who spend their lives thinking about the arrangement and communication of information, are acutely aware that archives (digital and otherwise) are not neutral spaces, but man-made and the product of human choices. This means that information available online, to a broadband-enabled public, is asymmetrical and distorted.

One of the main benefits of open research is that it is thought to make data globally accessible, especially to ‘the global south’ and to institutions with fewer funds available to ‘buy data in’. As we explore below (‘research integrity’), this unidirectional view of open access is problematic. In general, digital material tends to reproduce English-speaking structures and epistemologies. As FAIR data is redefined as CORE data, attention to context will hopefully promote the diverse positions occupied by all those who make up the world and who produce research about it.

Support required

In order usefully to employ CORE data in the A&H, we need to bring to the surface and examine underlying assumptions about knowledge creation as well as knowledge dissemination.

The work of the digital humanities – rooted explicitly in digital technologies and the forms of communication that they enable – is obviously a vital part of these discussions about opening up the CORE data of the humanities. Digital work, in the same way as any other successful A&H research, needs to consider its own materiality and conditions of production, evaluate its own history, draw attention to its own limits, and navigate its trans-temporal relationships with data in other forms (the manuscript, the printed text, the painting, the piece of music). This is a developing field and one that still has an uneasy relationship with the existing tenure/promotions system.[2] Colleagues noted that training needs are evolving constantly, and it is often hard to know where to turn for specific guidance on, for example, how to manage one’s own ‘born digital’ archives, how to deconstruct a Twitter archive, and so on.

This issue also overlaps with the need, as part of the ‘rewards and incentives’ process outlined below, to evaluate the success of colleagues as they undertake this training and negotiate these processes. DH is one of the most exciting and rapidly developing areas of research and needs to be widely resourced. But it would also be harmful to collapse all A&H research into ‘the digital humanities’. The work of colleagues whose CORE data is resistant, for whatever reason, to wide online dissemination in English also needs to be allocated the value it deserves: some publics are simply smaller than others.

Postscript: the group subsequently became aware of the CARE Principles of Indigenous Data Governance. These principles will also be considered when developing our services in support of data management and ethical sharing.


[1] Erzsébet Tóth-Czifra, ‘The Risk of Losing the Thick Description: Data Management Challenges Faced by the Arts and Humanities in the Evolving FAIR Data Ecosystem’, in Digital Technology and the Practices of Humanities Research, edited by Jennifer Edmond (Open Book Publishers, 2020), https://doi.org/10.11647/OBP.0192.10

[2] See the excellent article by Cait Coker and Kate Ozment, ‘Building the Women in Book History Bibliography, or Digital Enumerative Bibliography as Preservation of Feminist Labor’, Digital Humanities Quarterly 13 (3), 2019, http://www.digitalhumanities.org/dhq/vol/13/3/000428/000428.html – where the authors of the ‘Women in Book History’ digital bibliography still see the tenure system as ‘monograph-driven’, and had to fund their research through selling merchandise.