By James Porter, University of Leeds, UK
As a research community we’re being urged to “open” science up like never before. Whether it’s our research results, the methods used to make sense of them, or even the underlying raw data itself, everything we do should be made freely and easily accessible to the widest possible range of people, in the widest variety of ways. Already great strides have been made. As Leonelli et al (2015) note, we’ve seen the push towards “open” access to published research results; “open” data deposited in repositories; and “open” source licences for research materials (e.g. code and models). All of this edges us closer to the ethos behind “open” science, or Science 2.0: to encourage greater equality, widen participation, and stimulate innovation.
Indeed, “open” science has already been heralded as a success, helping scientists find answers to decades-old problems. Scientists at the University of Washington, struggling to determine the structure of a protein that helps HIV multiply, turned to the developers of Foldit, an online game in which players rearrange a protein to find its most stable configuration, likely to be its natural form. Within three weeks, over 57,000 players had arrived at an answer, which was published in Nature Structural & Molecular Biology. None of this would have been possible had that research remained hidden from public view behind journal subscriptions or locked away in our ivory towers.
It’s somewhat ironic, then, that we’re being asked to make things “open” yet constantly reminded to refrain from sharing our findings prematurely. This is due in no small part to a prevailing institutional culture of publish or perish (e.g. the REF); the creep to commercialise science and lock down intellectual property or block rivals (e.g. OncoMouse); and concerns over allowing others to cast doubt or breed misunderstandings (e.g. the leaked UEA emails). How science is opened up so that it’s usable and useful, not just available; who should be doing it (early career researchers or established professors); and when research is released (before or after publication) are all tricky questions that researchers must grapple with today. “Sticks” and “carrots”, as Leonelli et al (2015) argue, may incentivise “open” science, but it’s unlikely to fully succeed unless the underlying institutional and social norms and values governing research are addressed as well. Many of these institutional and epistemic norms touch on the changing spaces of science engaged by geographers.
The UK government, for instance, has set the Met Office on a course for “open” science. In a pointed rebuttal to critics who claim that it has stifled innovation through a monopoly over meteorological and climate data, the Met Office is set to “open” things up. The once fine distinction between data used for non-commercial purposes and commercial ones is no more. Today, a new policy sorts data into one of three categories (open, research and managed), which dictates who can access it and what they can do with it (not everyone can be trusted, apparently). Making the data fit these categories ignores its hybrid, contested, and evolving nature, where it may start life as one thing but change over time as more things are added. Efforts to make the data manageable not only reflect the politics of its construction and circulation, but also the tension the Met Office faces between giving its data and services away and making money from them.
Much of the logic behind the “open” science movement shares similarities with neoliberal thinking. Will making raw data freely available via repositories reduce inequalities between the data-rich and data-poor, or simply allow those with the resources, capacity and infrastructures to increase them? Will the ability to reproduce, verify, and challenge research results bolster the status of science, à la Robert Merton, or make it harder to differentiate rigorous science from junk science, making it easier to sell for PR purposes? And does opening up research results, data and materials constitute a valuable endeavour in itself, or one that’s only realisable when equipped with the right expertise?
Yes, “open” science is certainly welcome in exposing a whole raft of cultural practices (and politics) we take for granted in academia today, and it helps us respond to the needs of the twenty-first century. But before we fully embrace “open” science we need to think critically about its politics. Critical scholars have told us time and again how neoliberalism worsens inequalities, reduces participation, and restricts innovation to only marketable products and services. We need to ask for “whom” science is being opened, how “democratic” that process is, and of course what deep-seated politics are being advanced as things get opened up. These are issues Leonelli has raised in relation to biology in the Bulletin of Science, Technology & Society, but tracing these unfolding dynamics in relation to geographical data, and in open access journals like Geo, is up to all of us.
About the author:
Dr James Porter is a Research Fellow in the School of Earth and Environment at the University of Leeds. James’ work specialises in how institutional politics shapes the production, and in turn use, of environmental knowledge for policy, through the lens of science and technology studies (STS) and the management of risk and uncertainty.
Leonelli, S. et al. (2015) Sticks and Carrots: Encouraging Open Science at its source. Geo: Geography and Environment, doi: 10.1002/geo2.2.