Thinking about the governance of data

I find “governance” to be a tricky word. Particularly when we’re talking about the governance of data.

For example, I’ve experienced conversations with people from a public policy background and people with a background in data management, where it’s clear that there are different perspectives. From a policy perspective, governance of data could be described as the work that governments do to enforce, encourage or enable an environment where data works for everyone. That’s slightly different from the work that organisations do to ensure that data is treated as an asset, which is how I tend to think about organisational data governance.

These aren’t mutually exclusive perspectives. But they operate at different scales with a different emphasis, which I think can sometimes lead to crossed wires or missed opportunities.

As another example, reading this interesting piece on open data governance recently, I found myself wondering about that phrase: “open data governance”. Does it refer to the governance of open data? Being open about how data is governed? The use of open data in governance (e.g. as a public policy tool)? Or the role of open data in demonstrating good governance (e.g. through transparency)? I think the article touched on all of these, but they seem like quite different things. (Personally, I’m not sure there is anything special about the governance of open data as opposed to data in general: open data isn’t special.)

Now, all of the above might be completely clear to everyone else and I’m just falling into my usual trap of getting caught up on words and meanings. But picking away at definitions is often useful, so here we are.

The way I’ve rationalised the different data management and public policy perspectives is in thinking about the governance of data as a set of (partly) overlapping contexts. Like this:

 

Governance of data as a set of overlapping contexts

 

Whenever we are managing and using data we are doing so within a nested set of rules, processes, legislation and norms.

In the UK our use of data is bounded by a number of contexts. This includes, for example: legislation from the EU (currently!), legislation from the UK government, rules defined by regulators, best practices that might define how a sector operates, our norms as a society and community, and then the governance processes that apply within our specific organisations, departments and even teams.

Depending on what you’re doing with the data, and the type of data you’re working with, different contexts might apply. The obvious one being the use of personal data. As data moves between organisations and countries, different contexts will apply, but we can’t necessarily ignore the broader contexts in which it already sits.

The narrowest contexts, e.g. those within an organisation, will focus on questions like: “how are we managing dataset XYZ to ensure it is protected and managed to a high quality?” The broadest contexts are likely to focus on questions like: “how do we safely manage personal data?”

Narrow contexts define the governance and stewardship of individual datasets. Wider contexts guide the stewardship of data more broadly.

What the above diagram hopefully shows is that data, and our use of data, is never free from governance. It’s just that the terms under which it is governed may be very loosely defined.

This terrible sketch I shared on Twitter a while ago shows another way of looking at this: the laws, permissions, norms and guidelines that define the context in which we use data.

Data use in context

One of the ways in which I’ve found this “overlapping contexts” perspective useful is in thinking about how data moves into and out of different contexts. For example when it is published or shared between organisations and communities. Here’s an example from this week.

IBM have been under fire because they recently released (or re-released) a dataset intended to support facial recognition research. The dataset was constructed by linking to public and openly licensed images already published on the web, e.g. on Flickr. The photographers, and in some cases the people featured in those images, are unhappy about the photographs being used in this new way. In this new context.

In my view, the IBM researchers producing this dataset made two mistakes. Firstly, they didn’t give proper consideration to the norms and regulations that apply to this data — the broader contexts which inform how it is governed and used, even though it’s published under an open licence. For example, people’s expectations about how photographs of them will be used.

An open licence helps data move between organisations — between contexts — but doesn’t absolve anyone from complying with all of the other rules, regulations, norms, etc that will still apply to how it is accessed, used and shared. The statement from Creative Commons helps to clarify that their licenses are not a tool for governance. They just help to support the reuse of information.

This led to IBM’s second mistake. By creating a new dataset they took on responsibility as its data steward. And being a data steward means having a well-defined set of data governance processes that are informed and guided by all of the applicable contexts of governance. But they missed some things.

The dataset included content that was created by, and features, individuals. Their failure to engage with the community of contributors to discuss norms and expectations was a mistake. The lack of good tools to allow people to remove photos — NBC News created a better tool to allow Flickr users to check the contents of the dataset — is also a shortfall in their duties. It’s the combination of these that has led to the outcry.

If IBM had instead launched an initiative in which they built this dataset collaboratively with the community, then they could have avoided this issue. This is the approach that Mozilla took with Common Voice. IBM, and the world, might even have had a better dataset as a result, because people might have opted in to include more photos. This is important because, as John Wilbanks has pointed out, the market isn’t creating these fairer, more inclusive datasets. We need them to create an open, trustworthy data ecosystem.

Anyway, that’s one example of how I’ve found thinking about the different contexts of governing data helpful in understanding how to build stronger data infrastructure. What do you think? Am I thinking about this all wrong? What else should I be reading?

 

Impressions from pidapalooza 19

This week I was at the third pidapalooza conference in Dublin. It’s a conference dedicated to open identifiers: how to create them, steward them, drive adoption and promote their benefits.

Anyone who has spent any time reading this blog or following me on Twitter will know that this is a topic close to my heart. Open identifiers are infrastructure.

I’ve separately written up the talk I gave on documenting identifiers to help drive adoption and spur the creation of additional services. I had lots of great nerdy discussions around URIs, identifier schemes, compact URIs, standards development and open data. But I wanted to briefly capture and share a few general impressions.

Firstly, while the conference topic is very much my thing, and the attendees were very much my people (including a number of ex-colleagues and collaborators), I was approaching the event from a very different perspective to the majority of other attendees.

Pidapalooza as a conference has been created by organisations from the scholarly publishing, research and archiving communities. Identifiers are a key part of how the integrity of the scholarly record is maintained over the long term. They’re essential to support archiving and access to a variety of research outputs, with data being a key growth area. Open access and open data were very much in evidence.

But I think I was one of only a few attendees (perhaps the only one?) from what I’ll call the “broader” open data community. That wasn’t a complete surprise but I think the conference as a whole could benefit from a wider audience and set of participants.

If you’re working in and around open data, I’d encourage you to go to pidapalooza, submit some talk ideas and consider sponsoring. I think that would be beneficial for several reasons.

Firstly, in the pidapalooza community, the idea of data infrastructure is just a given. It was refreshing to be around a group of people that was past debating whether data is infrastructure and was instead focusing on how to build, govern and drive adoption of that infrastructure. There are a lot of lessons there that are more generally applicable.

For example, I went to a fascinating talk about how EIDR, an identifier for movie and television assets, has helped to drive digital transformation in that sector. Persistent identifiers are critical to digital supply chains (Netflix, streaming services, etc.). There are lessons here for other sectors around the benefits of wider sharing of data.

I also attended a great talk by the Australian Research Data Commons that reviewed the ways in which they were engaging with their community to drive adoption and best practices for their data infrastructure. They have a programme of policy change, awareness raising, skills development, community building and culture change which could easily be replicated in other areas. It paralleled some of the activities that the Open Data Institute has carried out around its sector programmes like OpenActive.

The need for transparent governance and long-term sustainability were also frequent topics. As was the recognition that data infrastructure takes time to build. The technology is easy; it’s growing a community and building consensus around an approach that takes time.

(btw, I’d love to spend some time capturing some of the lessons learned by the research and publishing community, perhaps as a new entry to the series of data infrastructure papers that the ODI has previously written. If you’d like to collaborate with or sponsor the ODI to explore that, then drop me a line?)

Secondly, the pidapalooza community seems to have generally accepted (with a few exceptions) the importance of web identifiers and open licensing of reference data. But that practice is still not widely adopted in other domains. Few of the identifiers I encounter in open government data, for example, are well documented, openly licensed or supported by a range of APIs and services.

Finally, much of the focus of pidapalooza was on identifying research outputs and related objects: papers, conferences, organisations, datasets, researchers, etc. I didn’t see many discussions around the potential benefits and consequences of use of identifiers in research datasets. Again, this focus follows from the community around the conference.

But as the research, data science and machine-learning communities begin exploring new approaches to increase access to data, it will be increasingly important to explore the use of standard identifiers in that context. Identifiers have a clear role in helping to integrate data from different sources, but there are wider risks around data privacy, and ethical considerations around identification of individuals, for example, that will need to be addressed.

I think we should be building a wider community of practice around use of identifiers in different contexts, and I think pidapalooza could become a great venue to do that.

Talk: Documenting Identifiers for Humans and Machines

This is a rough transcript of a talk I recently gave at a session at Pidapalooza 2019. You can view the slides from the talk here. I’m sharing my notes for the talk here, with a bit of light editing. I’d also really welcome your thoughts and feedback on this discussion document.

At the Open Data Institute we think of data as infrastructure. Something that must be invested in and maintained so that we can maximise the value we get from data. For research, to inform policy and for a wide variety of social and economic benefits.

Identifiers, registers and open standards are some of the key building blocks of data infrastructure. We’ve done a lot of work to explore how to build strong, open foundations for our data infrastructure.

A couple of years ago we published a white paper highlighting the importance of openly licensed identifiers in creating open ecosystems around data. We used that to introduce some case studies from different sectors and to explore some of the characteristics of good identifier systems.

We’ve also explored ways to manage and publish registers. “Register” isn’t a word that I’ve encountered much in this community. But it’s frequently used to describe a whole set of government data assets.

Registers are reference datasets that provide unique and/or persistent identifiers for things, and data about those things. The datasets of metadata that describe ORCIDs and DOIs are registers. So are lists of doctors, countries and locations where you can get your car taxed. We’ve explored different models for stewarding registers and ways to build trust around how they are created and maintained.
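At its simplest, a register is a lookup from identifiers to data about the things they identify. Here’s a minimal sketch in Python; the entries are invented for illustration:

```python
# A register in miniature: unique identifiers mapped to data about
# the things they identify. Entries here are illustrative.
COUNTRY_REGISTER = {
    "GB": {"name": "United Kingdom", "status": "current"},
    "FR": {"name": "France", "status": "current"},
    "SU": {"name": "USSR", "status": "retired"},  # retired, not deleted
}

def lookup(code):
    """Resolve an identifier to its entry, or None if unknown."""
    return COUNTRY_REGISTER.get(code)
```

Note that the sketch keeps retired entries rather than deleting them: identifiers in a register should stay resolvable even when the thing they identify no longer exists.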

In the work I’ve done and the conversations I’ve been involved with around identifiers, I think we tend to focus on two things.

The first is persistence. We need identifiers to be persistent in order to be able to rely on them enough to build them into our systems and processes. I’ve seen lots of discussion about the technical and organisational foundations necessary to ensure identifiers are persistent.

There’s also been great work and progress around giving identifiers affordance. Making them actionable.

Identifiers that are URIs can be clicked on in documents and emails. They can be used by humans and machines to find content, data and metadata. Where identifiers are not URIs, there are often resolvers that help to integrate them with the web.
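A resolver of this kind can be sketched in a few lines of Python. The prefix-to-URL templates below are illustrative, though they follow the patterns used by the real DOI and ORCID resolvers:

```python
# Sketch of a simple identifier resolver: expand a compact
# "prefix:value" identifier into a dereferenceable URL.
# The mapping below is illustrative.
RESOLVERS = {
    "doi": "https://doi.org/{}",
    "orcid": "https://orcid.org/{}",
}

def resolve(compact_id):
    """Turn e.g. 'doi:10.1000/182' into a URL, or raise if unknown."""
    prefix, _, value = compact_id.partition(":")
    if prefix not in RESOLVERS:
        raise ValueError(f"unknown prefix: {prefix}")
    return RESOLVERS[prefix].format(value)
```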

Persistence and affordance are both vital qualities for identifiers that will help us build a stronger data infrastructure.

But lately I’ve been thinking that there should be more discussion and thought put into how we document identifiers. I think there are three reasons for this.

Firstly, identifiers are boundary objects. As we increase access to data, by sharing it between organisations or publishing it as open data, an increasing number of data users and communities are likely to encounter these identifiers.

I’m sure everyone in this room knows what a DOI is (aside: they did). But how many people know what a TOID is? (Aside: none of them did.) TOIDs are a national identifier scheme. There’s a TOID for every geographic feature on Ordnance Survey maps. As access to OS data increases, more developers will be introduced to TOIDs and could start using them in their applications.

As identifiers become shared between communities, it’s important that the context around how those identifiers are created and managed is accessible, so that we can properly interpret the data that uses them.

Secondly, identifiers are standards. There are many different types of standard. But they all face common problems of achieving wide adoption and impact. Getting a sector to adopt a common set of identifiers is a process of agreement and implementation. Adoption is driven by engagement and support.

To help drive adoption of standards, we need to ensure that they are well documented, so that users can understand their utility and benefits.

Finally identifiers usually exist as part of registers or similar reference data. So when we are publishing identifiers we face all the general challenges of being good data publishers. The data needs to be well described and documented. And to meet a variety of data user needs, we may need a range of services to help people consume and use it.

Together I think these different issues can lead to additional friction that can hinder the adoption of open identifiers. Better documentation could go some way towards addressing some of these challenges.

So what documentation should we publish around identifier schemes?

I’ve created a discussion document to gather and present some thoughts around this. Please have a read and leave your comments and suggestions on that document. For this presentation I’ll just talk through some of the key categories of information.

I think these are:

  • Descriptive information that provides the background to a scheme, such as what it’s for, when it was created, examples of it being used, etc
  • Governance information that describes how the scheme is managed, who operates it and how access is managed
  • Technical notes that describe the syntax and validation rules for the scheme
  • Operational information that helps developers understand how many identifiers there are, when and how new identifiers are assigned
  • Service pointers that signpost to resolvers and other APIs and services that help people use or adopt the identifiers

I take it pretty much as a given that this type of important documentation and metadata should be machine-readable in some form. So we need to approach all of the above in a way that can meet the needs of both human and machine data users.
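As a sketch of what that machine-readable documentation might contain, here is one field or section for each of the categories above. The scheme, field names and values are invented for illustration, not a proposed standard:

```json
{
  "name": "Example identifier scheme",
  "description": "Identifiers for widgets in a hypothetical registry",
  "examples": ["EX00000001"],
  "governance": {
    "operator": "Example Organisation",
    "licence": "https://creativecommons.org/licenses/by/4.0/"
  },
  "technical": {
    "pattern": "^EX[0-9]{8}$"
  },
  "operational": {
    "assignment_policy": "sequential, assigned on registration"
  },
  "services": {
    "resolver": "https://example.org/id/{identifier}"
  }
}
```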

Before jumping into bike-shedding around formats, there are a few immediate questions to consider:

  • how do we make this metadata discoverable, e.g. from datasets and individual identifiers?
  • are there different use cases that might encourage us to separate out some of this information into separate formats and/or types of documentation?
  • what services might we build off the metadata?
  • …etc

I’m interested to know whether others think this would be a useful exercise to take further. And also the best forum for doing that. For example, should there be a W3C community group or similar that we could use to discuss and publish some best practice?

Please have a look at the discussion document. I’m keen to learn from this community. So let me know what you think.

Thanks for listening.

Talk: Tabular data on the web

This is a rough transcript of a talk I recently gave at a workshop on Linked Open Statistical Data. You can view the slides from the talk here. I’m sharing my notes for the talk here, with a bit of light editing.

At the Open Data Institute our mission is to work with companies and governments to build an open trustworthy data ecosystem. An ecosystem in which we can maximise the value from use of data whilst minimising its potential for harmful impacts.

An important part of building that ecosystem will be ensuring that everyone — including governments, companies, communities and individuals — can find and use the data that might help them to make better decisions and to understand the world around them.

We’re living in a period where there’s a lot of disinformation around. So the ability to find high quality data from reputable sources is increasingly important. Not just for us as individuals, but also for journalists and other information intermediaries, like fact-checking organisations.

Combating misinformation, regardless of its source, is an increasingly important activity. To do that at scale, data needs to be more than just easy to find. It also needs to be easily integrated into data flows and analysis. And the context that describes its limitations and potential uses needs to be readily available.

The statistics community has long had standards and codes of practice that help to ensure that data is published in ways that help to deliver on these needs.

Technology is also changing. The ways in which we find and consume information are evolving. Simple questions are now being directly answered from search results, or through agents like Alexa and Siri.

New technologies and interfaces mean new challenges in integrating and using data. This means that we need to continually review how we are publishing data. So that our standards and practices continue to evolve to meet data user needs.

So how do we integrate data with the web? To ensure that statistics are well described and easy to find?

We’ve actually got a good understanding of basic data user needs. Good quality metadata and documentation. Clear licensing. Consistent schemas. Use of open formats, etc, etc. These are consistent requirements across a broad range of data users.

What standards can help us meet those needs? We have DCAT and Data Packages. Schema.org Dataset metadata, and its use in Google dataset search, now provides a useful feedback loop that will encourage more investment in creating and maintaining metadata. You should all adopt it.

And we also have CSV on the Web. It does a variety of things which aren’t covered by some of those other standards. It’s a collection of W3C Recommendations that describe a model for annotated tabular data, define a metadata vocabulary for describing tables and their columns, and specify how to convert CSV into JSON and RDF.

The primer provides an excellent walk through of all of the capabilities and I’d encourage you to explore it.

One of the nice examples in the primer shows how you can annotate individual cells or groups of cells. As you all know, this capability is essential for statistical data, because statistical data is rarely just tabular: it’s usually decorated with lots of contextual information that is difficult to express in most data formats. Users of data need this context to properly interpret and display statistical information.
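As a sketch of what this looks like, here is a minimal CSV on the Web metadata file that describes a table’s columns and attaches a note to one cell. It is loosely based on the primer’s examples; the filename, column names and note body are invented, and the exact annotation vocabulary should be checked against the primer:

```json
{
  "@context": "http://www.w3.org/ns/csvw",
  "url": "observations.csv",
  "notes": [{
    "type": "Annotation",
    "target": "observations.csv#cell=3,2",
    "body": "Provisional figure, subject to revision."
  }],
  "tableSchema": {
    "columns": [
      {"name": "area", "titles": "Area", "datatype": "string"},
      {"name": "count", "titles": "Count", "datatype": "integer"}
    ]
  }
}
```

The `#cell=3,2` fragment uses the RFC 7111 scheme for addressing parts of a CSV file.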

Unfortunately, CSV on the Web is still not that widely adopted, even though it’s relatively simple to implement.

(Aside: several audience members noted they are using it internally in their data workflows. I believe the Office for National Statistics is also moving to adopt it.)

This might be because of a lack of understanding of some of the benefits it provides. Or that those benefits are limited in scope.

There also aren’t a great many tools that currently support CSV on the Web.

It might also be that there are actually some other missing pieces of data infrastructure that are blocking us from making best use of CSV on the Web and other similar standards and formats. Perhaps we need to invest further in creating open identifiers to help us describe statistical observations, e.g. so that we can clearly describe what type of statistics are being reported in a dataset.

But adoption could be driven from multiple angles. For example:

  • open data tools, portals and data publishers could start to generate best practice CSVs. That would be easy to implement
  • open data portals could also readily adopt CSV on the Web metadata, most already support DCAT
  • standards developers could adopt CSV on the Web as their primary means of defining schemas for tabular formats

Not everyone needs to implement or use the full set of capabilities. But with some small changes to tools and processes, we could collectively improve how tabular data is integrated into the web.

Thanks for listening.

UnINSPIREd: problems accessing local government geospatial data

This weekend I started a side project which I plan to spend some time on this winter. The goal is to create a web interface that will let people explore geospatial datasets published by the three local authorities that make up the West of England Combined Authority: Bristol City Council, South Gloucestershire Council and Bath & North East Somerset Council.

Through Bath: Hacked we’ve already worked with the council to publish a lot of geospatial data. We’ve also run community mapping events and created online tools to explore geospatial datasets. But we don’t have a single web interface that makes it easy for anyone to explore that data and perhaps mix it with new data that they have collected.

Rather than build something new, which would be fun but time consuming, I’ve decided to try out TerriaJS. It’s an open source, web-based mapping tool that is already being used to publish the Australian National Map. It should handle the West of England quite comfortably. It’s got a great set of features and can connect to existing data catalogues and endpoints. It seems to be perfect for my needs.

I decided to start by configuring the datasets that are already in the Bath: Hacked Datastore, the Bristol Open Data portal, and data.gov.uk. Every council also has to publish some data via standard APIs as part of the INSPIRE regulations, so I hoped to be able to quickly bring in a list of existing datasets without having to download and manage them myself.

Unfortunately this hasn’t proved as easy as I’d hoped. Based on what we’ve learned so far about the state of geospatial data infrastructure in our project at the ODI I had reasonably low expectations. But there’s nothing like some practical experience to really drive things home.

Here’s a few of the challenges and issues I’ve encountered so far.

  • The three councils are publishing different sets of data. Why is that?
  • The dataset licensing isn’t open and looks to be inconsistent across the three councils. When is something covered by INSPIRE rather than the PSMA end user agreement?
  • The new data.gov.uk “filter by publisher” option doesn’t return all datasets for the specified publisher. I’ve reported this as a bug, in the meantime I’ve fallen back on searching by name
  • The metadata for the datasets is pretty poor, and there is little supporting documentation. I’m not sure what some of the datasets are intended to represent. What are “core strategy areas”?
  • The INSPIRE service endpoints do include metadata that isn’t exposed via data.gov.uk. For example this South Gloucestershire dataset includes contact details, data on geospatial extents, and format information which isn’t otherwise available. It would be nice to be able to see this and not have to read the XML.
  • None of the metadata appears to tell me when the dataset was last updated. The last modified date on data.gov.uk is (I think) the date the catalogue entry was last updated. Are the Section 106 agreements listed in this dataset from 2010, or are they regularly updated? How can I tell?
  • Bath is using GetMapping to host its INSPIRE datasets. Working through them on data.gov.uk I found that 46 out of the 48 datasets I reviewed have broken endpoints. I’m reasonably certain these used to work. I’ve reported the issue to the council.
  • The two datasets that do work in Bath cannot be used in TerriaJS. I managed to work around the fact that they require a username and password to access but have hit a wall because the GetMapping APIs only seem to support EPSG:27700 (British National Grid) and not EPSG:3857 as used by online mapping tools. So the APIs refuse to serve the data in a way that can be used by the framework. The Bristol and South Gloucestershire endpoints handle this fine. I assume this is either a limitation of the GetMapping service or a misconfiguration. I’ve asked for help.
  • A single Web Mapping Service can expose multiple datasets as individual layers. But apart from Bristol, both Bath and South Gloucestershire are publishing each dataset through its own API endpoint. I hope the services they’re using aren’t charging per endpoint, as the extra endpoints are probably unnecessary. Bristol has chosen to publish a couple of APIs that bring together several datasets, but these are also available individually through separate APIs.
  • The same datasets are repeated across data catalogues and endpoints. Bristol has its data listed as individual datasets in its own platform, listed as individual datasets in data.gov.uk and also exposed via two different collections which bundle some (or all?) of them together. I’m unclear on the overlap or whether there are differences between them in terms of scope, timeliness, etc. The licensing is also different. Exploring the three different datasets that describe allotments in Bristol, only one actually displayed any data in TerriaJS. I don’t know why.
  • The South Gloucestershire web mapping services all worked seamlessly, but I noticed that if I wanted to download the data, then I would need to jump through hoops to register to access it. Obviously not ideal if I do want to work with the data locally. This isn’t required by the other councils. I assume this is a feature of MisoPortal
  • The South Gloucestershire datasets don’t seem to include any useful attributes for the features represented in the data. When you click on the points, lines and polygons in TerriaJS, no additional information is displayed. I don’t know yet whether this data just isn’t included in the dataset or if it’s a bug in the API or in how TerriaJS is requesting it. I’d need to download or explore the data in some other way to find out. However, the data that is available from Bath and Bristol also has inconsistencies in how it’s described, so I suspect there aren’t any agreed standards
  • Neither the GetMapping nor MisoPortal APIs support CORS. This means you can’t access the data from JavaScript running directly in the browser, which is what TerriaJS does by default. I’ve had to configure those services to be accessed via a proxy. “Web mapping services” should work on the web.
  • While TerriaJS doesn’t have a plugin for OpenDataSoft (which powers the Bristol Open Data platform), I found that OpenDataSoft does provide a Web Feature Service interface, so I was able to configure TerriaJS to access the data that way. Unfortunately I then found that either there’s a bug in the platform or a problem with the data, because most of the points were in the Indian Ocean
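The CORS workaround mentioned above amounts to putting a small pass-through service in front of the mapping APIs. A minimal sketch using only the Python standard library, with a hypothetical upstream endpoint, looks something like this:

```python
# Minimal sketch of a CORS-enabling proxy for a service that doesn't
# send the necessary headers itself. The upstream URL is hypothetical.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://example-wms.invalid"  # hypothetical mapping service

def cors_headers(headers):
    """Copy upstream response headers and add the one the browser needs."""
    out = dict(headers)
    out["Access-Control-Allow-Origin"] = "*"
    return out

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request path to the upstream service and relay
        # its response, with the CORS header added.
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            for key, value in cors_headers(resp.headers).items():
                self.send_header(key, value)
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()
```

A browser-based client like TerriaJS can then be pointed at the proxy instead of the service itself.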

The goal of the INSPIRE legislation was to provide a common geospatial data infrastructure across Europe. What I’m trying to do here should be relatively quick and easy to do. Looking at this graph of INSPIRE conformance for the UK, everything looks rosy.

But, based on an admittedly small sample of only three local authorities, the reality seems to be that:

  • services are inconsistently implemented and have not been designed to be used as part of native web applications and mapping frameworks
  • metadata quality is poor
  • there is inconsistent detail about features which makes it hard to aggregate, use and compare data across different areas
  • it’s hard to tell the provenance of data because of duplicated copies of data across catalogues and endpoints. Without modification or provenance information, it’s unclear whether data is up to date
  • licensing is unclear
  • links to service endpoints are broken. At best, this leads to wasted time from data users. At worst, there’s public money being spent on publishing services that no-one can access

It’s important that we find ways to resolve these problems. As this recent survey by the ODI highlights, SMEs, startups and local community groups all need to be able to use this data. Local government needs more support to help strengthen our geospatial data infrastructure.

The building blocks of data infrastructure – Part 2

This is the second part of a two part post looking at the building blocks of data infrastructure. In part one we looked at definitions of data infrastructure, and the first set of building blocks: identifiers, standards and registers. You should read part one first and then jump back in here.

We’re using the example of weather data to help us think through the different elements of a data infrastructure. In our fictional system we have a network of weather stations installed around the world. The stations are feeding weather observations into a common database. We’ve looked at why it’s necessary to identify our stations, the role of standards, and the benefits of building registers to help us manage the system.

Technology

Technology is obviously part of data infrastructure. In part one we have already introduced several types of technology.

The sensors and components that are used to build the weather stations are also technologies.

The data standards that define how we organise and exchange data are technologies.

The protocols that help us transmit data, like WiFi or telecommunications networks, are technologies.

The APIs that are used to submit data to the global database of observations, or which help us retrieve observations from it, are also technologies.

Unfortunately, I often see some mistaken assumptions that data infrastructure is only about the technologies we use to help us manage and exchange data.

To use an analogy, this is a bit like focusing on tarmac and kerb stones as the defining characteristics of our road infrastructure. These materials are both important and necessary, but are just parts of a larger system. If we focus only on technology it’s easy to overlook the other, more important building blocks of data infrastructure.

We should be really clear when we are talking about “data infrastructure”, which encompasses all the building blocks we are discussing here, and when we are talking about “infrastructure for data”, which focuses just on the technologies we use to collect and manage data.

Technologies evolve and become obsolete. Over time we might choose to use different technologies in our data infrastructure.

What’s important is choosing technologies that ensure our data infrastructure is as reliable, sustainable and as open as possible.

Organisations

Our data infrastructure is taking shape. We now have a system that consists of weather stations installed around the world, reporting local weather observations into a central database. That dataset is the primary data asset that we will be publishing from our data infrastructure.

We’ve explored the various technologies, data standards and some of the other data assets (registers) that enable the collection and publishing of that data.

We’ve not yet considered the organisations that maintain and govern those assets.

The weather stations themselves will be manufactured and installed by many different organisations around the world. Other organisations might offer services to help maintain and calibrate stations after they are installed.

A National Meteorological Service might take on responsibility for maintaining the network of stations within its nation’s borders. The scope of their role will be defined by national legislation and policies. But a commercial organisation might also choose to take on responsibility for running a collection of stations.

In our data infrastructure, the central database of observations will be curated and managed by a single organisation. The (fictional) Global Weather Office. Our Global Weather Office will do more than just manage data assets. It also has a role to play in choosing and defining the data standards that support data collection. And it helps to certify which models of weather station conform to those standards.

Organisations are a key building block of data infrastructure. The organisational models that we choose to govern a data infrastructure and which take responsibility for its sustainability, are an important part of its design.

The value of the weather observations comes from their use. E.g. as input into predictive models to create weather forecasts and other services. Many organisations will use the observation data provided by our data infrastructure to create a range of products and services. E.g. national weather forecasts, or targeted advice for farmers that is delivered via farm management systems. The data might also be used by researchers. Or by environmental policy-makers to inform their work.

Mapping out the ecosystem of organisations that operate and benefit from our data infrastructure will help us to understand the roles and responsibilities of each organisation. It will also help clarify how and where value is being created.

Guidance and Policies

With so many different organisations operating, governing and benefiting from our data infrastructure we need to think about how they are supported in creating value from it.

To do this we will need to produce a range of guidance and policies, for example:

  • Documentation for all of the data assets that helps to put them in context, allowing them to be successfully used to create products and services. This might include notes on how we have collected our data, the standards used, and locations of our stations.
  • Recommendations for how data should be processed and interpreted to ensure that weather forecasts that use the data are reliable and safe
  • Licences that define how the data assets can be used
  • Documentation that describes the data governance processes that are applied to the data assets
  • Policies that define how organisations gain access to the data infrastructure, e.g. to start supplying data from new stations
  • Policies that decide how, when and where new stations might be added to the global network, to ensure that global coverage is maintained
  • Procurement policies that define how stations, and the services that relate to them, are purchased
  • National regulations that apply to manufacture of weather stations, or that set safety standards that apply when they are installed or serviced
  • …etc

Guidance and policies are an important building block that help to shape the ecosystem that supports and benefits from our data infrastructure.

A strong data infrastructure will have policies and governance that will support equitable access to the system. Making infrastructure as open as possible will help to ensure that as many organisations as possible have the opportunity to use the assets it provides, and have equal opportunities to contribute to its operation.

Community

Why do we collect weather data? We do it to help create weather forecasts, monitor climate change and a whole host of other reasons. We want the data to be used to make decisions.

Many different people and organisations might benefit from the weather data we are providing. A commuter might just want to know whether to take an umbrella to work. A farmer might want help in choosing which crops to plant. Or an engineer planning a difficult construction task may need to know the expected weather conditions.

Outside of the organisations who are directly interacting with our data infrastructure there will be a number of communities, made up of both individuals and organisations who will benefit from the products and services made with the data assets it provides. Communities are the final building block of our data infrastructure.

These communities will be relying on our data infrastructure to plan their daily lives, activities and to make business decisions. But they may not realise it. Good infrastructure is boring and reliable.

In his book on the social value of infrastructure, Brett Frischmann refers to infrastructure as “shared means to many ends”. Governing and maintaining infrastructure requires us to recognise this variety of interests and make choices that balance a variety of needs.

The choices we make about who has access to our data infrastructure, and how it will be made sustainable, will be important in ensuring that value can be created from it over the long-term.

Reviewing our building blocks

To summarise, our building blocks of data infrastructure are:

  • Identifiers
  • Standards
  • Registers
  • Technology, of various kinds
  • Organisations, who create, maintain, govern and use our infrastructure
  • Guidance and Policies that inform its use
  • Communities who benefit from or are affected by it

The building blocks differ in scale and complexity. Identifiers are a well-understood technical concept. Organisations, policies and communities are more complex, and perhaps less well-defined.

Understanding their relationships, and how they benefit from being more open, requires us to engage in some systems thinking. By identifying each building block I hope we can start to have deeper conversations about the systems we are building.

Over time we might be able to tease out more specific building blocks. We might be able to identify important organisational roles that occur as repeated patterns across different types of infrastructure. Or specific organisational models that have been found to be successful in creating trusted, sustainable infrastructures. Over time we might also identify key types of policy and guidance that are important elements of ensuring that a data infrastructure is successful. These are research questions that can help us refine our understanding of data as infrastructure.

There are other aspects of data infrastructure which we have not explicitly explored here. For example ethics and trust. This is because ethics is not a building block. It’s a way of working that will enable fairer, equitable access to data infrastructure by a variety of communities. Ethics should inform every decision and every activity we take to design, build and maintain our data infrastructure.

Trust is also not a building block. Trust emerges from how we operate and maintain our data infrastructures. Trust is earned, rather than designed into a system.

Help me make this better

I’ve written these posts to help me work through some of my thoughts around data infrastructure. There’s a lot more to be said about the different building blocks. And choosing other examples, e.g. that focus on data infrastructure that involves sharing of personal data like medical records, might better highlight some different characteristics.

Let me know what you think of this breakdown. Is it useful? Do you think I’ve missed some building blocks? Leave a comment or tweet me your thoughts.

Thanks to Peter Wells and Jeni Tennison for feedback and suggestions that have helped me write these posts.

The building blocks of data infrastructure – Part 1

Data is a vital form of infrastructure for our societies and our economies. When we think about infrastructure we usually think of physical things like roads and railways.

But there are broader definitions of infrastructure that include less tangible things. Like ideas or the internet.

It is important to recognise that there is more to “infrastructure” than just roads and railways. Otherwise there is a risk that we, as a society, won’t invest the necessary time or effort in building, maintaining and governing that infrastructure. The decisions we make about infrastructure are important because infrastructure helps to shape our societies.

To help explore the idea of data as infrastructure, I want to look at the various building blocks that make up a specific example of a data infrastructure. My hope is that this will help to make it clearer that “data infrastructure” is about more than just technology. As we will see the technical infrastructure we use to manage data is just one component of data infrastructure.

The example we will use is greatly simplified and is partly fictionalised, but it is essentially a real example: we’re going to look at weather data infrastructure.

I’ve written about weather data infrastructure before. It’s a really interesting example to explore in this context because:

  • it’s easy to understand the value of collecting weather data
  • it’s a complex enough example to help dig into some real-world issues
  • it illustrates how data that is collected and used locally or nationally can also be part of a global data infrastructure

Weather data is usually open data, or at least public. But the building blocks we will outline here apply equally well to data from across the data spectrum. In a follow-up post I may explore a more complex example that illustrates a different type of data infrastructure, e.g. one for medical research that relies on researchers having access to medical records.

In the following sections we’ll look at the different building blocks that are important in building a global weather data infrastructure. The real infrastructure is much more complex.

Some of the building blocks are a bit fuzzy and have multiple roles to play in our infrastructure. But that’s fine. The world isn’t a neat and tidy place that we can always reduce to simpler components.

A definition of data infrastructure

Before we begin, let’s introduce a definition of data infrastructure:

A data infrastructure consists of data assets, the standards and technologies that are used to curate and provide access to those assets, the guidance and policies that inform their use and management, the organisations that govern the data infrastructure, and the communities involved in maintaining it, or are impacted by decisions that are made using those data assets.

There are a lot of moving parts there. And there are lots of things to say about each of them. For now let’s focus on the individual building blocks to explore ways in which they fit together.

Identifiers

Imagine we’re planning to build a global network of weather stations. Each station will be regularly recording the local temperature and rainfall. In our system we’ll be collecting all of these readings into a global dataset of weather observations.

So that we know which observations have been reported by which weather station, we need a unique reference for each of them.

We can’t just use the name of the town or village in which the station has been installed as that reference. There are Birminghams in both the UK and the US, for example. We might also need to move and reinstall weather stations over time, but may need to track information about them, such as when they were installed or serviced. So we need a global identifier that is more reliable than just a name.

By assigning each weather station a unique identifier, we can then attach additional data to it. Like its current location. We can also associate the identifier with every temperature and rainfall observation, so that we know which station reported that data.
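To make this concrete, here is a minimal sketch in Python. The identifier scheme (`WS-000123`) and the field names are entirely illustrative, not a real standard; the point is that observations reference a station by its identifier, never by a place name.

```python
# Illustrative only: attaching data to unique station identifiers.
# The "WS-..." identifier scheme and field names are hypothetical.

stations = {
    "WS-000123": {"location": "Birmingham, UK", "installed": "2024-03-01"},
    "WS-000456": {"location": "Birmingham, AL, US", "installed": "2024-05-12"},
}

# Each observation carries the station identifier, not a place name,
# so the two Birminghams can never be confused.
observations = [
    {"station_id": "WS-000123", "temp_c": 11.5, "rainfall_mm": 0.2},
    {"station_id": "WS-000456", "temp_c": 24.0, "rainfall_mm": 0.0},
]

for obs in observations:
    meta = stations[obs["station_id"]]
    print(obs["station_id"], meta["location"], obs["temp_c"])
```

Because the identifier is the join key, we can relocate a station (updating its register entry) without breaking the link to its historical observations.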

Identifiers are the first building block of our data infrastructure.

Identifiers are deceptively simple. They’re just a number or a code, right? But there’s a lot to say about them, such as how they are assigned or are formatted. It can be hard to create good identifiers.

When identifiers are open, for anyone to use in their data, they have a role to play that goes beyond just providing unique references in a database. They can also help to create network effects that encourage publication of additional data.

Standards, part 1

Our weather stations are recording temperature and rainfall. We’ll measure temperature in degrees Centigrade and rainfall in millimetres. Both of these are standard units of measurement.

Standards are our second building block.

Standards are documented, reusable agreements. They help us collect and organise data in consistent ways, and make it easier to work with data from different sources.

Some standards, like units of measurement, are global and are used in many different ways. But some standards might only be relevant to specific communities or systems.

In our weather data infrastructure, we will need to standardise some other aspects of how we plan to collect weather data.

For example, let’s assume that our weather stations are recording data every half an hour. Every thirty minutes a station will record a new temperature reading. But is it recording the temperature at that specific moment in time, or should it report the average temperature over the last thirty minutes? There may be advantages in doing one or the other.

If we don’t standardise some of our data collection practices, then weather stations created by different manufacturers might record data differently. This will affect the quality of our data.  
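The difference between the two reporting conventions is easy to see in a small sketch. The sample values below are hypothetical one-minute temperature readings taken over a thirty-minute window:

```python
# Illustrative only: two ways a station could report a half-hourly temperature.
# The samples are hypothetical one-minute readings over a 30-minute window.
samples = [11.2, 11.4, 11.9, 12.1, 11.8, 11.5]

instantaneous = samples[-1]              # the value at the reporting moment
averaged = sum(samples) / len(samples)   # the mean over the whole window

print(f"instantaneous: {instantaneous:.1f} C")
print(f"averaged: {averaged:.2f} C")
```

Two stations applying different conventions would report different numbers for the same weather, which is exactly why the choice needs to be written into the standard rather than left to each manufacturer.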

Standards, part 2

Every data infrastructure will rely on a wide variety of different standards. Some standards support consistent measurement and data collection. Others help us to exchange data more effectively.

Our weather stations will need to record the data they collect and automatically upload it to a service that helps us build our global database. In a real system there are a number of different ways in which we might want a weather station to report data, to provide a variety of ways in which it could be aggregated and reused. But to simplify things, we’ll assume they just upload their data to a centralised service. Centralised data collection is problematic for a number of reasons, but that’s a topic for another article.

To help us define how the weather stations will upload their data we will need to pick a standard data format that will define the syntax for recording data in a machine-readable form. Let’s assume that we decide to use a simple CSV (comma-separated values) format.

Each station will produce a CSV file that contains one row for every half-hourly observation. Each row will consist of a station identifier, a time stamp for the recordings, a temperature reading and a rainfall reading.

The time stamps can be recorded using ISO 8601, which is an international standard for formatting dates and times. Helpfully we can include time zones, which will be essential for reporting time accurately across our global network of weather stations.

We also need to ensure that the order in which the four fields will be reported is consistent, or that the headers in the CSV file clearly identify what is contained in each column. Again, we might be using weather stations from multiple manufacturers and need data to be recorded consistently. Some stations might also include additional sensors, e.g. to record wind speed. So ideally our standard will be extensible to support that additional data. Taking time to design and standardise our CSV format will make data aggregation easier.
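Here is a sketch of what one row of that format might look like, using Python's standard `csv` module and an ISO 8601 timestamp with an explicit UTC offset. The header names are hypothetical; a real standard would pin them down precisely.

```python
import csv
import io
from datetime import datetime, timezone

# Hypothetical column names for our half-hourly observation format.
FIELDS = ["station_id", "timestamp", "temperature_c", "rainfall_mm"]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "station_id": "WS-000123",
    # ISO 8601 with an explicit time zone offset (here, UTC)
    "timestamp": datetime(2024, 6, 1, 12, 30, tzinfo=timezone.utc).isoformat(),
    "temperature_c": 11.5,
    "rainfall_mm": 0.2,
})

print(buffer.getvalue())
```

Because the header row names each column, an extended station could append extra columns (say, `wind_speed_ms`) without breaking consumers that only read the four core fields.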

Every time we define how to collect, manage or share data within a system, we are creating agreements that will help ensure that everyone involved in those processes can be sure that those tasks are carried out in consistent ways. When we reuse existing standards, rather than creating bespoke versions, we can benefit from the work of thousands of different specialists across a variety of industries.

Sometimes though we do need to define a new standard, like the order of the columns in our specific type of CSV file. But where possible we should approach this by building on existing standards as much as possible.

Registers

To help us manage our network of weather stations it will be useful to record where each of them has been installed. It would also be helpful to record when they were installed. Then we can figure out when they might need to be re-calibrated or replaced and send some out to do the necessary work.

To do this, we can create a dataset that lists the identifier, location, model and installation date of every weather station.

This type of dataset is called a register.

Registers are lists of important data. They have multiple uses, but are most frequently used to help us improve the quality of our data reporting.

For example we can use the above register to confirm that we’re regularly receiving data from every station on the network. When a station is installed it will need to get added to the register. We might give the company installing the station permission to do that, to help us maintain the register.

We can also use the register to determine if we have a good geographic spread of stations, to help us assess and improve the coverage and quality of the observations we’re collecting. The register is also useful for anyone using our global dataset so they can see how the dataset has been collected over time. Registers should be as open as possible.
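A simple completeness check like the one described above can be sketched in a few lines. The station identifiers are illustrative, and a real register would hold full records rather than bare IDs:

```python
# Illustrative only: using the station register to spot stations
# that have not reported recently. Identifiers are hypothetical.
register = {"WS-000123", "WS-000456", "WS-000789"}

# Station IDs seen in the latest batch of observations
reported = {"WS-000123", "WS-000789"}

# Any registered station with no recent data may need investigating
missing = register - reported
print("stations with no recent data:", sorted(missing))
```

The same register-versus-observations comparison works in the other direction too: an observation from an identifier that is not in the register is a signal that something is wrong with the data feed.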

There are other types of register that might be useful for governing our data infrastructure. For example we might create a register that lists all of the models of weather station that have been certified to comply with our preferred data standards.

We can use that register to help us make decisions about how to replace stations when they fail. A register can also help provide an incentive for the manufacturers of weather stations to conform to our chosen standards. If they’re not on the list, then we might not buy their products.

In Part 2 of this post we’ll look at other aspects of data infrastructure, including technology, organisations and policies. Thanks to Peter Wells and Jeni Tennison for feedback and suggestions that have helped me write these posts.