What is a Dataset? Part 2: A Working Definition

A few years ago I wrote a post called “What is a Dataset?”. It lists a variety of the definitions of “dataset” used by different communities and standards. What I didn’t do was give my own working definition of dataset. I wanted to share that here, along with a few additional thoughts on some related terms.

Answering the right question

I’ve noticed that often, when people ask for a definition of “dataset”, it’s for one of two reasons.

The first occurs when they’re actually asking a different question: “What is data?” Here I usually try to avoid getting into a lengthy discussion around data, facts, information and knowledge, and instead focus on providing examples of datasets: databases, spreadsheets, sensor readings, and collections of documents, images and video. This helps to get across that these days almost everything is data; it just depends how you process it.

The second occurs when someone is trying to decide how to turn an existing database, or some other collection of data, into a “dataset” they can publish on their website, in a portal, or via an API. Answering this question involves a number of other questions. For example:

  • Is a dataset a single data file?
    • Answer: Not necessarily, it could be several files that have been split up for ease of production or consumption
  • Is a database one dataset or several?
    • Answer: It depends. Sometimes a database might be a single dataset, but sometimes it might be better published as several smaller datasets. You’ll often need to strip personal or commercially sensitive data anyway, so what you publish is unlikely to be exactly what you’ve got in your database. But you might decide to publish a collection of different data files (e.g. one per table) packaged together in some way. This might be best if someone will always want to consume the whole thing, e.g. to create a local copy of your database
  • Are there reasons why a single larger collection of data might be broken up into different datasets?
    • Answer: Yes, if it makes it easier for people to access and use the data. Or maybe there are regular updates, each of which is a separate dataset
  • If a database contains data from different sources, should it be published as several different datasets?
    • Answer: It depends. If you’ve created a useful aggregation, then publishing it as a whole makes sense as a user can access the whole thing. Ditto if you’ve corrected, fixed or improved some third-party data. But sometimes you might just want to release whatever new data you’ve added or created, and let people find other datasets that you reference or reuse by providing a link to the original versions
  • …etc

There are no hard and fast answers. Like everything around publishing open data, you need to take into account a number of different factors.

A working definition

Bringing this together, I’ve ended up with the following rough working definition of “dataset”:

A dataset is a collection of data that is managed using the same set of governance processes, has a shared provenance and shares a common schema

By requiring a common set of governance processes, we group together data that has the same level of quality assurance, security and other policies. By requiring a shared provenance, we focus on data that has been collected in similar ways, which means it will have similar licensing and rights issues. Sharing a common schema means that the data is consistently expressed.
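
To make those three tests a bit more concrete, here’s a minimal sketch in Python. The class and field names are my own invention for illustration; they don’t come from any standard:

```python
from dataclasses import dataclass

# Hypothetical record of the three properties in the working
# definition; the names are illustrative, not from any standard.
@dataclass
class DatasetProfile:
    governance: str  # e.g. "quarterly QA review, open licence"
    provenance: str  # e.g. "collected by council inspectors since 2015"
    schema: str      # e.g. URL of a machine-readable schema

def same_dataset(a: DatasetProfile, b: DatasetProfile) -> bool:
    """Two collections of data belong to the same dataset only if
    they share governance, provenance and schema."""
    return a == b
```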

To test this out:

  • If you produce a set of official statistics, each annual release is a new dataset, because the data has been collected and processed at different times
  • A database of images and comments that users have made against them would probably be best released as two datasets: one containing the images (& their metadata) and another containing the comments. Images and comments are two different types of object, collected and managed in different ways
  • A set of food hygiene ratings collected by different councils across the UK consists of multiple datasets. Data on each local area will have been collected at different times by different organisations. Publishing them separately allows users to take just the data they need, when it’s updated
  • …etc

There are always exceptions to any rule, but I’ve found this reasonably useful in practice, as it highlights some important considerations. But I’m pretty sure it can be improved. Let me know if you have comments.

This post is part of a series called “basic questions about data”.


The Lego Analogy

I think Lego is a great analogy for understanding the importance of data standards and registers.

Lego have been making plastic toys and bricks since the late 1940s. It took them a little while to perfect their designs, but since 1958 they’ve been manufacturing bricks in the same way, to the same basic standard. This means that you can take any bricks manufactured over the last 59 years and they’ll fit together. As a company, they have extremely high standards around how their bricks are manufactured: only 18 in a million are ever rejected.

A commitment to standards maximises the utility of all of the bricks that the company has ever produced.

Open data standards apply the same principle but to data. By publishing data using common APIs, formats and schemas, we can start to treat data like Lego bricks. Standards help us recombine data in many, many different ways.

There are now many more types and shapes of Lego brick than there used to be. The Lego standard colour palette has also evolved over the years. The types and colours of bricks have changed to reflect the company’s desire to create a wider variety of sets and themes.

If you look across all of the different sets that Lego have produced, you can see that some basic pieces are used very frequently. A number of these pieces are “plates” that help to connect other bricks together. If you ask a Master Lego Builder for a list of their favourite pieces, you’ll discover the same. Elements that help you connect other bricks together in new and interesting ways are the most popular.

Registers are small, simple datasets that play the same role in the data ecosystem. They provide a means for us to connect datasets together. A way to improve the quality and structure of other datasets. They may not be the most excitingly shaped data. Sometimes they’re just simple lists and tables. But they play a very important role in unlocking the value of other data.
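
As a toy illustration of that connecting role, here’s a sketch in Python. The codes are illustrative and the figures are invented for the example:

```python
# A small register of local authority codes. It's just a simple
# lookup table, but it lets independent datasets be joined reliably.
register = {
    "E06000022": "Bath and North East Somerset",
    "E06000023": "Bristol, City of",
}

# Two hypothetical datasets that reference the register's codes
# rather than spelling out (and inevitably misspelling) the names.
recycling_rates = {"E06000022": 57.9, "E06000023": 45.1}   # invented %
populations = {"E06000022": 193_000, "E06000023": 463_000}  # invented

# The register is the "plate" that clips the two datasets together.
for code, name in register.items():
    print(f"{name}: {recycling_rates[code]}% recycling, "
          f"population {populations[code]:,}")
```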

So there we have it, the Lego analogy for standards and registers.

Mapping wheelchair accessibility: how Google could help

This month Google announced a new campaign to crowd-source information on wheelchair accessibility. It will be asking the Local Guides community of volunteers to begin answering simple questions about the wheelchair accessibility of places that appear on Google Maps. Google already crowd-sources a lot of information from volunteers. For example, it asks them to contribute photos, add reviews and validate the data it’s displaying to users of its mapping products.

It’s great to see Google responding to requests from wheelchair users for better information on accessibility. But I think they can do better.

There are many projects exploring how to improve accessibility information for people with mobility issues, and how to use data to increase mobility. I’ve recently been leading a project in Bath that is using a service called Wheelmap to crowd-source wheelchair accessibility information for the centre of the city. Over two Saturday afternoons we’ve mapped 86% of the city. Crowd-sourcing is a great way to collect this type of information and Google has the reach to really take this to another level.

The problem is that the resulting data is only available to Google. Displaying the data on Google Maps will put it in front of millions of people, but that data could potentially be reused in a variety of other ways.

For example, for the Accessible Bath project we’re now able to explore accessibility information based on the type of location. This may be useful for policy makers, helping to shape support and investment in local businesses to improve accessibility across the city. Bath is a popular tourist destination, so it’s important that we’re accessible to all.

We’re able to do this because Wheelmap stores all of its data in OpenStreetMap. We have access to all of the data our volunteers collect and can use it in combination with the rich metadata already in OpenStreetMap. And we can also start to combine it with other information, e.g. data on the ages of buildings, which may yield more insight.

As we learnt in our meetings with local wheelchair users and stroke survivors, mobility and accessibility issues are tricky to address. Road and pavement surfaces and types of dropped kerbs can impact you differently depending on your specific needs. Often you need more data and more context from other sources to provide the necessary support. Like Google, we’re starting with wheelchair accessibility because that’s the easiest problem to begin to address.

To improve routing, for example, you might need data on terrain, or to be able to identify the locations and sizes of individual disabled parking spaces. Microsoft’s Cities Unlocked project is combining accessibility and location data from OpenStreetMap with Wikipedia entries to help blind users navigate a city. They chose OpenStreetMap as their data source because of its flexibility, existing support for accessibility information and rapid updates. This type of innovation requires greater access to raw data, not just data on a map.

By collecting and displaying data only on its own maps, Google is not maximising the value of the contributions made by its Local Guides community. If the data they collected was published under an open licence, it could be used in many other projects. By improving its maps, Google is addressing a specific set of user needs. By opening up the data it could let more people address more user needs.

If Google felt they were unable to publish the data under an open licence, they could at least make the data available to OpenStreetMap contributors to support their mapping events. This type of limited licensing is already being used by Microsoft, DigitalGlobe and others to make commercial satellite imagery available to the OpenStreetMap community. While restrictive licensing is not ideal, allowing the data to be used to improve open databases, without the need to worry about IP issues, is a useful step forward from keeping the data locked down.

Another form of support that Google could offer is to extend Schema.org to allow accessibility information to be associated with Places. By incorporating this into Google Maps and then openly publishing or sharing that data, it would encourage more organisations to publish this information about their locations.
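
To illustrate what that could look like, here’s a hypothetical JSON-LD snippet, built as a Python dict. Note that `wheelchairAccessible` is not an existing Schema.org property; it stands in for whatever such an extension would actually define:

```python
import json

# Hypothetical markup for a Place carrying accessibility data.
# "wheelchairAccessible" is NOT a real Schema.org property; it's a
# placeholder for what an extension might look like.
place = {
    "@context": "https://schema.org",
    "@type": "Place",
    "name": "Example Cafe",
    "wheelchairAccessible": "partial",  # e.g. "yes" / "partial" / "no"
}

print(json.dumps(place, indent=2))
```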

But I find it hard to think of good reasons why Google wouldn’t make this data openly available. I think its Local Guides community would agree that they’re contributing in order to help make the world a better place. Ensuring that the data can be used by anyone, for any purpose, is the best way to achieve that goal.

Under construction

It’s been a while since I posted a more personal update here. But, as I announced this morning, I’ve got a new job! I thought I’d write a quick overview of what I’ll be doing and what I hope to achieve.

I’ve been considering giving up freelancing for a while now. I’ve been doing it on and off since 2012, when I left Talis. Freelancing has given me a huge amount of flexibility to take on a mixture of different projects. Looking back, there are a lot of projects I’m really proud of. I’ve worked with the Ordnance Survey, the British Library and the Barbican. I helped launch a startup which is now celebrating its fifth birthday. And I’ve had far too much fun working with the ONS Digital team.

I’ve also been able to devote time to helping lead a plucky band of civic hackers in Bath. We’ve run free training courses, built an energy-saving application for schools and mapped the city. Amongst many other things.

I’ve spent a significant amount of time over the last few years working with the Open Data Institute. The ODI is five and I think I’ve been involved with the organisation for around 4.5 years. Mostly as a part-time associate, but also for a year or so as a consultant. It turned out that wasn’t quite the right role for me, hence the recent dive back into freelancing.

But over that time, I’ve had the opportunity to work on a similarly wide-ranging set of projects. I’ve researched how election data is collected and used, and learnt about weather data. I’ve helped to create guidance around open identifiers, licensing, and open data policies, and explored ways to direct organisations on their open data journey. I’ve also provided advice and support to startups, government and multi-national organisations. That’s pretty cool.

I’ve also worked with an amazing set of people. Some of those people are still at the ODI and others have now moved on. I’ve learnt loads from all of them.

I was pretty clear what type of work I wanted to do in a more permanent role. Firstly, I wanted to take on bigger projects; there’s only so much you can do as an independent freelancer. Secondly, I wanted to work on “data infrastructure”. While collectively we’ve only just begun thinking through the idea of data as infrastructure, looking back over my career it’s a useful label for the types of work I’ve been doing, the majority of which has involved looking at applications of data, technology, standards and processes.

I realised that the best place for me to do all of that was at the ODI. So I’ve seized the opportunity to jump back into the organisation.

My new job title is “Data Infrastructure Programme Lead”. In practice this means that I’m going to be:

  • helping to develop the ODI’s programme of work around data infrastructure, including the creation of research, standards, guidance and tools that will support the creation of good data infrastructure
  • taking on product ownership for certificates and pathway, so we’ve got a way to measure good data infrastructure
  • working with the ODI’s partners and network to support them in building stronger data infrastructure
  • building relationships with others who are working on building data infrastructure in public and private sector, so we can learn from one another

And no doubt, a whole lot of other things besides!

I’ll be working closely with Peter and Olivier, as my role should complement theirs. And I’m looking forward to spending more time with the rest of the ODI team, so I can find ways to support and learn more from them all.

My immediate priorities will be working on standards and tools to help build data infrastructure in the physical activity sector, through the OpenActive project, and leading on projects looking at how to build better standards and how to develop collaborative registers.

I’m genuinely excited about the opportunities we have for improving the publication and use of data on the web. It’s a topic that continues to occupy a lot of my attention. For example, I’m keen to see whether we can build a design manual for data infrastructure. Or improve governance around data through analysing existing sources. Or whether mapping data ecosystems and diagramming data flows can help us understand what makes a good data infrastructure. And a million other things. It’s also probably time we started to recognise and invest in the building blocks for data infrastructure that we’ve already built.

If you’re interested in talking about data infrastructure, then I’d love to hear from you. You can reach me on twitter or email.

Bath Playbills 1812-1851

This weekend I published scans of over 2000 historical playbills for the Theatre Royal in Bath. Here are some notes on where they come from and how they might be useful.

The scans are all available on Flickr and have been placed into the public domain under a CC0 waiver. You’re free to use them in any way you see fit. The playbills date from 1812 through to 1851. This is the period just before the fire and rebuilding of the theatre in its current location.

The scans are taken from 5 public domain books available digitally from the British Library. All I’ve done in this instance is take the PDF versions of the books, split out the pages into separate images and then upload them to Flickr, into separate collections.
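
For anyone wanting to do something similar, the page-splitting step is easy to script. Here’s a rough sketch using the PyMuPDF library; the filenames are placeholders, and you’d still need to upload the results to Flickr separately:

```python
import fitz  # PyMuPDF: pip install pymupdf

# Render each page of a scanned book as a separate PNG image.
# The input filename is a placeholder.
doc = fitz.open("playbills-1812-1821.pdf")
for number, page in enumerate(doc, start=1):
    pixmap = page.get_pixmap(dpi=300)  # render the page at 300 dpi
    pixmap.save(f"playbill-{number:04d}.png")
```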

This is a small step, but will hopefully make the contents more discoverable and accessible. The individual playbills are now part of the web, so can be individually referenced and commented on.

For example, there are some great images in the later bills. And I learned that in 1840 you could have seen lions, tigers and leopards.

[Playbill scan: 1831-1840_418]

And this playbill includes detail on the plot and scenes from a play called “Susan Hopley” and an intriguing reference to “Punchinello Vampire!”.

[Playbill scan: 1841-1851_327]

As they are all in the public domain, the images will hopefully be of interest to Wikipedians interested in the history of Bath, the theatre or performers such as Joseph Grimaldi. (I did try adding a reference to a playbill myself, but had this reverted because I was “linking to my own social media site”.)

There’s a lot of detail in the bills which it might be useful to extract, e.g. the dates of each bill, the plays being performed, and details of the performers and sponsors. If anyone is interested in helping to crowd-source that, then let me know!


We can strengthen data infrastructure by analysing open data

Data is infrastructure for our society and businesses. To create stronger, sustainable data infrastructure that supports a variety of users and uses, we need to build it in a principled way.

Over time, as we gain experience with a variety of infrastructures supporting both shared and open data, we can identify the common elements of good data infrastructure. We can use that to help to write a design manual for data infrastructure.

There are a variety of ways to approach that task. We can write case studies on specific projects, and we can map ecosystems to understand how value is created through data. We can also take time to contribute to projects: experiencing different types of governance, following processes and using tools can provide useful insight.

We can also analyse open data to look for additional insights that might help us improve data infrastructure. I’ve recently been involved in two short projects that have analysed some existing open data.

Exploring open data quality

Working with Experian and colleagues at the ODI, we looked at the quality of some UK government datasets. We used a data quality tool to analyse data from the Land Registry, the NHS and Companies House. We found issues with each of the datasets.
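
I can’t reproduce the tool we used here, but to give a flavour of the kinds of checks involved, here’s a minimal sketch in pandas. The filename and column names are invented stand-ins for a typical government dataset:

```python
import pandas as pd

# Invented filename and columns, standing in for a dataset with a
# company number and a postcode per row.
df = pd.read_csv("register-extract.csv")

# Completeness: what proportion of values is missing per column?
print(df.isna().mean())

# Validity: do values match the expected format?
postcode = r"^[A-Z]{1,2}[0-9][A-Z0-9]? ?[0-9][A-Z]{2}$"
bad = ~df["postcode"].str.match(postcode, na=False)
print(bad.sum(), "rows with malformed postcodes")

# Consistency: are identifiers unique?
print(df["company_number"].duplicated().sum(), "duplicated identifiers")
```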

It’s clear that there is still plenty of scope to make basic improvements to how data is published, by providing:

  • better guidance on the structure, content and licensing of data
  • basic data models and machine-readable schemas to help standardise approaches to sharing similar data
  • better tooling to help reconcile data against authoritative registers

The UK is also still in need of a national open address register.

Open data quality is a current topic in the open data community. The community might benefit from access to an “open data quality index” that provides more detail on these issues. Open data certificates would be an important part of that index. The tools used to generate that index could also be used on shared datasets. The results could be open, even if the datasets themselves might not be.

Exploring the evolution of data

There are currently plans to further improve the data infrastructure that supports academic research by standardising organisation identifiers. I’ve been doing some R&D work for that project to analyse several different shared and open datasets of organisation identifiers. By collecting and indexing the data, we’ve been able to assess how well they can support improving existing data, through automated reconciliation and by creating better data entry tools for users.
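
A crude sketch of that kind of reconciliation, using invented names and identifiers, might look like this:

```python
# Naive name-based reconciliation against a register of organisation
# identifiers. The names and identifiers are invented examples.
register = {
    "university of bath": "grid.0001.1",
    "university of bristol": "grid.0002.2",
}

def normalise(name: str) -> str:
    # Lower-case, strip punctuation and collapse whitespace.
    return " ".join(name.lower().replace(".", "").split())

for name in ["University of Bath", "Univ. of Bristol"]:
    match = register.get(normalise(name))
    print(name, "->", match or "no match; needs fuzzier matching")
```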

Increasingly, when we are building new data infrastructures, we are building on and linking together existing datasets. So it’s important to have a good understanding of the scope, coverage and governance of the source data we are using. Access to regularly published data gives us an opportunity to explore the dynamics around the management of those sources.

For example, I’ve explored the growth of the GRID organisational identifiers.
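
The analysis itself can be quite simple. Here’s a sketch of the general approach, comparing successive releases of a dataset to count new records; the identifiers and release dates are invented:

```python
# Invented snapshots: the set of identifiers present in each release.
snapshots = {
    "2016-01": {"grid.1", "grid.2"},
    "2016-07": {"grid.1", "grid.2", "grid.3"},
    "2017-01": {"grid.1", "grid.2", "grid.3", "grid.4", "grid.5"},
}

previous: set[str] = set()
for release, ids in sorted(snapshots.items()):
    print(f"{release}: {len(ids)} records, {len(ids - previous)} new")
    previous = ids
```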

This type of analysis can help assess the level of investment required to maintain different types of datasets and registers. The type of governance we decide to put around data will have a big impact on the technology and processes that need to be created to maintain it. A collaborative, user-maintained register will operate very differently to one that is managed by a single authority.

One final area in which I hope the community can begin to draw together some insight is around how data is used. At present there are no standards to guide the collection and reporting on metrics for the usage of either shared or open data. Publishing open data about how data is used could be extremely useful not just in understanding data infrastructure, but also in providing transparency about when and how data is being used.


Thank you for the data

Here are three anecdotes that show ways in which I’ve shared data with different types of organisation, and how they’ve shared data with me.

Last year we donated some old children’s toys and books to Julian House. When we dropped them off, I signed a Gift Aid declaration to allow the charity to claim additional benefits from our donation. At the end of the tax year they sent me an email, as I requested, to let me know how much they had raised from the donation. It was nice to know that the toys and books had gone to a good home and that the charity had benefited.

A few months ago, we switched energy provider to get a (much!) better and greener deal on our energy bills. The actual process of switching was easy. But we had to jump through a few hoops to actually get a quote. That mostly involved looking at charts and summaries of our current usage, collecting details on our plan and then using that to get a quote from some alternative suppliers. The government are still thinking about whether midata should apply to the energy sector. I don’t think it should because it’s too limited. An open banking model would be much better for consumers.

We decided to go with Octopus as our new supplier. Three months after the switch they sent me a lovely email, a “personal impact report”. It contained some great insights into our energy usage and the impacts on the environment of our greener energy consumption. For example, it told me that 18% of our electricity came from anaerobic digestion. Our biggest renewable supplier of solar energy was Bottreaux Mill Farm in Devon. It made me even happier to have switched, whilst also wishing I’d done it sooner.

Seven years ago I signed up to 23andMe and let them sequence my DNA. I was curious to know what I might learn and whether my data could be useful in medical studies. There are reasons to be wary of sharing this type of personal information, but it’s an informed decision. I understand what it is the company is doing. And I’ve also taken the time to read their privacy policy, which is clearly laid out.

23andMe email me on a regular basis to let me know when my data has contributed towards some published research. Looking at the site I can see I’ve contributed towards 19 published studies, including this one on autoimmune conditions. Our family is definitely interested in supporting any efforts to address autoimmune conditions. Unfortunately I often can’t look at the research because the papers aren’t open access.

I’ve been thinking about these types of exchanges after reading a short paper by Kadija Ferryman. In her paper she suggests that we should think of data as a gift. In the giving of a gift, there is the act of giving (sharing data), the act of receiving (holding that data) and often some form of reciprocation. These three anecdotes illustrate different types of reciprocation. In each case, an organisation has written me a little thank you note to show me how a data gift has been useful to them.

From a data collection point of view, none of the three organisations has had to do more than they would have done anyway. Gift Aid requires some extra book-keeping as part of the policy; Octopus will be keeping detailed records on our energy consumption and their energy purchases; and 23andMe will have a clear view on when and where aggregate data is being shared with researchers.

They’ve just chosen to show that they appreciate my data gifts and, in some cases, have given me a data gift in return. I’m now more likely to donate to Julian House, more likely to stay with Octopus, and have greater trust in continuing to let 23andMe store my DNA profile.

Thinking about data as a gift is another useful analogy that can help us think through the appropriate ways to design data-sharing arrangements. I know I’d definitely like to receive more data thank-you notes.