“The woodcutter”, an open data parable

In a time long past, in a land far away, there was once a great forest. It was a huge sprawling forest containing every known species of tree. And perhaps a few more.

The forest was part of a kingdom that had been ruled over by an old mad king for many years. The old king had refused anyone access to the forest. Only he was allowed to hunt amongst its trees. And the wood from the trees was used only to craft things that the king desired.

But there was now a new king. Where the old king was miserly, the new king was generous. Where the old king was cruel, the new king was wise.

As his first decree, the king announced that the trails that meandered through the great forest might be used by anyone who needed passage. And that the wood from his forest could be used by anyone who needed it, provided that they first ask the king’s woodcutter.

Several months after his decree, whilst riding on the edge of the forest, the king happened upon a surprising scene.

Gone was the woodcutter’s small cottage and workshop. In its place had grown up a collection of massive workshops and storage sheds. Surrounding the buildings was a large wooden palisade in which was set some heavily barred gates. From inside the palisade came the sounds of furious activity: sawing, chopping and men shouting orders.

All around the compound, filling the nearby fields, was a bustling encampment. Looking at the array of liveries, flags and clothing on display, the king judged that there were people gathered here from all across his lands. From farms, cities, and towns. From the coast and the mountains. There were also many from neighbouring kingdoms.

It was also clear that many of these people had been living here for some time.

Perplexed, the king rode to the compound, making his way through the crowds waiting outside the gates. Once he had been granted entry, he immediately sought out the woodcutter, finding him directing activities from a high vantage point.

Climbing to stand beside the woodcutter the king asked, “Woodcutter, why are all these people waiting outside of your compound? Where is the wood that they seek?”

Flustered, the woodcutter mopped his brow and bowed to his king. “Sire, these people shall have their wood as soon as we are ready. But first we must make preparations.”

“What preparations are needed?”, asked the king. “Your people have provided wood from this forest for many, many years. While the old king took little, is it not the same wood?”

“Ah, but sire, we must now provide the wood to so many different peoples”. Gesturing to a small group of tents close to the compound, the woodcutter continued: “Those are the ship builders. They need the longest, straightest planks to build their ships. And great trees to make their keels”.

“Over there are the house builders”, the woodcutter gestured, “they too need planks. But of a different size and from a different type of tree. This small group here represents the carpenters guild. They seek only the finest hard woods to craft clever jewellery boxes and similar fine goods.”

The king nodded. “So you have many more people to serve and many more trees to fell.”

“That is not all”, said the woodcutter pointing to another group. “Here are the river people who seek only logs to craft their dugout boats. Here are the toy makers who need fine pieces. Here are the fishermen seeking green wood for their smokers. And there the farmers and gardeners looking for bark and sawdust for bedding and mulch”.

The king nodded. “I see. But why are they still waiting for their wood? Why have you recruited men to build this compound and these workshops, instead of fetching the wood that they need?”

“How else are we to serve their needs sire? In the beginning I tried to handle each new request as it came in. But every day a new type and shape of wood. If I created planks, then the river people needed logs. If I created chippings, the house builders needed cladding.

“Everyone saw only their own needs. Only I saw all of them. To fulfil your decree, I need to be ready to provide whatever the people need.

“And so unfortunately they must wait until we are better able to do so. Soon we will be, once the last dozen workshops are completed. Then we will be able to begin providing wood once more.”

The king frowned in thought. “Can the people not fetch their own wood from the forest?”

Sadly, the woodcutter said, “No sire. Outside of the known trails the woods are too dangerous. Only the woodcutters know the safe paths. And only the woodcutters know the art of finding the good wood and felling it safely. It is an art that is learnt over many years”.

“But don’t you see?” said the king, “You need only do this and then let others do the rest. Fell the trees and bring the logs here. Let others do the making of planks and cladding. Let others worry about running the workshops. There is a host of people here outside your walls who can help. Let them help serve each other’s needs. You need only provide the raw materials”.

And with this the king ordered the gates to the compound to be opened, sending the relieved woodcutter back to the forest.

Returning to the compound many months later, the king once again found it to be a hive of activity. Except now the house builders and ship makers were crafting many sizes and shapes of planks. The toy makers took offcuts to shape the small pieces they needed, and the gardeners swept the leavings from all into sacks to carry to their gardens.

Happy that his decree had at last been fulfilled, the king continued on his way.

Read the first open data parable, “The scribe and the djinn’s agreement”.

Basic questions about data

Over the past couple of years I’ve written several posts that each focus on trying to answer a simple question relating to data and/or open data.

I’ve collected them together into a list here for easier reference. I’ll update the list as I write more related posts:

I find that asking and then trying to answer these questions is a good way to develop understanding. Often there are a number of underlying questions or issues that can be more easily surfaced.

What is Derived Data?

A while ago I asked the question: “What is a Dataset?”. The idea was to look at how different data communities were using the term to see if there were any common themes. This week I’ve been considering how UPRNs can be a part of open data, a question made more difficult due to complex licensing issues.

One aspect of the discussion is the idea of “derived data”. Anyone who has worked with open data in the UK will have come across this term in relation to licensing of Ordnance Survey and Royal Mail data. But, as we’ll see shortly, the term is actually in wider use. I’ve realised though that like “dataset”, this is another term which hasn’t been well defined. So I thought I’d explore what definitions are available and whether we can bring any clarity.

I think there are several reasons why a clearer definition and understanding of what constitutes “derived data” would be useful:

  1. When using data published under different licenses it’s important to understand what the implications are of reusing and mixing together datasets. While open data licenses create few issues, mixing together open and shared data can create additional complexities due to non-open licensing terms. For further reading here see: “IPR and licensing issues in Derived Data” (Korn et al, 2007) and “Data as IP and Data License Agreements” (Practical Law, 2013).
  2. Understanding how data is derived is useful in understanding the provenance of a dataset and ensuring that sources are correctly attributed.
  3. In the EU, at least, there are many open questions relating to the creation of services that use multiple data sources. As a community we should be trying to answer these questions to identify best practices, even if ultimately they might only be resolved through a legal process.

On that basis: what is derived data?

Definitions of derived data from the statistics community

The OECD Glossary of Statistical Terms defines “derived data element” as:

A derived data element is a data element derived from other data elements using a mathematical, logical, or other type of transformation, e.g. arithmetic formula, composition, aggregation.

This same definition is used in the data.gov.uk glossary, which has some comments.

The OECD definition of “derived statistics” also provides some examples of derivation, e.g. creating population-per-square-mile statistics from primary observations (e.g. population counts, geographical areas).
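
To make that derivation concrete, here is a minimal sketch in Python; the figures are invented purely for illustration. The density values are new, derived data produced from the primary population and area observations.

```python
# A minimal sketch of the OECD-style derivation above: a derived statistic
# (population per square mile) computed from primary observations.
# All figures are invented for illustration.
regions = [
    {"name": "Region A", "population": 120_000, "area_sq_miles": 300},
    {"name": "Region B", "population": 45_000, "area_sq_miles": 900},
]

for region in regions:
    density = region["population"] / region["area_sq_miles"]
    print(f"{region['name']}: {density:.0f} people per square mile")
```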

Staying in the statistical domain, this britannica.com article on censuses explains that (emphasis added):

there are two broad types of resulting data: direct data, the answers to specific questions on the schedule; and derived data, the facts discovered by classifying and interrelating the answers to various questions. Direct information, in turn, is of two sorts: items such as name, address, and the like, used primarily to guide the enumeration process itself; and items such as birthplace, marital status, and occupation, used directly for the compilation of census tables. From the second class of direct data, derived information is obtained, such as total population, rural-urban distribution, and family composition

I think this clearly indicates the basic idea that derived data is obtained when you apply a process or transformation to one or more source datasets.

What this basic definition doesn’t address is whether there are any important differences between categories of data processing, e.g. does validating some data against a dataset yield derived data, or does the process have to be more transformative? We’ll come back to this later.

Legal definitions of derived data

The Open Database Licence (ODbL), which is now used by OpenStreetMap, defines a “Derivative Database” as:

…a database based upon the Database, and includes any translation, adaptation, arrangement, modification, or any other alteration of the Database or of a Substantial part of the Contents. This includes, but is not limited to, Extracting or Re-utilising the whole or a Substantial part of the Contents in a new Database.

This itemises some additional types of process: extracting portions of a dataset also creates a derivative, not just transformation or statistical calculation.

However, as noted in the legal summary for the Creative Commons No Derivatives licence, simply changing the format of a work doesn’t create a derivative. So, in their opinion at least, this type of transformation doesn’t yield a derived work. In the full legal code they don’t use the term “derived data”, largely because the licences can be applied to a wide range of different types of works; instead they define “Adapted Material”:

…material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor.

The Ordnance Survey User Derived Dataset Contract (copy provided by Owen Boswarva), which allows others to create products using OS data, defines “User Derived Datasets” as:

datasets which you have created or obtained containing in part only or utilising in whole or in part Licensed Data in their creation together with additional information not obtained from any Licensed Data which is a fundamental component of the purpose of your Product and/or Service.

The definition stresses that the datasets consist of some geographical data, e.g. points or polygons, plus some additional data elements.

The Ordnance Survey derived data exemptions documentation has this to say about derived data:

data and or hard copy information created by you using (to a greater or lesser degree) data products supplied and licensed by OS, see our Intellectual Property (IP) policy.

For the avoidance of doubt, if you make a direct copy of a product supplied by OS – that copy is not derived data.

Their licensing jargon page just defines the term as:

…any data that you create using Ordnance Survey mapping data as a source

Unfortunately none of these definitions really provide any useful detail, which is no doubt part of the problem that everyone has with understanding OS policy and licensing terms. As my recent post highlights, the OS do have some pretty clear ideas of when and how derived data is created.

The practice note on “Data as IP and Data License Agreements” published by Practical Law provides a great summary of a range of IP issues relating to data and includes a discussion of derived data. Interestingly they highlight that it may be useful to consider not just data generated by processing a dataset but other data that may be generated through the interactions of a data publisher (or service provider) and a data consumer. (See “Original versus Derived Data”, page 7).

This leads them to define the following cases for when derived data might be generated:

  • Processing the licensed data to create new data that is either:
    • sufficiently different from the original data that the original data cannot be identified from analysis, processing or reverse engineering the derived data; or
    • a modification, enhancement, translation or other derivation of the original data but from which the original data may be traced.
  • Monitoring the licensee’s use of a provider’s service (commonly referred to as usage data).

From a general intellectual property stance I can see why usage data should be included here, but I would suggest that this category of derived data is quite different to what is understood by the (open) data community.

What I find helpful about this summary is that it starts to bring some clarity around the different types of processes that yield derived data.

The best existing approach to this that I’ve seen can be found in: “Discussion draft: IPR, liability and other issues in regard to Derived Data”. The document aims to clarify, or at least start a discussion around, what is considered to be derived data in the geographical and spatial data domain. They identify a number of different examples, including:

  • Transforming the spatial projection of a dataset, e.g. to/from Mercator
  • Aggregating data about a region to summarise to an administrative area
  • Layering together different datasets
  • Inferring new geographical entities from existing features, e.g. road centre lines derived from road edges

In my opinion these types of illustrative examples are a much better way of trying to identify when and how derived data is created. For most re-users it’s easier to relate to an example than to legal definitions.

Another nice example is the OpenStreetMap guidance on what they consider to be “trivial transformations” which don’t trigger the creation of derived works.

An expanded definition of derived data

With the above in mind, can we create a better definition of derived data by focusing on the types of processes and transformations that are carried out?

Firstly I’d suggest that the following types of process do not create derived data:

  1. Using a dataset – stating the obvious really, but simply using a dataset doesn’t trigger the creation of a derivative. OpenStreetMap calls the result a “Produced Work”.
  2. Copying – again, this should be well understood, but I mention it for completeness. This is distribution, not derivation.
  3. Changing the format – E.g. converting a JSON file to XML. The information content remains the same, only the format is changed (see the sketch after this list). This is supported by the Creative Commons definitions of remixing/reuse.
  4. Packaging (or repackaging) – E.g. taking a CSV file and re-publishing it as a data package. This would also include taking several CSV files from different publishers and creating a single data package from them. I believe this is best understood as a “Collected Work” or “Compilation” as the original datasets remain intact.
  5. Validation – checking whether field(s) in dataset A are correct according to field(s) in dataset B, so long as dataset A is not corrected as a result. This is a stance that OpenStreetMap seem to agree with.
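
As a concrete illustration of item 3, here is a minimal, hypothetical sketch in Python: the same records are serialised as JSON and as XML. Nothing is added or removed, so under the definitions above no derived data is created. The record values are invented for illustration.

```python
# A minimal sketch of a pure format change: the same records serialised as
# JSON and as XML. The information content is identical; only the format differs.
# The record values are invented for illustration.
import json
import xml.etree.ElementTree as ET

records = [{"id": "1", "name": "Example Street"}, {"id": "2", "name": "Another Road"}]

as_json = json.dumps(records)

root = ET.Element("records")
for record in records:
    ET.SubElement(root, "record", attrib=record)
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)
print(as_xml)
```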


This leaves us with a number of other processes which do create derived data:

  1. Extracting – extracting portions of a dataset, e.g. extracting some fields from a CSV file (see the sketch after this list).
  2. Restructuring – changing the schema or internal layout of a database, e.g. parsing out existing data to create new fields, such as breaking down an address into its constituent parts.
  3. Annotation – enhancing an existing dataset to include new fields, e.g. adding UPRNs to a dataset that contains addresses.
  4. Summarising or Analysing – e.g. creating statistical summaries of fields in a dataset, such as the population statistics examples given by the OECD. Whether the original dataset can be reconstructed from the derived data will depend on the type of analysis being carried out, and how much of the original dataset is also included in the derived data.
  5. Correcting – validating dataset A against dataset B, and then correcting dataset A with data from dataset B where there are discrepancies.
  6. Inferencing – applying reasoning, heuristics, etc. to generate entirely new data based on one or more datasets as input.
  7. Model Generation – I couldn’t think of a better name for this, but I’m thinking of scenarios such as sharing a neural network that has been trained on some datasets. I think this is different to inferencing.
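
To make a couple of these processes concrete, here is a minimal, hypothetical sketch in Python covering extraction and summarising; the file name and column names are my own invented examples.

```python
# A hypothetical sketch of two derivation processes: extracting a subset of
# fields from a CSV file, and summarising a field into counts. Both outputs
# are new, derived datasets. The file and column names are invented.
import csv
from collections import Counter

with open("properties.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Extracting: keep only two of the original fields
extract = [{"postcode": row["postcode"], "council_tax_band": row["council_tax_band"]}
           for row in rows]

# Summarising: count properties per council tax band
summary = Counter(row["council_tax_band"] for row in rows)

print(extract[:3])
print(dict(summary))
```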

What do you think of this? Does it capture the main categories of deriving data? If you have comments on this then please let me know by leaving a comment here or pinging me on twitter.


How and when can UPRNs be a part of open data?

I’m trying to understand when and how UPRNs can be a part of open data, whether published by councils or other organisations. I’m writing down what I understand in the hope that others might find this useful or might be able to correct any misunderstandings by leaving a comment. It’d be great to get some official confirmation from Ordnance Survey and others too.

On the 16th February 2015 there was an announcement from Ordnance Survey that said:

Supporting the local government transparency and government open data agendas, Ordnance Survey, GeoPlace and the Improvement Service are enabling AddressBase internal business use customers to release Unique Property Reference Numbers (UPRNs) on a royalty free and open basis. The move will facilitate the release and sharing of public and private sector addressing databases.

The announcement notes that this brings UPRNs into line with the terms that apply to reuse of the TOID identifier.

The relevant background documents are the following. Use these as your primary guidance:

The first policy statement is the significant document. It’s been updated to clarify some elements and has some example permitted and non-permitted uses which we’ll explore below.

What follows is my understanding of those documents and consideration of some additional scenarios of how and when UPRNs can be released as open data.

If any of the below is wrong, please leave a comment!


A log of updates and clarifications to this post:

  • 3/9/2015 – Added notes and comments about the ONS National Statistics Address Lookup dataset

Who can publish UPRNs in open data?

Current AddressBase licensees (specifically “Internal Business Use customers”) can publish data containing UPRNs without the need to place any restrictions on their downstream use. This only applies to the UPRN identifiers themselves as there are some provisos around what data can be published.

Anyone who obtains a UPRN from an open dataset can also use those identifiers to publish additional open data, e.g. by annotating a dataset with additional information, so long as they obey any licensing requirements from their source datasets.

This is perhaps best understood as distribution rather than publication though because the UPRNs must have previously been available in, and obtained from, an open dataset.

If you’re not an AddressBase customer, then you can’t publish new datasets containing previously unpublished UPRNs.

Note: the policy document refers only to AddressBase licensees. It doesn’t state a specific AddressBase product that must be licensed (there are several). It also doesn’t really define customers beyond that, although the public sector presumption to publish does.

Who can use UPRNs in open data?

If you obtain a UPRN from an open dataset, then you can use it without incurring any additional licensing restrictions beyond what is stated in the licence for that dataset.

So, if a local authority publishes some data containing UPRNs under the OGL, then you can use it for commercial and non-commercial purposes so long as you attribute your sources.

What licence can be used when publishing UPRNs in open data?

Any open licence can be used to publish the UPRN identifiers.

However there may be additional licensing restrictions that must be applied to the dataset if:

  • the dataset contains additional OS data
  • the dataset was constructed by using or referencing the geographical co-ordinates of the UPRNs

The first restriction seems obvious: if you include non-open data then this will impact your licensing options. The second is less clear: depending on how you constructed the dataset, you may not be able to publish it openly.

The specific wording is that UPRNs can only be published on a royalty-free basis, and with the option to sub-licence if:

licensees have not extracted UPRNs by using or making reference to the coordinates within AddressBase products data

Let’s refer to that restriction as the “spatial reference restriction”. Examples are essential to help clarify where and when it applies.

Additional public sector permissions

It’s also worth highlighting that the presumption to publish document notes that for public sector customers of AddressBase the OS

…will permit the release of the OS x,y co-ordinates for your public sector assets, together with the UPRN, such that members are able to release datasets required to meet the requirements of the Local Government Transparency Code.

This means that as long as the presumption to publish process is followed, it should be possible for local authorities to publish both UPRNs and their co-ordinates in derived datasets that aren’t substantial extracts of the source data.

However this is expanded on in the OS licensing guidance which emphasises that the permission applies to public sector assets only, and specifically those datasets within the Local Government Transparency Code. There’s also a note that:

This is subject to the member having permission from Royal Mail in relation to the release of any data derived from PAF

Which piles caveats upon caveats.

It’s not immediately clear to me if the spatial reference restriction is also intended to apply here or whether this is separate special dispensation for public sector customers. I’m assuming the latter, but it would be useful to have some confirmation.

Worked Examples

Let’s work through a couple of examples of where UPRNs might be published as part of an open data release. The UPRN policy statement includes several examples which we’ll build on here.

Companies House address matching

This is the first permitted example in the policy statement:

A third party takes an open address dataset, such as the Free Company Data Product from Companies House, and matches the data contained within against one of the AddressBase products using non-spatial methods. It then appends the UPRN from the AddressBase products to this address data.

Emphasis is mine.

In this example a local authority could take the list of registered companies in its local area, match the addresses against AddressBase and publish a local extract that has been annotated with the UPRN. This could be published under the OGL (we’ll ignore the unclear licensing of Companies House data for now!).

The matching of addresses has to be done using non-spatial methods, which means using text matching of the address components.
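
As a rough illustration of what a non-spatial match might look like, here is a hypothetical sketch that appends UPRNs by comparing normalised address text. The sample data, field names and normalisation rules are my own assumptions; real address matching is usually far fuzzier than this.

```python
# A hypothetical sketch of non-spatial address matching: UPRNs are looked up
# by comparing normalised address text, not by using any co-ordinates.
# The sample data, field names and normalisation rules are invented.
import re

def normalise(address: str) -> str:
    """Crude normalisation: lower-case, strip punctuation, collapse whitespace."""
    cleaned = re.sub(r"[^\w\s]", "", address.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

# A tiny stand-in for an AddressBase-style lookup of address text to UPRN
address_to_uprn = {
    normalise("1 High Street, Bath, BA1 1AA"): "100000000001",
    normalise("2 High Street, Bath, BA1 1AA"): "100000000002",
}

companies = [
    {"company": "Example Ltd", "address": "1 High Street, Bath, BA1 1AA"},
    {"company": "Sample & Co", "address": "3 Low Road, Bath, BA2 2BB"},
]

for company in companies:
    company["uprn"] = address_to_uprn.get(normalise(company["address"]))

print(companies)
```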

Food hygiene rating location matching

The FSA food hygiene rating open data includes the addresses and X,Y co-ordinates of places that have been assigned a food hygiene rating. Could a local authority do the same thing to this dataset as in the previous example? E.g. publishing a local subset enriched with the UPRN?

The answer seems to be:

  1. No – if the data is matched based on comparing the X,Y co-ordinates in the hygiene data to AddressBase, e.g. to find the nearest property. The spatial reference restriction doesn’t allow this.
  2. Yes – if the data is matched using the address fields only.

The end result will be exactly the same dataset, but only one approach seems to be valid, as using the X,Y co-ordinates is a spatial method.

Unfortunately the OS terms don’t define what constitutes a spatial (or non-spatial) method. Using a distance calculation as suggested in this example seems like it’s definitely a spatial method. But it’s not clear, for example, whether finding addresses within a location, e.g. a postcode or administrative area, counts as a spatial method.
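
For contrast, this is the kind of spatial method that the restriction appears to cover: matching a record to the nearest UPRN using a distance calculation over co-ordinates. The co-ordinates and identifiers are invented for illustration.

```python
# A hypothetical sketch of a spatial method: matching to the nearest UPRN by
# distance between X,Y co-ordinates. The co-ordinates and UPRNs are invented;
# this style of matching is what the spatial reference restriction appears to cover.
import math

uprn_points = [
    ("100000000001", 375200.0, 164500.0),
    ("100000000002", 375260.0, 164480.0),
]

def nearest_uprn(x: float, y: float) -> str:
    """Return the UPRN whose co-ordinates are closest to (x, y)."""
    return min(uprn_points, key=lambda p: math.hypot(p[1] - x, p[2] - y))[0]

print(nearest_uprn(375210.0, 164505.0))  # matches the first, closer point
```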

In fact, given that AddressBase is essentially just a list of addresses and locations, it’s hard to think of examples other than address matching where it would be possible to extract UPRNs.

Local authority land and building assets

The local government transparency code (p15-16) requires local authorities to publish a list of their land and building assets. This includes the UPRN and full address of all properties.

This is expressly allowed by the “presumption to publish” process, so the authority can do this without requiring additional permission. The authority could use a spatial query in AddressBase to find and extract all of the necessary data and publish it under the OGL.

Note: if AddressBase contained an indicator of whether a property was owned by the public sector, it wouldn’t be permissible for a non-public sector licensee to publish exactly the same dataset as above along with the co-ordinates. The spatial reference restriction would apply, so using a spatial query to extract the data would not be allowed.

Local government incentive scheme

The local government incentive scheme datasets include planning applications, public toilets, and premises licences. Many of the local authorities in the UK are publishing these datasets against a standard schema. All of the schemas have been defined to include addresses, co-ordinates and UPRNs.

To meet the terms of the incentive scheme the datasets are published as open data. Currently the UPRNs are often not populated, except for public toilets which have been given an exemption by the OS. This is mentioned in the schema guidance but I’ve not found a better link for it and it’s not listed here.

So, can a local authority update the planning and licensing datasets to include UPRNs? Yes, I think so.

Assuming that each planning and licensing application is matched to its UPRN via the address then everything should be fine. This is a “non-spatial” method and is essentially the same as the Companies House example.

However because these datasets are not part of the transparency code, I don’t think the local authority could include the X,Y co-ordinates of the UPRN without permission from the OS.

Bin and recycling collection routes

This is the example that prompted me to look into this issue again. I wanted to know: could we publish a list of UPRNs in Bath along with the identifier of the bin collection route they are on and which day of the week the bins are collected?

In order to tell someone when their bin or recycling will be collected you need to know what bin collection route they are on. And different sides of the street may be covered by different routes, so you can’t just publish a list of which roads are covered by which routes, you need to know which addresses it covers.

Unfortunately because you need a spatial query to do this, the spatial reference restriction applies. This means you can’t publish that dataset with UPRNs. I also don’t think you can publish it by substituting UPRNs for the textual addresses as that would amount to publishing a significant extract of PAF, basically all properties in the local area.

So this type of service data can’t be published as open data currently. Only local authorities can build services that know when and where recycling and bin collection services are available.

How do the UPRN terms compare with TOIDs?

The basic terms of use for UPRNs and TOIDs are broadly similar. However the key difference is that for TOIDs there is no equivalent of the spatial reference restriction: if you were licensed to use the data, or have access to them as open data, then there are no additional restrictions.

Although the “OS OpenData™ TOID look-up service” mentioned in the terms, and originally available at http://opentoids.ordnancesurvey.co.uk/toidservice/, no longer exists, TOIDs can be found in the various OS open data products so it’s easy to look them up.

That’s not true for addresses and UPRNs.

Does the ONS NSAL dataset make UPRNs open data?

Commenting on the first version of this post, Owen Boswarva wondered whether the ONS National Statistics Address Lookup (NSAL) means that UPRNs are open data.

The NSAL dataset is described in this blog post. It’s a list of UPRNs mapped to various administrative regions. This allows for easy reporting and recasting of statistics by different geographies. The blog post explains that the changes to the UPRN policy were encouraged to help support the release of this dataset, which is published under the OGL. This means that there is already a complete list of UPRNs published under an open licence.
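
To illustrate why a UPRN-to-area lookup supports that kind of recasting, here is a hypothetical sketch that aggregates a few property-level records up to administrative areas via such a lookup. All of the identifiers and values are invented.

```python
# A hypothetical sketch of recasting UPRN-level data by geography using an
# NSAL-style lookup (UPRN to administrative area). All values are invented.
from collections import Counter

uprn_to_ward = {
    "100000000001": "Ward A",
    "100000000002": "Ward A",
    "100000000003": "Ward B",
}

# Some property-level records keyed by UPRN, e.g. a list of empty homes
empty_homes = ["100000000001", "100000000003"]

counts_by_ward = Counter(uprn_to_ward[uprn] for uprn in empty_homes)
print(dict(counts_by_ward))  # {'Ward A': 1, 'Ward B': 1}
```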

So does this mean that the UPRNs are open data? Clearly the full list of UPRN identifiers is now available under an open licence from the ONS. So the answer could be a qualified yes. However as the ONS explain in their version notes (copy here), the dataset may be out of date with the authoritative copy in AddressBase, so isn’t necessarily definitive.

There’s also none of the accompanying metadata that I’d expect to see if the UPRN identifier scheme was fully published as open data, e.g. administrative metadata around when UPRNs are added or removed, relationships between UPRNs, and perhaps the address data.

While the NSAL dataset itself is excellent, helping to solve problems with mapping between the various local geographies, it doesn’t provide any additional utility beyond giving us a reasonably up-to-date count of how many UPRNs there are. It doesn’t help us publish more open data that includes UPRNs, or help us annotate existing datasets with UPRNs; for that you still need the address and co-ordinate information held in AddressBase.


UPRNs are not open data, but they can be included in some open datasets. There are some very specific cases where UPRNs could usefully be added to both existing and new open data sets.

However there are some subtleties in understanding what is allowed that include both who is publishing the data and how the dataset is constructed.

Hopefully this post has shed some light onto the issues that might help open data publishers and, importantly, local authorities in understanding what can and can’t be done.

I’ll update this post to make corrections as and when necessary. Please leave a comment if you have an issue with any of my reasoning. Also, please comment if you have additional examples of permitted or non-permitted publication.

Data and information in the city

For a while now I’ve been in the habit of looking for data as I travel to work or around Bath. You can’t really work with data and information systems for any length of time without becoming a little bit obsessive about numbers or becoming tuned into interesting little dashboards:

My eye gets drawn to gauges and displays on devices as I’m curious about not just what they’re showing but also for whom the information is intended.

I can also tell you that for at least ten years, perhaps longer, the electronic signs on some of the buses running the Number 10 route in Bath have been buggy. Instead of displaying “10 Southdown” they read “(ode1fsOs1ss1 10sit2 Southdown)” with a flashing “s” in “sit”.

Yes. I wrote it down. I was curious about whether it was some misplaced control codes, but I couldn’t find a reference.

Having spent so long working on data integration and with technologies like Linked Data, I’m also curious about how people assign identifiers to things. A lot of what I’ve learnt about that went into writing this paper, which is a piece of work of which I’m very proud. It’s become an ingrained habit to look out for identifiers wherever I can find them.  It’s not escaped me that this is pretty close to train spotting, btw!

I’ve also recently started contributing to Bath: Hacked, which is Bath’s community-led open data project. It’s led me to pay even closer attention to the information around me in Bath, as it might turn up some useful data that could be published or indicate the potential for a useful digital service.

So to try and direct my “data magpie” habits into a more productive direction, I’ve started on a small project to photograph some of the information and data I find as I walk around the city. There are signs, information and data all around us but we don’t often really notice it or we just take the information for granted. I decided to try to catalogue some of the ways in which we might encounter data around Bath and, by extension, in other cities.

The entire set of photos is available on Flickr if you care to take a look. Think of it as a natural history study of data.

In the rest of this post I wanted to explore a few things that have occurred to me along the way. Areas where we can glimpse the digital environment and data infrastructure that is increasingly supporting the physical environment. And the ways in which data might be intentionally or incidentally shared with others.

Data as dark matter

For most people data is ephemeral stuff. It’s not something they tend to think about even though it’s being collected and recorded all around us. While there’s increasing awareness of how our personal data is collected and used by social networks and other services, there’s often little understanding of what data might be available about the built environment.

But you can see evidence of that data all around us. Data is a bit like dark matter: we often only know it exists based on its effects on other things which we more clearly understand. Once you start looking you can see identifiers everywhere:

Bridge identifiers

If something has an identifier then there will be data associated with it, creating a record that describes that object. As there is very likely to be a collection of those things then we can infer that there’s a database containing many similar records.

Once you start looking you can see databases everywhere: of lampposts, parking spaces, bins, and the monoliths that sit in our streets but which we rarely think about:

Traffic light control box

Once you realise all of these databases exist it’s natural to start asking questions such as how that information is collected, who is responsible for it, and when might it be useful?  There are databases everywhere and people are employed to look after them.

The bus driver’s role in data governance

Live bus times

I was looking forward to the installation of the Real Time Information sign at the bus stop (0180BAC30294) near my house. For a few years now I’ve been regularly taking a photo of the paper sign on the stop. Looking at that on my phone is still much quicker than using any of the online services or apps. A real time data feed was going to solve that. Only it didn’t. It’s made things worse:

My morning bus, the one that begins my commute to the Open Data Institute, is often not listed. I’ve had several morning conversations with Travelwest about it. Although, evoking Hello Lamppost, it feels like I’ve been arguing with the bus sign itself and would like to leave a note to others to say that, actually yes the Number 10 is really on its way.

I’m suddenly concerned that they may do away with that helpful paper sign. The real-time information feed exposes problems with data management that wouldn’t otherwise be evident. Real-time doesn’t necessarily always mean better.

Interestingly Travelwest have an FAQ that lists a number of reasons why some buses won’t appear on the RTI system. This includes the expected range of data and hardware problems, but also: “The bus driver has logged on to the ETM incorrectly, preventing the journey operated being ‘matched’ by the central RTI system“.

So it turns out that bus drivers have a key role in the data governance of this particular dataset. They’re not just responsible for getting the bus from A to B but also for ensuring that passengers know that it’s on its way. I wonder if that’s part of their induction?

The paperless city

There are more obvious signs of business processes that we can see around a city. These are stages in processes that require some public notice or engagement, such as planning applications or other “rights to object” to planned works:

Pole Objection Notice

In other cases the information is presented as an indication that a process has been completed successfully, such as gaining a premises licence, liability insurance or an energy rating certificate. If this information is being put on physical display then it’s natural to wonder whether there are digital versions that could or should be made available.

Also, in the majority of cases, making this information available digitally would probably be much better. There are certainly opportunities to create better digital services to help engage people in these processes. But in order to be inclusive I suspect paper-based approaches are going to be around for a while.

What would a digital public service look like that provided this type of city information, both on-demand and as notifications, to residents? The information might already be available on council websites, but you have to know that it’s there and then how to find it.

Visible to the public, but not for the public

Interestingly, not all of the information we can find around the city is intended for wider public consumption. It may be published into a public space but it might only be intended for a particular group of people, or useful at a particular point in time, e.g. during an emergency such as this map of fire sensors.

Fire hydrant

Most of the identifier examples I referred to above fall into this category. Only a small number of people need to know the identifier for a specific bin, traffic light control box, or bridge.

It also means that information may often be provided without context, as the intended audience knows how to read it or has the tools required to use it to unlock more information. This means that to properly interpret it you have to be able to understand the visual code that is used in these organisational hobo signs.

The importance of notice boards

For me there’s something powerful in the juxtaposition of these two examples:

Community notice board

Dynamic display board

The first is a community notice board. Anyone can come along and not only read it but also add to the available information. It’s a piece of community owned and operated information infrastructure. This manually updated map of the local farmers market is another nice example, as are the walls of flyers and event notices at the local library.

The second example is a sealed unit. It’s owned and operated by a single organisation who gets to choose what information is displayed. Community annotations aren’t possible. There’s no scope to add notices or graffiti to appropriate the structure for other purposes – something that you see everywhere else in the city. This is increasingly hard to do with digital infrastructures.

In my opinion a truly open city will include both types of digital and physical infrastructure. I dislike the top-down view of the smart city and prefer the vision of creating an open, annotatable data infrastructure for residents and local businesses to share information.

Useful perspective

In this rambling post I’ve tried to capture some of the thoughts that have occurred to me whilst taking a more critical look at how data and information are published in our cities. I’ve really only scratched the surface, but it’s been fun to take a step back and look at Bath with a slightly more critical eye.

I think it’s interesting to see how data leaks into the physical environment, either intentionally or otherwise. Using environments that people are familiar with might also be a useful way to get a wider audience thinking about the data that helps our society function, and how it is owned and operated.

It’s also interesting to consider how a world of increasingly connected devices and real-time information is going to impact this environment. Will all of this information move onto our phones, watches or glasses and out of the physical infrastructure? Or are we going to end up with lots more cryptic icons and identifiers on all kinds of bits of infrastructure?


“The scribe and the djinn’s agreement”, an open data parable

In a time long past, in a land far away, there was once a great city. It was the greatest city in the land and the vast marketplace at its centre was the busiest, liveliest marketplace in the world. People of all nations could be found there buying and selling their wares. Indeed, the marketplace was so large that people would spend days, even weeks, exploring its length and breadth and would still discover new stalls selling a myriad of items.

A frequent visitor to the marketplace was a woman known only as the Scribe. While the Scribe was often found roaming the marketplace even she did not know of all of the merchants to be found within its confines. Yet she spent many a day helping others to find their way to the stalls they were seeking, and was happy to do so.

One day, as a reward for providing useful guidance, a mysterious stranger gave the Scribe a gift: a small magical lamp. Upon rubbing the lamp a djinn appeared before the surprised Scribe and offered her a single wish.

“Oh venerable djinn” cried the Scribe, “grant me the power to help anyone that comes to this marketplace. I wish to help anyone who needs it to find their way to whatever they desire”.

With a sneer the djinn replied: “I will grant your wish. But know this: your new found power shall come with limits. For I am a capricious spirit who resents his confinement in this lamp”. And with a flash and a roll of thunder, the magic was completed. And in the hands of the Scribe appeared the Book.

The Book contained the name and location of every merchant in the marketplace. From that day forward, by reading from the Book, the Scribe was able to help anyone who needed assistance to find whatever they needed.

After several weeks of wandering the market, happily helping those in need, the Scribe was alarmed to discover that she was confronted by a long, long line of people.

“What is happening?” she asked of the person at the head of the queue.

“It is now widely known that no-one should come to the Market without consulting the Scribe” said the man, bowing. “Could you direct me to the nearest merchant selling the finest silks and tapestries?”

And from that point forward the Scribe was faced with a never-ending stream of people asking for help. Tired and worn and no longer able to enjoy wandering the marketplace as had been her whim, she was now confined to its gates. Directing all who entered, night and day.

After some time, a young man took pity on the Scribe, pushing his way to the front of the queue. “Tell me where all of the spice merchants are to be found in the market, and then I shall share this with others!”

But no sooner had he said this than the djinn appeared in a puff of smoke: “NO! I forbid it!”. With a wave of its arm the Scribe was struck dumb until the young man departed. With a smirk the djinn disappeared.

Several days passed and a group of people arrived at the head of the queue of petitioners.

“We too are scribes.” they said. “We come from a neighbouring town having heard of your plight. Our plan is to copy out your Book so that we might share your burden and help these people”.

But whilst a spark of hope was still flaring in the heart of the Scribe, the djinn appeared once again. “NO! I forbid this too! Begone!” And with a scream and a flash of light the scribes vanished. Looking smug the djinn disappeared.

Some time passed before a troupe of performers approached the Scribe. As a chorus they cried: “Look yonder at our stage, and the many people gathered before it. By taking turns reading from the Book, in front of a wide audience, we can easily share your burden”.

But shaking her head the Scribe could only turn away whilst the djinn visited ruin upon the troupe. “No more” she whispered sadly.

And so, for many years the Scribe remained as she had been, imprisoned within the subtle trap of the djinn of the lamp. Until, one day, a traveller appeared in the market. Upon reaching the head of the endless line of petitioners, the man asked of the Scribe:

“Where should you go to rid yourself of the evil djinn?”

Surprised, and with sudden hope, the Scribe turned the pages of her Book…

Open data and diabetes

In December my daughter was diagnosed with Type 1 diabetes. It was a pretty rough time. Symptoms can start and escalate very quickly. Hyperglycaemia and ketoacidosis are no joke.

But luckily we have one of the best health services in the world. We’ve had amazing care, help and support. And, while we’re only 4 months into dealing with a life-long condition, we’re all doing well.

Diabetes sucks though.

I’m writing this post to reflect a little on the journey we’ve been on over the last few months from a professional rather than a personal perspective. Basically, the first weeks of becoming a diabetic, or the parent of a diabetic, are a crash course in physiology, nutrition, and medical monitoring. You have to adapt to new routines for blood glucose monitoring, learn to give injections (and teach your child to do them), become good at book-keeping, plan for exercise, and remember to keep needles, lancets, monitors, emergency glucose and insulin with you at all times, whilst ensuring prescriptions are regularly filled.

Oh, and there’s a stupid amount of maths, because you’ll need to start calculating how much carbohydrate is in all of your meals and inject accordingly. No meal unless you do your sums.
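
To give a feel for that arithmetic, here is a small sketch of totalling the carbohydrate in a home-cooked meal from per-100g ingredient figures. The numbers are invented, and this only illustrates the book-keeping, not any dosing calculation.

```python
# A sketch of the everyday carb-counting arithmetic: total carbohydrate in a
# recipe from per-100g figures, then carbohydrate per portion.
# All numbers are invented for illustration; this is not dosing guidance.
ingredients = [
    # (name, grams used, carbs per 100g)
    ("risotto rice", 300, 78.0),
    ("onion", 100, 9.0),
    ("stock", 800, 1.0),
]

total_carbs = sum(grams * carbs_per_100g / 100
                  for _, grams, carbs_per_100g in ingredients)
portions = 4
print(f"Total carbs: {total_carbs:.0f}g, per portion: {total_carbs / portions:.0f}g")
```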

Good job we had that really great health service to support us (there’s data to prove it). And an amazing daughter who has taken it all in her stride.

Diabetics live a quantified life. Tightly regulating blood glucose levels means knowing exactly what you’re eating, and learning how your body reacts to different foods and levels of exercise. For example we’ve learnt the different ways that a regular school day versus school holidays affects my daughter’s metabolism. That we need to treat ahead for the hypoglycaemia that follows a few hours after some fun on the trampoline. And that certain foods (cereals, risotto) seem to affect insulin uptake.

So to manage the condition we need to know how many carbohydrates are in:

  • any pre-packaged food my daughter eats
  • any ingredients we use when cooking, so we can calculate a total portion size
  • any snack or meal that we eat out

Food labelling is pretty good these days so the basic information is generally available. But it’s not always available on menus or in an easy-to-use format.

The book and app that diabetic teams recommend is called Carbs and Cals. I was a little horrified by it initially as it’s just a big picture book of different portion sizes of food. You’re encouraged to judge everything by eye or weight. It seemed imprecise to me but with hindsight it’s perfectly suited to those early stages of learning to live with diabetes. No hunting over packets to get the data you need: just look at a picture, a useful visualisation. Simple is best when you’re overwhelmed with so many other things.

Having tried calorie counting I wanted to try an app to more easily track foods and calculate recipes. My Fitness Pal, for example, is pretty easy to use and does bar-code scanning of many foods. There are others that are more directly targeted at diabetics.

The problem is that, as I’ve learnt from my calorie counting experiments, the data isn’t always accurate. Many apps fill their databases through crowd-sourcing. But recipes and portion sizes change continually. And people make mistakes when they enter data, or enter just the bits they’re interested in. Look up any food on My Fitness Pal and you’ll find many duplicate entries. It makes me distrust the data because I’m concerned it’s not reliable. So for now we’re still reading packets.

Eating out is another adventure. There have been recent legislative changes to require restaurants to make more nutritional information available. If you search you may find information on a company website and can plan ahead. Sometimes it’s only available if you contact customer support. If you ask in a (chain) restaurant they may have it available in a ring-binder you can consult with the menu. This doesn’t make a great experience for anyone. Recently we’ve been told in a restaurant to just check online for the data (when we know it doesn’t exist), because they didn’t want to risk any liability by providing information directly. On another occasion we found that certain dishes – items from the children’s menu – weren’t included on the nutritional charts.

Basically, the information we want is:

  • often not available at all
  • available, but only if you know where to look or who to ask
  • potentially out of date, as it comes from non-authoritative sources
  • incomplete or inaccurate, even from the authoritative sources
  • not regularly updated
  • not in easy to use formats
  • available electronically, e.g. in an app, but without any clear provenance

The reality is that this type of nutritional and ingredient data is basically in the same state as government data was 6-7 years ago. It’s something that really needs to change.

Legislation can help encourage supermarkets and restaurants to make data available, but really it’s time for them to recognise that this is essential information for many people. All supermarkets, manufacturers and major chains will have this data already, so there should be little effort required in making it public.

I’ve wondered whether this type of data ought to be considered as part of the UK National Information Infrastructure. It could be collected as part of the remit of the Food Standards Agency. Having a national source would help remove ambiguity around how data has been aggregated.

Whether you’re calorie or carb counting, open data can make an important difference. It’s about giving people the information they need to live healthy lives.