Two years ago I wrote a short paper about “layering” data but for various reasons never got round to putting it online. The paper tried to capture some of my thinking at the time about the opportunities and approaches for publishing and aggregating data on the web. I’ve finally got around to uploading it and you can read it here.
I’ve made a few minor tweaks, but I think it stands up well, even given the recent pace of change around data publishing and re-use. I still think the abstraction it describes is not only useful but necessary to take us forward into the next wave of data publishing.
Rather than edit the paper to bring it completely up to date with recent changes, I thought I’d publish it as is and then write some additional notes and commentary in this blog post.
You’re probably best off reading the paper, then coming back to the notes here. The illustration referenced in the paper is also now up on slideshare.
RDF & Layering
I see that the RDF Working Group, prompted by Dan Brickley, is now exploring the term “layer”. I should acknowledge that I also heard the term in conjunction with RDF from Dan, but I’ve tried to explore the concept from a number of perspectives.
The RDF Working Group may well end up using the term “layer” to mean a “named graph”. I’m using the term much more loosely in my paper. In my view an entire dataset could be a layer, as well as some easily identifiable sub-set of it. My usage might therefore be closer to Pat Hayes’s concept of a “Surface”, but I’m not sure.
I think that RDF is still an important factor in achieving the goal I outlined of allowing domain experts to quickly assemble aggregates through a layering metaphor. Or, if not RDF, then I think it would need to be based around a graph model, ideally one with a strong notion of identity. I also think that mechanisms to encourage sharing of both schemas and annotations would be useful. It’d be possible to build such a system without RDF, but I’m not sure why you’d go to the effort.
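To make the graph-model point a little more concrete, here’s a rough sketch in Python using rdflib. All of the URIs and the “stats” vocabulary are made up for illustration; the point is simply that a layer is just a set of triples, a composite is the union of the layers you choose to stack, and shared identifiers are what make the layers line up.

```python
# A minimal sketch of layering over a graph model, using rdflib.
# All URIs and the "stats" vocabulary below are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/id/")        # hypothetical identifier scheme
STATS = Namespace("http://example.org/stats/")  # hypothetical annotation vocabulary

# A "core facts" layer published by one party...
core = Graph()
core.add((EX.Bath, RDFS.label, Literal("Bath")))

# ...and an annotation layer published by someone else, re-using the same
# identifier rather than minting a new one.
annotations = Graph()
annotations.add((EX.Bath, STATS.population, Literal(83992)))

# A composite layer is simply the union of the two; the shared identifier
# EX.Bath is what joins them up.
composite = Graph()
composite += core
composite += annotations

print(composite.serialize(format="turtle"))
```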
User Experience
One of the things that appeals to me about the concept of layering is that there are some nice ways to create visualisations and interfaces to support the creation, management and exploration of layers. It’s not hard to see how, given some descriptive metadata for a collection of layers, you could create:
- A drag-and-drop tool for creating and managing new composite layers
- An inspection tool that would let you explore how the dataset for an application or visualisation has been constructed, e.g. to explore provenance or to support sharing and customization. Think “view source” for data aggregation.
- A recommendation engine that suggested new useful layers that could be added to a composite, including some indication of what additional query options might become available
There’s been some useful work done on describing datasets within the Linked Data community: VoID and DCAT, for example. However, there’s not yet enough data routinely available about the structure and relationships of individual datasets, nor enough research into how to provide useful summaries.
This is what prompted my work on an RDF Report Card to try and move the conversation forward beyond simply counting triples.
To start working with layers, we need to understand what each layer contains and how they relate to and complement one another.
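To give a flavour of what that descriptive data might look like, here’s a hedged sketch using the existing VoID vocabulary, parsed with rdflib in Python. The dataset URIs, titles and counts are all invented; only the void: and dcterms: terms themselves are real.

```python
# A sketch of the kind of descriptive metadata that layer-style tooling needs,
# expressed with the VoID vocabulary. Dataset URIs and figures are invented.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

VOID = Namespace("http://rdfs.org/ns/void#")

description = """
@prefix void: <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex: <http://example.org/datasets/> .

ex:places a void:Dataset ;
    dcterms:title "Core place data" ;
    void:triples 250000 ;
    void:vocabulary <http://www.w3.org/2003/01/geo/wgs84_pos#> .

ex:reviews a void:Dataset ;
    dcterms:title "User review annotations" ;
    void:triples 40000 .

ex:reviewsToPlaces a void:Linkset ;
    void:subjectsTarget ex:reviews ;
    void:objectsTarget ex:places ;
    void:linkPredicate dcterms:subject ;
    void:triples 12000 .
"""

g = Graph()
g.parse(data=description, format="turtle")

# Enough structure for a tool to suggest that the reviews layer can usefully
# be stacked on top of the places layer.
for linkset in g.subjects(RDF.type, VOID.Linkset):
    print("Candidate join between layers:", linkset)
```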
Linked Data & Layers
In the paper I suggest that RDF & Linked Data alone aren’t enough and that we need systems, tools and vocabularies for capturing the required descriptive data and enabling the kinds of aggregation I envisage.
I also think that the Linked Data community is spending far too much effort on creating new identifiers for the same things and worrying about how best to define equivalences between them.
I think the leap of faith that’s required, and that people like the BBC have already taken, is that we just need to get much more comfortable re-using other people’s identifiers and publishing annotations. Yes, there will be times when identifiers diverge, but there’s a lot to be gained, especially in terms of efficiency around data curation, from focusing on the value-added data rather than re-publishing yet another copy of the same core facts.
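As a small, hedged illustration of what that looks like in practice, the Python/rdflib sketch below publishes annotations directly against DBpedia’s identifier for London, rather than minting a new URI and maintaining owl:sameAs links back to it. The review vocabulary is made up.

```python
# A sketch of re-using someone else's identifier: the value-added data is
# published as annotations against DBpedia's URI, with no copy of DBpedia's
# own facts and no owl:sameAs bookkeeping. The review vocabulary is invented.
from rdflib import Graph, Literal, Namespace, URIRef

REVIEW = Namespace("http://example.org/vocab/review#")  # hypothetical vocabulary
london = URIRef("http://dbpedia.org/resource/London")   # someone else's identifier

annotations = Graph()
annotations.add((london, REVIEW.rating, Literal(4.5)))
annotations.add((london, REVIEW.comment, Literal("Great museums, terrible weather")))

print(annotations.serialize(format="turtle"))
```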
There are efficiency gains to be had from existing businesses, as well as faster routes to market for startups, if they can reliably build on some existing data. I suspect that there are also businesses that currently compete with one another — because they’re having to compile or re-compile the same core data assets — that could actually complement one another if they could instead focus on the data curation or collection tasks at which they excel.
Types of Data
In the paper I set out seven different facets which I think cover the majority of types of data that we routinely capture and publish. The classification could be debated, but I think it’s a reasonable first attempt.
The intention is to try and illustrate that we can usefully group together different types of data, and that organisations may be particularly good at creating or collecting particular types of data. There’s scope for organisations to focus on being really good in a particular area; by avoiding needless competition around collecting and re-collecting the same core facts, there are almost certainly efficiency gains and cost savings to be had.
I’m sure there must be some prior work in this space, particularly around the core categories, so if anyone has pointers please share them.
There are also other ways to usefully categorise data. One area that springs to mind is how the data itself is collected, i.e. its provenance. E.g. is it collected automatically by sensors, or as a side-effect of user activity, or entered by hand by a human curator? Are those curators trained or are they self-selected contributors? Is the data derived from some form of statistical analysis?
I had toyed with provenance as a distinct facet, but I think it’s an orthogonal concern.
Layering & Big Data
A lot has happened in the last two years and I winced a bit at all of the Web 2.0 references in the paper. Remember that? If I were writing it now, the obvious trend to discuss as context for this approach would be Big Data.
Chatting with Matt Biddulph recently he characterised a typical Big Data analysis as being based on “Activity Data” and “Reference Data”. Matt described reference data as being the core facts and information on top of which the activity data — e.g. from users of an application — is added. The analysis then draws on the combination to create some new insight, i.e. more data.
I referenced Matt’s characterisation in my Strata talk (with acknowledgement!). Currently Linked Data does really well in the Reference category, but there’s not a great deal of Activity data. So while it’s potentially useful in a Big Data world, there’s a lot of value still not being captured.
I think Matt’s view of the world chimes well with both the layered data concept and the data classifications that I’ve proposed. Most of the facets in the paper really define different types of Reference data. The outcome of a typical Big Data analysis is usually a new facet, an obvious one being “Comparative” data, e.g. identifying the most popular, most connected, most referenced resources in a network.
However, there’s clearly a difference in approach between typical Big Data processing and the graph models that I think underpin a layered view of the world.
MapReduce workflows seem to work best with more regular data; however, newer approaches like Pregel illustrate the potential for “graph-native” Big Data analysis. But setting that aside, there’s no real tension: a layering approach to combining data doesn’t say anything about how the data must actually be used, and it can easily be projected out into structures that are amenable to indexing and processing in different ways.
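As a rough sketch of what I mean by projecting a graph out into something more regular, the fragment below (plain Python, with invented triples) flattens a small “activity” layer into per-subject records and then derives a simple comparative measure from it; the same idea scales up to feeding a MapReduce or Pregel-style job.

```python
# A sketch of projecting a graph layer into more regular, row-like records,
# the shape a MapReduce-style job tends to prefer. Triples are invented.
from collections import Counter, defaultdict

activity = [
    ("ex:alice", "follows", "ex:bob"),
    ("ex:carol", "follows", "ex:bob"),
    ("ex:bob",   "follows", "ex:alice"),
]

# Projection: group the graph into per-subject records keyed by identifier.
rows = defaultdict(dict)
for s, p, o in activity:
    rows[s].setdefault(p, []).append(o)

# A simple "comparative" layer derived from the activity data:
# the most referenced resources in the network.
in_degree = Counter(o for _, p, o in activity if p == "follows")
print(in_degree.most_common(2))  # [('ex:bob', 2), ('ex:alice', 1)]
```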
Kasabi
Looking at the last section of the paper, it should be obvious that much of this analysis originated as early preparation for Kasabi.
I still think that there’s a great deal of potential to create a marketplace around data layers and tools for interacting with them. But we’re not there yet, for several reasons. Firstly, it’s taken time to get the underlying platform in place to support that. We’ve done that now, and you can expect more information from more official sources shortly. Secondly, I underestimated how much effort is still required to move the market forward: there’s still lots to be done to support organisations in opening up data before we can really explore more horizontal marketplaces. But that is a topic for another post.
This has been quite a ramble of a blog post but hopefully there are some useful thoughts here that chime with your own experience. Let me know what you think.
Layers, as you’ve outlined them, can help with thinking about data in the abstract, and possibly also as an aid in modelling. It kind of puts me in mind of a layer of abstraction above schema.org and the like.
I’m not sure it works as well as an organising principle for data markets. For that, a more intuitive concept for most people is likely to be dataset boundaries and overlaps.
I suspect that people within particular domains know where those boundaries are in concrete terms, but wouldn’t be able to articulate it in abstract terms.
To me, the question is whether they’d be able to see the value in being able to go across those boundaries and augment their own datasets with those of others, or whether there’s a need for professional ‘linkers’ who spot the value in connections and sell the integration.
Hello. Just to let you know that a very similar “information layer” concept is already implemented and in use in the Wandora application (http://www.wandora.org). Unfortunately, Wandora is not a native RDF application, but it can convert RDF documents to its internal data model, Topic Maps. (See: http://www.wandora.org/wandora/wiki/index.php?title=Documentation#Topic_map_layers)
Hi Wilbert,
I wasn’t quite suggesting that users of a marketplace would be presented with some kind of layer view, but rather that this provides a useful metaphor for understanding how datasets relate to one another.
For the primary view, I’d expect users to be interacting with datasets, much as they do with Kasabi and other marketplaces today. It’s just that some of those datasets might be individual “layers”, whilst others are composed out of several other datasets.
If I want to use a dataset then I agree it’s very important to understand its boundaries. I think that’s one of the benefits of an RDF approach: the boundaries, and the points at which different datasets join up, can be very clear. This means that kind of structural information can be made available to non-domain experts, i.e. people looking to reuse and maybe remix the data.
I agree about “linkers”. I expect that such a marketplace would support people with sufficient skills and domain knowledge to put that to use: creating useful packages of data, and connecting up different datasets to make them more useful.
Aki: thanks for the pointer, I’ll take a look.
Cheers,
L.