
Layered Data: A Paper & Some Commentary

Two years ago I wrote a short paper about “layering” data but for various reasons never got round to putting it online. The paper tried to capture some of my thinking at the time about the opportunities and approaches for publishing and aggregating data on the web. I’ve finally got around to uploading it and you can read it here.

I’ve made a couple of minor tweaks in a few places but I think it stands up well, even given the recent pace of change around data publishing and re-use. I still think the abstraction that it describes is not only useful but necessary to take us forward on the next wave of data publishing.

Rather than edit the paper to bring it completely up to date with recent changes, I thought I’d publish it as is and then write some additional notes and commentary in this blog post.

You’re probably best off reading the paper, then coming back to the notes here. The illustration referenced in the paper is also now up on slideshare.

RDF & Layering

I see that the RDF Working Group, prompted by Dan Brickley, is now exploring the term. I should acknowledge that I also heard the term “layer” in conjunction with RDF from Dan, but I’ve tried to explore the concept from a number of perspectives.

The RDF Working Group may well end up using the term “layer” to mean a “named graph”. I’m using the term much more loosely in my paper. In my view an entire dataset could be a layer, as well as some easily identifiable sub-set of it. My usage might therefore be closer to Pat Hayes’s concept of a “Surface”, but I’m not sure.

I think that RDF is still an important factor in achieving the goal I outlined of allowing domain experts to quickly assemble aggregates through a layering metaphor. Or, if not RDF, then I think it would need to be based around a graph model, ideally one with a strong notion of identity. I also think that mechanisms to encourage sharing of both schemas and annotations are useful. It’d be possible to build such a system without RDF, but I’m not sure why you’d go to the effort.
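
To make the layering idea concrete, here is a minimal sketch using Python and rdflib, with invented example URIs: two independently published graphs that share an identifier can be composed simply by merging them, which is essentially what I mean by adding a layer.

    # A minimal sketch (rdflib, invented URIs) of composing two "layers":
    # independently published graphs that share identifiers simply merge.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDFS

    EX = Namespace("http://example.org/places/")
    STATS = Namespace("http://example.org/stats/")

    # Reference layer: core facts about a place
    reference = Graph()
    reference.add((EX.leeds, RDFS.label, Literal("Leeds")))

    # Annotation layer: someone else's data about the same identifier
    annotations = Graph()
    annotations.add((EX.leeds, STATS.population, Literal(751485)))

    # Composing the layers is just a graph merge; no re-modelling required
    composite = reference + annotations
    for triple in composite:
        print(triple)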

User Experience

One of the things that appeals to me about the concept of layering is that there are some nice ways to create visualisations and interfaces to support the creation, management and exploration of layers. It’s not hard to see how, given some descriptive metadata for a collection of layers, you could create:

  • A drag-and-drop tool for creating and managing new composite layers
  • An inspection tool that would let you explore how the dataset for an application or visualisation has been constructed, e.g. to explore provenance or to support sharing and customization. Think “view source” for data aggregation.
  • A recommendation engine that suggested new useful layers that could be added to a composite, including some indication of what additional query options might become available

There’s been some useful work done on describing datasets within the Linked Data community: VoID and DCAT, for example. However, there’s not yet enough data routinely available about the structure and relationships of individual datasets, nor enough research into how to provide useful summaries.

This is what prompted my work on an RDF Report Card to try and move the conversation forward beyond simply counting triples.

To start working with layers, we need to understand what each layer contains and how they relate to and complement one another.
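
To give a flavour of what that description might look like, here is a small sketch using rdflib and the VoID vocabulary; the dataset URIs and figures are invented for illustration.

    # A small sketch (invented URIs and figures) describing a layer and how it
    # links to another dataset, using the VoID vocabulary with rdflib.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCTERMS, OWL, RDF

    VOID = Namespace("http://rdfs.org/ns/void#")
    EX = Namespace("http://example.org/datasets/")

    g = Graph()
    g.bind("void", VOID)
    g.bind("dcterms", DCTERMS)

    # The layer itself: what it is and roughly how big it is
    g.add((EX.stations, RDF.type, VOID.Dataset))
    g.add((EX.stations, DCTERMS.title, Literal("UK railway stations")))
    g.add((EX.stations, VOID.triples, Literal(120000)))

    # How it relates to another layer: a linkset of owl:sameAs links
    g.add((EX.stations2geo, RDF.type, VOID.Linkset))
    g.add((EX.stations2geo, VOID.subjectsTarget, EX.stations))
    g.add((EX.stations2geo, VOID.objectsTarget, EX.geonames))
    g.add((EX.stations2geo, VOID.linkPredicate, OWL.sameAs))

    print(g.serialize(format="turtle"))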

Linked Data & Layers

In the paper I suggest that RDF & Linked Data alone aren’t enough and that we need systems, tools and vocabularies for capturing the required descriptive data and enabling the kinds of aggregation I envisage.

I also think that the Linked Data community is spending far too much effort on creating new identifiers for the same things and worrying about how best to define equivalences.

I think the leap of faith that’s required, and that people like the BBC have already taken, is that we just need to get much more comfortable re-using other people’s identifiers and publishing annotations. Yes, there will be times when identifiers diverge, but there’s a lot to be gained, especially in terms of efficiency around data curation, from focusing on just the value-added data rather than re-publishing yet another copy of a core set of facts.
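
As an illustrative sketch (the review vocabulary and values here are invented), publishing an annotation layer can be as simple as asserting new data against an identifier someone else already maintains, in this case a DBpedia URI.

    # A sketch of publishing annotations against an existing identifier rather
    # than minting a new one. The review vocabulary and values are invented.
    from rdflib import Graph, Literal, Namespace, URIRef

    REV = Namespace("http://example.org/vocab/review#")
    leeds = URIRef("http://dbpedia.org/resource/Leeds")  # someone else's identifier

    annotations = Graph()
    annotations.add((leeds, REV.visitorRating, Literal(4.2)))
    annotations.add((leeds, REV.reviewCount, Literal(87)))

    # Anyone holding the reference data can merge this straight in, because
    # the identifier is shared rather than duplicated.
    print(annotations.serialize(format="turtle"))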

There are efficiency gains to be had for existing businesses, as well as faster routes to market for startups, if they can reliably build on some existing data. I suspect that there are also businesses that currently compete with one another — because they’re having to compile or re-compile the same core data assets — that could actually complement one another if they could instead focus on the data curation or collection tasks at which they excel.

Types of Data

In the paper I set out seven different facets which I think cover the majority of types of data that we routinely capture and publish. The classification could be debated, but I think it’s a reasonable first attempt.

The intention is to illustrate that we can usefully group together different types of data, and that organisations may be particularly good at creating or collecting particular types. There’s scope for organisations to focus on being really good in a particular area; by avoiding needless competition around collecting and re-collecting the same core facts, there are almost certainly efficiency gains and cost savings to be had.

I’m sure there must be some prior work in this space, particularly around the core categories, so if anyone has pointers please share them.

There are also other ways to usefully categorise data. One area that springs to mind is how the data itself is collected, i.e. its provenance. E.g. is it collected automatically by sensors, or as a side-effect of user activity, or entered by hand by a human curator? Are those curators trained or are they self-selected contributors? Is the data derived from some form of statistical analysis?

I had toyed with provenance as a distinct facet, but I think it’s an orthogonal concern.

Layering & Big Data

A lot has happened in the last two years and I winced a bit at all of the Web 2.0 references in the paper. Remember that? If I were writing this now then the obvious trend to discuss as context to this approach is Big Data.

Chatting with Matt Biddulph recently he characterised a typical Big Data analysis as being based on “Activity Data” and “Reference Data”. Matt described reference data as being the core facts and information on top of which the activity data — e.g. from users of an application — is added. The analysis then draws on the combination to create some new insight, i.e. more data.

I referenced Matt’s characterisation in my Strata talk (with acknowledgement!). Currently Linked Data does really well in the Reference category, but there’s not a great deal of Activity data. So while it’s potentially useful in a Big Data world, there’s a lot of value still not being captured.

I think Matt’s view of the world chimes well with both the layered data concept and the data classifications that I’ve proposed. Most of the facets in the paper really define different types of Reference data. The outcome of a typical Big Data analysis is usually a new facet, an obvious one being “Comparative” data, e.g. identifying the most popular, most connected, most referenced resources in a network.
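
As a toy illustration of that pattern (with invented data), the analysis is essentially a join of activity events against reference identifiers, yielding a new comparative layer.

    # A toy sketch of deriving a "comparative" layer from activity plus
    # reference data. The data here is invented; a real analysis would run
    # over much larger logs.
    from collections import Counter

    # Reference layer: identifier -> label
    reference = {
        "http://example.org/artists/1": "Artist A",
        "http://example.org/artists/2": "Artist B",
    }

    # Activity layer: events generated by users of an application
    activity = [
        {"user": "u1", "played": "http://example.org/artists/1"},
        {"user": "u2", "played": "http://example.org/artists/1"},
        {"user": "u2", "played": "http://example.org/artists/2"},
    ]

    # Combining the two yields new data: a popularity ranking
    plays = Counter(event["played"] for event in activity)
    for uri, count in plays.most_common():
        print(reference[uri], count)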

However, there’s clearly a difference in approach between typical Big Data processing and the graph models that I think underpin a layered view of the world.

MapReduce workflows seem to work best with more regular data; however, newer approaches like Pregel illustrate the potential for “graph-native” Big Data analysis. But setting that aside, there’s no real contention: a layering approach to combining data doesn’t say anything about how the data must actually be used, and it can easily be projected out into structures that are amenable to indexing and processing in different ways.
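
For example, a SPARQL query is one straightforward way to project a graph into flat rows that a tabular or MapReduce-style pipeline can consume; this sketch uses rdflib with invented URIs and properties.

    # A sketch of projecting a graph layer into regular rows for tabular or
    # MapReduce-style processing. URIs and properties are invented.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDFS

    EX = Namespace("http://example.org/places/")
    STATS = Namespace("http://example.org/stats/")

    g = Graph()
    g.add((EX.leeds, RDFS.label, Literal("Leeds")))
    g.add((EX.leeds, STATS.population, Literal(751485)))

    rows = g.query("""
        PREFIX rdfs:  <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX stats: <http://example.org/stats/>
        SELECT ?place ?label ?population WHERE {
            ?place rdfs:label ?label ;
                   stats:population ?population .
        }
    """)
    for place, label, population in rows:
        print(place, label, population)  # one flat record per resource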

Kasabi

Looking at the last section of the paper it should be obvious that much of the origin of this analysis was early preparation for Kasabi.

I still think that there’s a great deal of potential to create a marketplace around data layers and tools for interacting with them. But we’re not there yet, for several reasons. Firstly, it’s taken time to get the underlying platform in place to support that. We’ve done that now, and you can expect more information on that from more official sources shortly. Secondly, I underestimated how much effort is still required to move the market forward: there’s still lots to be done to support organisations in opening up data before we can really explore more horizontal marketplaces. But that is a topic for another post.

This has been quite a ramble of a blog post but hopefully there are some useful thoughts here that chime with your own experience. Let me know what you think.


Thoughts on Linked Data Business Models

Scott Brinker recently published a great blog post covering 7 business models for Linked Data. The post is well worth a read and reviews the potential for both direct and indirect revenue generation from a range of different business models. I’ve been thinking about these same issues myself recently, so I’m pleased to see that others are doing similar analysis. Scott’s conclusion that, currently, Linked Data is more likely to drive indirect revenue is sound, and reflects where we are with the deployment of the technology.

The time is ripe though for organizations to begin exploring direct revenue generation models and it’s there that I wanted to add some thoughts and commentary to Scott’s posting.

Traffic

The traffic model, with its indirect revenue generation by driving traffic to existing content and services, is well understood. The same model has been used to encourage organizations to open up Web APIs, so it’s natural to consider this for Linked Data also.

Because it is tried and tested it’s currently one of the strongest arguments for driving adoption of Linked Data, so I’d put this right at the top of the list. The feedback loop that is in place now with search engines makes that traffic generation a reality.

Advertising

Scott mentions adverts as a possible revenue stream and raises the possibility of “data-layer ads”, by which I understand him to mean advertising included in the Linked Data itself. While I agree that an advertising model is a potential revenue stream, I don’t see that “data-layer ads” are really viable or actually useful in practice.

Adverts incorporated into raw data will be too easily stripped out or ignored by applications; by definition the adverts will be easily identifiable. RSS advertising doesn’t seem to have really taken off (I certainly never see them anyway) and I think this is for similar reasons: if the adverts are easily identifiable, then they can be stripped. And if they’re included in content or data values, then this causes problems for further machine-processing of the data and annoyances for end users.
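
To make that concrete, here is a tiny sketch (assuming a hypothetical ads vocabulary) of how trivially a consumer could drop advertising triples before doing anything else with the data.

    # A tiny sketch of why data-layer ads are fragile: assuming a hypothetical
    # ads vocabulary, a consumer can strip the advertising triples in one pass.
    from rdflib import Graph, Literal, Namespace, URIRef

    ADS = Namespace("http://example.org/vocab/ads#")
    EX = Namespace("http://example.org/products/")

    g = Graph()
    g.add((EX.widget, URIRef("http://example.org/vocab/core#name"), Literal("Widget")))
    g.add((EX.widget, ADS.sponsoredLink, URIRef("http://advertiser.example.org/")))

    # Drop anything using the ads vocabulary before further processing
    cleaned = Graph()
    for s, p, o in g:
        if not str(p).startswith(str(ADS)):
            cleaned.add((s, p, o))

    print(cleaned.serialize(format="turtle"))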

Of course a business could require, through its terms and conditions, that users of its Linked Data display ads, e.g. requiring data-layer ads to be displayed in some form to users of an application. In practice this can get problematic, especially if there’s not an obvious way to surface the ads to end users. But I think it’s also problematic because, unlike a Web API where I sign up to gain access, for an arbitrary Linked Data site there is no prior agreement required. My crawler or browser might fetch data without any knowledge of what those terms and conditions might be.

Adverts embedded into data are not a useful way to distribute them to end users. In an environment where adverts are increasingly profiled by a range of geographic, demographic or behavioural factors, incorporating blanket ads into data feeds loses all of that targeting capability. It also potentially loses the feedback, e.g. on click-throughs or impressions, that is useful for gauging the success of a campaign.

In my view advertising as a model to support Linked Data publishing is more likely to echo that used by the Guardian as part of its Open Platform terms and conditions (see Section 8, Advertising and Commercial Usage). The terms require users of the content to display ads from the Guardian’s advertising network on their own websites. This avoids the need to include adverts in the data layer and supports a conventional model for delivering ads, making it play well with current advertising platforms and targeting options.

Subscriptions

As Brinker notes, subscription models for data, content and services have been around for some time. The interesting thing is to see how these models have been evolving of late due to pressures in various industries, and how they intersect with the open data movement. For Linked Data to be most useful, some of it needs to be free: you need to make at least a bare minimum of data freely available, e.g. to identify objects of interest, to enable annotation and linking, etc. In my opinion a freemium model is the core of any subscription model for Linked Data.

Having previously worked in the academic publishing industry which is very heavily driven by subscription revenues, I’ve noticed a number of models that have come to the fore there, most recently driven by the Open Access movement. I think many of these are transferable to other contexts. So while the particulars will vary in different industries, the means of slicing up data into subscription packages are likely to be repeatable.

All of the following assume that some basic element of the Linked Data is free, but that one is paying for:

  • Full Access — Pay for access to detailed, denser data. The value-added data might include richer links to other datasets, more content, etc
  • Timely Access — Pay for access to the most recent version of the data. This leaves the bulk of the data open but delivers a commercial advantage to subscribers. As data gets older, it automatically becomes free (sketched in code below).
  • Archival Access — Putting archives of content, or large archival datasets on-line can be expensive in terms of data conversion, digitization, and service provision. So deep archives of data might only be available to subscribers. Commercial advantage derives from having more data to analyse and explore.
  • Block Access — Pay for access to a dataset based on time, e.g. “for the next 24 hours”; based on the number or frequency of accesses; or based on the number of concurrent accesses.
  • Convenient Access — Pay for access to the data through a specific mechanism. This might seem at odds with Linked Data, but it’s reasonable to assume that some organizations might want data feeds or dumps rather than on-line only access. This might come at a premium.

These variants can be combined, and might also be separated out into personal (non-commercial) and commercial subscription packages.
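
As a minimal sketch of the Timely Access variant mentioned above (the record structure and embargo period are invented), the rule is little more than a date comparison.

    # A minimal sketch of a "Timely Access" rule: records older than an embargo
    # window are free, newer ones need a subscription. Details are invented.
    from datetime import date, timedelta

    EMBARGO = timedelta(days=365)

    def accessible(record_date, subscriber, today=None):
        """Free once the embargo has lapsed; otherwise subscribers only."""
        today = today or date.today()
        return subscriber or (today - record_date) >= EMBARGO

    print(accessible(date(2008, 1, 1), subscriber=False))  # True: old enough to be free
    print(accessible(date.today(), subscriber=False))      # False: recent data needs a subscription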

It’s interesting to see how some of these (Timely Access, Convenient Access) are already in use in projects like MusicBrainz that blend Open Data with commercial models.

Sponsorship

One model that Scott Brinker doesn’t mention in his posting is Sponsorship. An organization might be funded to publish Linked Data, e.g. for the public good. The organization itself might be a charity and funded by donations.

It’s arguable that this might be more about cost recovery for service provision than a true business model, but I think it’s worth considering. Some of the open government data publishing efforts, and possibly even the Linked Data from the BBC, could be seen as falling into this category.

It’s probably most viable for public sector, cultural heritage and similar organizations.

Closing Thoughts

What needs to happen to explore these different models? Is it just a matter of individual organizations experimenting to see what works and what doesn’t?

I think that is largely the case, and we’ll definitely be seeing that process begin to happen in earnest in 2010; a process that we’ll be supporting and enabling with the Talis Platform.

From a technical perspective I’m interested to see how well protocols like OAuth and FOAF+SSL can be deployed to mediate access to licensed Linked Data.
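
Purely as a sketch of what that might look like on the wire (the endpoint and token below are invented), an OAuth-style request for a licensed Linked Data resource is just an ordinary HTTP request carrying a credential; FOAF+SSL would instead present a client certificate tied to a WebID.

    # A sketch only: fetching a licensed Linked Data resource using an
    # OAuth-style bearer token. The endpoint and token are invented.
    import requests

    response = requests.get(
        "https://data.example.org/id/premium/resource/123",  # hypothetical endpoint
        headers={
            "Accept": "text/turtle",
            "Authorization": "Bearer YOUR-ACCESS-TOKEN",  # obtained via an OAuth flow
        },
    )
    response.raise_for_status()
    print(response.text)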
