RDF Data Access Options, or Isn’t HTTP already the API?

This is a follow-up to my blog post from yesterday about RDF and JSON. Ed Summers tweeted to say:

…your blog post suggests that an API for linked data is needed; isn’t http already the API?

I couldn’t answer that in 140 characters, so am writing this post to elaborate a little on the last section of my post in which I suggested that “there’s a big data access gulf between de-referencing URIs and performing SPARQL queries”. What exactly do I mean there? And why do I think that the Linked Data API helps?

Is Your Website Your API?

Most Linked Data presentations that discuss the publishing of data to the web typically run through the Linked Data principles. At point three we reach the recommendation that:


“When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL)”

This encourages us to create sites that consist of a mesh of interconnected resources described using RDF. We can “follow our nose” through those relationships to find more information.

This gives us two fundamental data access options:

  • Resource Lookups: by de-referencing URIs we can obtain a (typically) complete description of a resource
  • Graph Traversal: following relationships and recursively de-referencing URIs to retrieve descriptions of related entities; this is (typically, but not necessarily) reconstituted into a graph on the client

However, if we take the “Your Website Is Your API” idea seriously, then we should be able to reflect all of the different points of interaction of that website as RDF, not just resource lookups (viewing a page) and graph traversal (clicking around).

As Tom Coates noted back in 2006 in “Native to a Web of Data“, good data-driven websites will have “list views and batch manipulation interfaces”. So we should be able to provide RDF views of those areas of functionality too. This gives us another kind of access option:

  • Listing: ability to retrieve lists/collections of things; navigation through those lists, e.g. by paging; and list manipulation, e.g. by filtering or sorting.

It’s possible to handle much of that by building some additional structure into your dataset, e.g. creating RDF Lists (or similar) of useful collections of resources. But if you bake this into your data then those views will potentially need to be re-evaluated every time the data changes. And even then there is still no way for a user to manipulate the views, e.g. to page or sort them.

So to achieve the most flexibility you need a more dynamic way of extracting and ordering portions of the underlying data. This is the role that SPARQL often fulfils: it provides some really useful ways to manipulate RDF graphs, and you can achieve far more with it than just extracting and manipulating lists of things.

SPARQL also supports another kind of access option that would otherwise require traversing some or all of the remote graph.

Examples would be: “does this graph contain any foaf:name predicates?” or “does anything in this graph relate to http://www.example.org/bob?”. These kinds of existence checks, as well as more complex graph pattern matching, also tend to be the domain of SPARQL queries. It’s more expressive, and potentially more efficient, to use a query language for that kind of question. So this gives us a fourth option:

  • Existence Checks: ability to determine whether a particular structure is present in a graph
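In SPARQL these map naturally onto ASK queries. As a rough sketch (two separate queries; the URIs are taken from the examples above):


PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# does this graph contain any foaf:name predicates?
ASK { ?s foaf:name ?name }

# does anything in this graph relate to http://www.example.org/bob?
ASK {
  { <http://www.example.org/bob> ?p ?o }
  UNION
  { ?s ?p <http://www.example.org/bob> }
}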

Interestingly, though, these are not often the kinds of questions that you can “ask” of a website. There’s no real correlation with typical web browsing features, although searching comes close for simple existence check queries.

Where the Linked Data API fits in

So there are at least four kinds of data access option. I doubt whether that list is exhaustive, but it’s a useful starting point for discussion.

SPARQL can handle all of these options and more. The graph pattern matching features, and the provision of four query types, let us perform any of these kinds of interaction. For example, a common way of implementing Resource Lookups over a triple store is to use a DESCRIBE or a CONSTRUCT query.
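As a sketch, a lookup for a hypothetical resource might be as simple as either of the following:


DESCRIBE <http://www.example.org/id/book/1234>

# ...or, for more control over the triples returned, an equivalent CONSTRUCT
CONSTRUCT { <http://www.example.org/id/book/1234> ?p ?o }
WHERE     { <http://www.example.org/id/book/1234> ?p ?o }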

However the problem, as I see it, is that when we resort to writing SPARQL graph patterns in order to request, say, a list of people, then we’ve kind of stepped around HTTP. We’re no longer specifying and refining our query by interacting with web resources via parameterised URLs, we’re tunnelling the request for what we want in a SPARQL query sent to an endpoint.

From a hypermedia perspective it would be much better if there were a way to handle the “Listing” access option using something that was better integrated with HTTP. It also happens that this might actually be easier for the majority of web developers to get to grips with, because they no longer have to learn SPARQL.

This is what I meant by a “RESTful API” in yesterday’s blog post. In my mind, “Listing things” sits in between Resource Lookups and Existence Checks or complex pattern matching in terms of access options.

It’s precisely this role that the Linked Data API is intended to fulfil. It defines a way to dynamically generate lists of resources from an underlying RDF graph, along with ways to manipulate those collections of resources, e.g. by sorting and filtering. It’s possible to use it to define a number of useful list views for an RDF dataset that nicely complement the relationships present in the data. It’s actually defined in terms of executing SPARQL queries over that graph, but this isn’t obvious to the end user.
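As a rough illustration, a client might page, sort and filter a list of resources using nothing more than URL parameters. The paths and filter names below are invented and would be defined by an API configuration, but the underscore-prefixed parameters are reserved names from the draft specification:


GET /doc/schools?_page=2&_pageSize=25&_sort=label
GET /doc/schools?type=PrimarySchool&_sort=label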

These features are supplemented with the definition of simple XML and JSON formats, complementing the RDF serializations that it supports. This is really intended to encourage adoption by making it easier to process the data using non-RDF tools.

So, Isn’t HTTP the API?

Which brings me to the answer to Ed’s question: isn’t HTTP the API we need? The answer is yes, but we need more than just HTTP: we also need well defined media types.

Mike Amundsen has created a nice categorisation of media types and a description of different types of factors they contain: H Factor.

Section 5.2.1.2 of Fielding’s dissertation explains that:


Control data defines the purpose of a message between components, such as the action being requested or the meaning of a response. It is also used to parameterize requests and override the default behavior of some connecting elements.

As it stands today neither RDF nor the Linked Data API specification ticks all of the H Factor boxes. What we’ve really done so far is define how to parameterise some requests, e.g. to filter or sort based on a property value, but we’ve not yet defined that in a standard media type; the API configuration captures a lot of the requisite information but isn’t quite there.

That’s a long rambly blog post for a Friday night! Hopefully I’ve clarified what I was referring to yesterday. I absolutely don’t want to see anyone define an API for RDF that steps around HTTP. We need something that is much more closely aligned with the web. And hopefully I’ve also answered Ed’s question.

Gridworks Reconciliation API Implementation

Gridworks is a really fantastic tool and there’s scope to extend it in all kinds of interesting ways. Jeni Tennison has recently published a great blog post describing how to use Gridworks for generating Linked Data. I strongly encourage you to read her posting as it not only provides a good introduction to Gridworks itself, but also shows a nice real world example of generating RDF using its built-in data cleaning and templating tools.

I was lucky enough to meet David Huynh at a workshop recently and chatted to him briefly about another aspect of Gridworks: its ability to match field values in a dataset to entities in Freebase, e.g. identifying a place based on just its name. Within Gridworks this process is known as “reconciliation”.

Reconciliation is an important step for generating good Linked Data as you’ll often need to correlate values in a dataset with URIs in existing datasets in order to generate links. E.g. matching company names to their URIs. While it is possible to generate identifiers algorithmically during a conversion this typically just defers the reconciliation work until a later stage, when you carry out cross-linking to introduce equivalence links.

Recognising that the ability to introduce new reconciliation services would be a powerful extension to Gridworks, David Huynh has been creating a draft specification that will allow third-parties to create and deploy their own reconciliation services. He’s been documenting his progress on implementing the client side of this protocol and has published a testing service.

It occurred to me that the reconciliation API is essentially a structured search over a dataset and thus could be implemented against the search interface exposed by Talis Platform stores. The RSS 1.0 feeds that the Platform returns include enough information to rank and filter results as required by the API.

I’ve created a simple Ruby application, using the Sinatra web framework, that implements the reconciliation API for any Talis Platform store. You can find the code on github if you want to have a play with it. As I note in the README there are some areas where customisation is useful to get the most from the service. So while in principle it can be used against any existing Platform store you can create a simple JSON config to tweak it for particular datasets.

There’s a live version of the code running on my server here: http://ldodds.com/gridworks/.

That page has a simple API console for carrying out queries, but consult the draft specification for more details. I think I’ve covered all of the basic features (but bug reports welcome!). Consult the README for notes on configuration options and implementation decisions.

As a simple illustration, let’s say that I have the value “Bath” in a dataset and want to match that to some area in the UK administrative geography. This information is available from the Linked Data exposed by statistics.data.gov.uk, which happens to be hosted in this Platform store. The reconciliation API we need can therefore be found at: http://ldodds.com/gridworks/govuk-statistics/reconcile. An HTTP GET on that location retrieves the service metadata.

If we use the API explorer we can use a simple HTML form to try out examples. Select govuk-statistics from the Store drop-down and then type Bath into the search box. You’ll get this result. This is not very readable by default, so if you’re using Firefox I recommend you install the JSONView extension which provides a nicely formatted display.

Our initial search returns a number of results, the highest ranked of these being the Westminster Constituency for Bath. That seems like a pretty good initial result to me. As it is the most relevant result in the search it’s marked as an exact match, so once integrated with Gridworks it will capture and store the reconciled identifier for you.

However, we may know that in the imaginary dataset we’re working with, a particular field doesn’t contain names of constituencies. It may instead refer to a Local Education Authority. We can refine our search by adding the URI that defines that type of resource into the type field in the API explorer.

Try pasting http://statistics.data.gov.uk/def/geography/LocalEducationAuthority into that field and running the search again. You’ll find that this time you get a single result, which is Bath and North East Somerset. Job done.
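Under the hood the explorer is just issuing a reconciliation query against that endpoint. Treating this as a sketch rather than a definitive rendering of the draft specification, the exchange looks roughly like this (the id in the response is elided):


GET /gridworks/govuk-statistics/reconcile?query={"query":"Bath","type":"http://statistics.data.gov.uk/def/geography/LocalEducationAuthority"}

{
  "result" : [ {
    "id"    : "...",
    "name"  : "Bath and North East Somerset",
    "type"  : [ "http://statistics.data.gov.uk/def/geography/LocalEducationAuthority" ],
    "score" : 1.0,
    "match" : true
  } ]
}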

Of course, to get the most from this you need to know what URIs you can use for filtering by types (and properties). But this is something that the Gridworks UI will help with. It can integrate with “suggestion services” that can be used to help map values to properties and types within a schema. I’ll be looking at how to expose those as my next piece of work.

Hopefully you can see how the overall system works. Feel free to have a play with the API to try it out for yourself. If you have comments on the implementation then I’d love to hear them, but I’d suggest that comments on the specification are best addressed to the gridworks mailing list.

I also suspect the Reconciliation API has uses outside of just Gridworks. For example, I wonder how easy it would be to introduce reconciliation into Google Spreadsheets using Google Apps Script? It’s also another nice demonstration of how easy it is to map simple RESTful APIs onto RDF datasets: this implementation works for any data in the Platform, no matter what schema it conforms to. Neat.

RDF Dataset Notifications

Like many people in the RDF community I’ve been thinking about the issue of syndicating updates to RDF datasets. If we want to support truly distributed aggregation and processing of data then we need an efficient way to share updates.

There’s been a lot of experimentation around different mechanisms, and PubSubHubbub seems to be a current favourite approach. I’ve been playing with it myself recently and have hacked up a basic push mechanism around Talis Platform stores. More on that another time.

But I’ve not yet seen any general discussion about the merits of different approaches, or even discussion about what it is that we really want to syndicate.

So let’s take it from the top.

It seems to me that there are basically three broad categories of information we want to syndicate:

  • Dataset Notifications — has a new dataset been added to a directory? has one been updated in some way, e.g. through the addition or removal of triples?
  • Resource Notifications — what resources have been added or modified within a dataset?
  • Triple Notifications — what triples have been changed within a dataset?

Each one of these categories is syndicating a different level of detail and may benefit from a different technical approach. For example there’s a different volume of information being exchanged if one is simply notifying dataset changes vs. notifying every changed triple. We’ll also likely need a different format or syntax.

Actually there may be a fourth category: notifications of graph structural changes to a dataset, e.g. adding or removing named graphs. I’ve not yet seen anyone exploring that level of syndication, but suspect it may be very useful.

Now, for each of those different categories, there are two different styles of notifications: push or pull. Pull mechanisms are typified by feed subscriptions, crawlers, or repeated queries of datasets. Push mechanisms are usually based on some form of publish-subscribe system.

Given those different scenarios, we can take a look at some existing technologies and categorise them. I’ve done just that and published a simple Google spreadsheet with my first stab at this analysis. (This probably needs a little more context in places but hopefully the classifications are fairly obvious).

PubSubHubbub seems to offer the most flexibility in that it mixes a standard Pull based Feed architecture with a Push based subscription system. It’s clearly worthy of the attention it’s getting. Other technologies offer similar features but are optimised for different purposes.
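For anyone who hasn’t dug into the protocol, the push side boils down to a subscriber registering a callback with a hub, after which the hub POSTs new and updated feed entries to that callback. A sketch of the subscription request, based on my reading of the 0.3 draft (the URLs are invented):


POST /hub HTTP/1.1
Host: hub.example.org
Content-Type: application/x-www-form-urlencoded

hub.mode=subscribe&hub.verify=async&hub.topic=http%3A%2F%2Fdata.example.org%2Fupdates.atom&hub.callback=http%3A%2F%2Fconsumer.example.org%2Fnotify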

However that doesn’t mean that PubSubHubbub is perfect out of the box. For example it’s worth noting that consumers aren’t required to use the Push aspects of the system; they can just subscribe to the feeds. So you need to be prepared to scale a PubSubHubbub system just as you would a Pull based Feed.

It may also be sub-optimal for systems which are syndicating out high-volume Triple level updates. The Feeds can potentially get very large and the hub system needs to be prepared to handle large exchanges. It also doesn’t say anything about how to catch-up or recover from missed updates. A hybrid approach may be required to cover for all use cases and scenarios and to produce a robust system.

In order to be able to properly compare different approaches we need to understand their respective trade-offs. I’m hoping this posting contributes to that discussion and can complement the ongoing community experimentation.

Am interested to hear your thoughts.

Linked Data Patterns: a free book for practitioners

A few months ago Ian Davis and I were chatting about some new approaches to helping practitioners climb the learning curve around Linked Data, RDF and related technologies. We were both keen to help communicate the value of Linked Data, share knowledge amongst practitioners, and to encourage the community to converge on best practices. We kicked around a number of different ideas in this vein.

For example, Ian was keen to provide guidance as to how to mix and match different vocabularies to achieve a particular goal, like describing a person or a book. Having a ready reference containing recipes for these common tasks would address a number of goals. He’s ended up exploring that idea further in the recently released Schemapedia. If you’ve not seen it yet, then you should take a look. It provides a really nice way to navigate through RDF vocabularies and explore their intersections.

The other thing that we discussed was Design Patterns. I’ve been a Design Pattern nut for some time now. Discovering them was something of a rite of passage for me during my Master’s dissertation. I’d spent weeks revising and honing a design for the distributed system I was building, only to discover that what I’d produced was already documented as a design pattern in an obscure corner of the research literature. While I’d clearly reinvented the wheel, the discovery not only provided external validation for what I’d produced, but also neatly illustrated the benefit of using design patterns to share knowledge and experience within a community. Knowing when to apply particular patterns is a key skill for any developer, and the terms are a part of the design vocabulary we all share.

I suggested to Ian that we explore writing some patterns for Linked Data. Patterns for assigning identifiers, modelling data, as well as application development. We experimented with this for a while but ended up parking the discussion for a few months whilst other priorities intervened.

I recently revived the project. It’s pretty clear to me that there’s still a big skills gap between experienced practitioners and those seeking to apply the technology. I think the current situation is reminiscent of the move of OO programming from the research lab out into the developer community; design patterns played a key role there too.

Ian and I have decided to share this with the community as an on-line book, a pattern catalogue that covers a range of different use cases. We started out with about half a dozen patterns, but over the last few weeks I’ve expanded that figure to thirty. I’ve still got a number on my short-list (more than a dozen, I think) but it’s time to start sharing this with the community. The work won’t ever be complete as the space is still unfolding, it will just get refined over time.

You can read the book online at http://patterns.dataincubator.org.

The work is licensed under a Creative Commons Attribution license so you’re free to use it as you see fit, but please attribute the source. If you want to download it, then there’s a PDF, and an EPUB too. We’re using DocBook for the text so there will be a number of different access options.

I’ll stress that this is a very early draft, so be gentle. But we’d love to hear your comments.

A Tour of the OS 50k Gazetteer Linked Data

The Ordnance Survey have today published the first in a series of open datasets. In addition to the administrative geography that was published last year, the Linked Data available from data.ordnancesurvey.co.uk now includes data from their 1:50 000 Scale Gazetteer. In this blog post I thought I’d post an overview of the dataset to summarise what it contains.

Analysis

The Gazetteer identifiers all have a base URL of:

http://data.ordnancesurvey.co.uk/id/50kGazetteer/.

The base URL is suffixed with a unique numeric code. I’m not sure where this originates from, and it’s not present in the underlying data.

The dataset consists of 2,368,655 triples (individual facts) asserted over 259,080 unique resources, so about 9 triples per resource. Here’s how the properties break down:

http://www.w3.org/1999/02/22-rdf-syntax-ns#type 259080
http://xmlns.com/foaf/0.1/name 259080
http://www.w3.org/2000/01/rdf-schema#label 259080
http://data.ordnancesurvey.co.uk/ontology/spatialrelations/northing 259080
http://data.ordnancesurvey.co.uk/ontology/spatialrelations/easting 259080
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/featureType 259080
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/oneKMGridReference 259080
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/twentyKMGridReference 259080
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/mapReference 296015

The first few properties are labels and a type for each resource. The additional predicates are from the OS Spatial Relations ontology, providing the Eastings and Northings for each feature. The remaining four predicates provide a “feature type” and OS map & grid references. There are slightly more map references, so some resources have more than one such property, presumably because they’re large enough to span more than one map. You can see that there are no links to other datasets as yet, nor lat/long co-ordinates.

Let’s look closer at some of the predicates. For the RDF types, I discovered that every resource has the same type: they’re all instances of a “Named Place”:

http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/NamedPlace.

Presumably then the detailed classification for the different types of landscape feature is present in the “feature type” predicate. A SPARQL query to count and group the values for that predicate gives me:

http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/Other 128662
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/OtherSettlement 41228
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/Farm 34723
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/WaterFeature 24425
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/HillOrMountain 14524
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/ForestOrWood 8708
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/Antiquity 5252
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/Town 1259
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/RomanAntiquity 237
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/City 62
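(For reference, that breakdown came from a grouping query along these lines; the sketch below assumes a store with support for aggregates, i.e. SPARQL 1.1 or a vendor extension.)


PREFIX gaz: <http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/>

SELECT ?featureType (COUNT(?feature) AS ?total)
WHERE { ?feature gaz:featureType ?featureType. }
GROUP BY ?featureType
ORDER BY DESC(?total)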

We can see that 128,662 resources (49% of the total) are simply “Other”, with another 41,228 being “Other Settlement”; not that inspiring! The rest of the feature types are more interesting, and give us some very basic data on various geographic features. The Roman Antiquity features piqued my interest; Hadrian’s Wall has the following identifier (click to see the data):

http://data.ordnancesurvey.co.uk/id/50kGazetteer/106584

The values for the Easting and Northing properties should be obvious, so I’ll skip over those. The remaining properties are all map references, and the values of these are all resources. So the Gazetteer has begun assigning URIs to all of the 1KM and 20KM grid references, as well as each of the OS Landranger maps. Here are some sample URIs for each, taken from the description of Hadrian’s Wall:

http://data.ordnancesurvey.co.uk/id/1kmgridsquare/NY3359
http://data.ordnancesurvey.co.uk/id/20kmgridsquare/NY24
http://data.ordnancesurvey.co.uk/id/OSLandrangerMap/85

The URIs seem predictable and can probably be derived from data found elsewhere. Unfortunately, no further data has been included about these resources. I believe they are place-holders for data that has yet to be released.

Overall the data in the Gazetteer is pretty sparse, but presumably it will become much richer once more OS data is released. Latitude and longitude values are something that I’d particularly like to see added. There’s an opportunity here for someone to link up these resources with pages in Wikipedia & resources in DBpedia.

Sample Queries

If you want to play with the data, here are a couple of SPARQL queries to get you started. The first retrieves 10 features classified as Roman Antiquities:


PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX spatial: <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/>
PREFIX gaz: <http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/>

SELECT ?uri ?label ?easting ?northing ?one ?twenty ?map 
WHERE {
  ?uri 
    #filter on type
    gaz:featureType gaz:RomanAntiquity;

    #bind everything we want to return
    rdfs:label ?label;
    spatial:easting ?easting;
    spatial:northing ?northing;
    gaz:oneKMGridReference ?one;
    gaz:twentyKMGridReference ?twenty;
    gaz:mapReference ?map.
}
LIMIT 10

Results in JSON

The following query lists all of the features on a specific OS Landranger map. So even though we don’t (yet) have any details about the map, we can use its identifier as a means to filter the results:


PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX spatial: <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/>
PREFIX gaz: <http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/>

SELECT ?uri ?label ?easting ?northing ?featureType 
WHERE {
  ?uri 
    #filter on map reference
    gaz:mapReference <http://data.ordnancesurvey.co.uk/id/OSLandrangerMap/85>;

    #bind everything we want to return
    rdfs:label ?label;
    spatial:easting ?easting;
    spatial:northing ?northing;
    gaz:featureType ?featureType.
}

Results in JSON

Enhanced Descriptions: “Premium Linked Data”

I’ve had several conversations recently with people who are either interested in, or actually implementing, Linked Data, and are struggling with some important questions:

  • How much data should I give away?
  • If I wanted to charge for more than just the basic data, then how would I handle that?

My usual response to the first of those questions is: “as much as you feel comfortable with”. There’s still so much data that’s not yet visible or accessible in machine-readable formats that any progress is good progress. Let’s get more data out there now. More is better.

It usually doesn’t take long to get to the second question. If you’ve spent time evangelising to people about the power and value of data, and particularly their data, then it’s natural for them to begin thinking about how it can be monetized.

Scott Brinker has done a good job of summarising a range of options for Linked Data business models. I’ve chipped into that discussion already. Instead what I wanted to briefly discuss here is some of the mechanics of implementing access to what we might call “premium Linked Data”, or as I’ll refer to it “Enhanced Descriptions”.

Premium Linked Data

It’s possible to publish Linked Data that is entirely access controlled. Access might be limited to users behind the firewall (“Enterprise Linked Data”) or only to authorised paying customers. As a paid up customer you’d be given an entry point into that Linked Data and would supply appropriate credentials in order to access it.

This data isn’t going to be something you’d discover on the open web. There are many different authentication models that could be used to mediate access to this “Dark Data”. The precise mechanisms aren’t that important and the right one is likely to vary for different industries and use cases, although I think there’s a strong argument for using something that dovetails nicely with HTTP and web infrastructure in general.

What interests me more is the scenario in which a data publisher might be exposing some public data under a liberal open license, but also wants to make available some “premium” metadata, i.e. some value-added data that is only available to paid-up customers. In this scenario it would be useful to be able to link together the open and closed data, allowing a user agent to detect that there is extra value hidden behind some kind of authentication barrier. I think this is likely to become a very common pattern as it aids discovery of the value-added material. Essentially it’s the existing pattern for access controlling content that we have on the web of documents.

It’s the mechanics of implementing this public/private scenario that have cropped up in my recent conversations.

Enhanced Descriptions

When I dereference the URI of a resource I will typically get redirected to a document that describes that resource. This document might contain data like this (in Turtle):


ex:document 
  foaf:primaryTopic ex:thing.

ex:thing 
  rdfs:label "Some Thing".

i.e. the document contains some data about the resource, and there’s a primary topic relationship between the document and the resource.

If we want to point to additional RDF documents that also describe this resource, or related data, then we can use an rdfs:seeAlso link:


ex:document 
  foaf:primaryTopic ex:thing.

ex:thing rdfs:label "Some Thing";
  rdfs:seeAlso ex:otherDocument.

We can use the rdfs:seeAlso relationship to point to additional documents either within a specific dataset or in other locations on the web. Those documents provide useful annotations about a resource.

An “Enhanced Description” will contain additional value-added data about a resource. We could just refer to this document using an rdfs:seeAlso link. But if we do that then a user agent can’t easily distinguish between an arbitrary rdfs:seeAlso link and one that refers to some additional data. We could instead use an additional relationship, a specialisation of rdfs:seeAlso, that can be used to disambiguate between the relationships. I’ve defined just such a predicate: ov:enhancedDescription.


ex:document 
  foaf:primaryTopic ex:thing.

ex:thing rdfs:label "Some Thing";
  rdfs:seeAlso ex:otherDocument;
  ov:enhancedDescription ex:premiumDocument.

By using a separate document to hold the value-added annotations we have the opportunity for user agents to identify those documents (via the predicate) and to also be challenged for credentials when they retrieve the URI (e.g. with an HTTP 401 response code).
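To make that concrete, an unauthenticated request for the premium document might be turned away like this (just a sketch; the Basic challenge stands in for whatever authentication scheme the publisher actually uses):


GET /premiumDocument HTTP/1.1
Host: data.example.org
Accept: text/turtle

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="premium-data"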

It also means data publishers can safely dip a toe in the open data waters, but leave richer descriptions protected but still discoverable behind an access control layer.

Another Approach?

Interestingly I discovered earlier today that OpenCalais returns a “402 Payment Required” status code for some documents.

To see this in practice visit their description of IBM and try accessing the last of the owl:sameAs links. I’m guessing they’re using a similar technique to the one I’ve outlined here. But the key difference is that rather than use separate documents, they’ve decided to create new URIs for the access controlled version of the Linked Data. It would be nice if someone out there could confirm that.

Assuming I’ve interpreted what they’re doing correctly, I think this approach has some failings. Firstly, it creates extra URIs that aren’t really needed. I’m not sure that we really need more URIs for things; a pattern in which publishers have two URIs (public & private) for each resource isn’t going to help matters.

Secondly, just like using a generic “see also” relation, using owl:sameAs means it’s impossible to detect which resource provides access to the premium data, as opposed to the others that exist on the web, without doing some fragile URI matching.

Apologies to the OpenCalais team if I’ve misunderstood the mechanism they’re using. I’ll happily publish a correction, but regardless, I’m intrigued by the 402 status code! 🙂

Summary

In my view, the “Enhanced Description” approach is a simple-to-implement pattern. It’s one that I’ve been recommending to people recently but I’ve not seen documented anywhere, so I thought I’d write it up.

I’d be interested to hear from others that have either implemented the same mechanism, or like OpenCalais are using other schemes.

Predicate Based Services

sameAs.org is a great service on a number of different levels. It provides a much needed piece of Semantic Web infrastructure and it achieves that through a simple clean interface and API. You don’t even need to know anything about RDF to get value from the service. In short it’s one of those nice web services that do one thing and do it really well.

I use the service as a frequent example in my talks and training sessions on Linked Data. For example, while it’s useful to review techniques for linking together datasets, in practice you can achieve a lot by simply doing a series of look-ups against sameAs.org. I’ve had some happy experiences of discovering connections between datasets without having to do any manual linking.

More than a few times recently I’ve been thinking that it would be useful to repeat what Hugh Glaser and Ian Millard achieved with sameAs.org, but for a number of other common RDF predicates.

In my opinion there are a small number of general predicates that will act as the backbone for the web of data. At the head of the predicate long tail we’ll find properties like: owl:sameAs, but also useful properties like dc:subject, foaf:knows and foaf:primaryTopic.

The topic based predicates (dc:subject, foaf:primaryTopic, foaf:topic, et al) are particularly useful for discovering documents and material that relate to a specific resource. An index of these would be extremely useful for inter-linking between content from different news and media organisations for example. I’d envisage that “topicOf.org” might index a range of different topic related predicates and expose some useful discovery tools, relations and equivalencies. Dan Brickley has a nice diagram that shows how these different predicates inter-relate.

“topicOf” is currently top of my list of these predicate based services, but the same approach would work in other contexts. For example a service that indexed foaf:knows would be useful for social networking applications, though I think that area is already well served by existing services. But what about:

  • “reviewsOf.org” — find reviews about a specific resource. I believe Tom Heath has thought about doing something like this for Revyu
  • “depictionsOf.org” — find pictures of a specific resource (foaf:depiction), e.g. person, place or thing (and reliably, not like the Flickr Wrapper)
  • “madeBy.org” — find documents, photos, or other resources that were made by a particular person (dc:creator, foaf:maker)

I can think of all sorts of useful purposes for these services. I also think that they could offer additional ways of engaging with the broader developer community and getting them to buy into the Linked Data vision.

Anyone want to have a crack at implementing some of these?

Thoughts on Enterprise Linked Data

There have been a number of discussions about “Enterprise Linked Data” recently, and I took part in a panel on precisely that topic at ESTC 2009. Unfortunately the panel was cut short due to time pressures so I didn’t get the chance to say everything I’d hoped. In lieu of that debate, here’s a blog post containing a few thoughts on the subject.

When we refer to enterprise use of Linked Data, there are a number of different facets to that discussion which are worth highlighting. In my opinion the issues and justifications relating to each of them are quite different. So different, in fact, that we’re in danger of having a confused debate unless we tease out these different aspects.

Aspects of the Debate

In my view there are three facets to the discussion:

  • Publishing Linked Data, the key question here being: What does an Enterprise have to benefit by publishing Linked Data?
  • Consuming Linked Data: What does an Enterprise have to benefit from consuming Linked Data?
  • Adopting Linked Data: What benefits can an Enterprise gain by deploying Linked Data technologies internally?

I think these facets, whilst obviously closely related, are largely orthogonal. For example I could see a scenario in which an organization consumed Linked Data but didn’t store or use it as RDF, instead just feeding it into existing applications. Similarly businesses could clearly adopt Linked Data as a technology without publishing any data to the web at all.

These issues are also largely orthogonal to the Open Data discussion: an enterprise might use, consume and publish Linked Data but this might not be completely open for others to reuse. The data may only be available behind the firewall, amongst authorised business partners, or only available to licensed third-parties. So, while the question of whether to publish open data is a very important aspect of the discussion, it’s not a defining one.

Here’s a few thoughts on each of these different facets.

Publishing Linked Data

So why might an enterprise publish Linked Data? And if that is a worthwhile goal, then is it clear how to achieve it? Let’s tackle the second question first as it’s the simplest.

There is an increasingly large amount of good advice available online, as well as tools and applications, to support the publishing of Linked Data. We’re making good strides towards making the important transition of moving Linked Data out of the research area and into the hands of actual practitioners. The How to Publish Linked Data on the Web tutorial is a great resource, but to my mind Jeni Tennison’s recent series on publishing Linked Data is an excellent end-to-end guide full of great practical advice.

We can declare victory when someone writes the O’Reilly book on the subject and do for Linked Data what RESTful Web Services did for REST. (And the two would make great companion pieces).

But technology issues aside, what are the benefits to an organization in publishing Linked Data? There are several ways to approach answering that question but I think in most discussions Linked Data tends to get compared with Web APIs. The value of creating an API is now reasonably well understood, and many of the benefits that come from opening data through an API also apply to Linked Data.

However the argument that Linked Data married with a SPARQL endpoint is as easy for developers to use as a Web API is still a little weak at this stage. SPARQL can be off-putting for developers used to simpler more tightly defined APIs. As a community we ought to consider it as a power tool and look for ways to make it easier to get started with. It’s also worth recognising that a search API is also a useful addition to a SPARQL endpoint as part of Linked Data deployment.

But publishing Linked Data can’t be directly compared to just creating an API, because it’s also largely a pattern for web publishing in general. It’s increasingly easy to instrument existing content management systems to expose RDF(a) and Linked Data. So rather than create a custom API, which will involve expensive development costs, particularly if it’s going to scale, it’s possible to simply expose Linked Data as part of an existing website.

By following the Linked Data pattern for web publishing, in particular the use of strong identifiers, an enterprise can end up with a single point of presence on the web for publishing all of its human and machine-readable data, resulting in a website that is strongly Search Engine Optimised. Search engines can better crawl and index well structured websites and are increasingly ingesting embedded RDFa to improve search results and rankings. That’s a strong incentive to publish Linked Data by itself.

Adopting Linked Data, particularly as part of a reorganization of an existing web presence, could deliver improved search engine rankings and exposure of content whilst saving on the costs of developing and running a custom API. The longer term benefits of being part of the growing web of data can be the icing on the cake.

Consuming Linked Data

Next we can consider why an enterprise might want to consume Linked Data.

To my knowledge organizations are currently only publishing Linked Open Data (albeit with some wide variations in licensing terms), so we’ll skip for the present whether enterprises have an option of consuming non-open Linked Data, e.g. as part of a privately licensed dataset.

The LOD Cloud is still growing and provides a great resource of highly interlinked data. The main issues that face an organization consuming this data are ones of quantity (there’s still a lot more data that could be available); quality (how good is the data, and how well is it modelled); and trust (picking and choosing reliable sources).

To some extent these issues face any organization that begins relying on a third-party API or dataset. However at present a lot of the data in the LOD cloud is still from secondary sources. The same can’t be said for the majority of web APIs, which tend to be published by the original curators of the data.

These issues should resolve themselves over time as more primary sources join the LOD cloud. Because Linked Data is all based on the same data model bulk loading and merging data from external sources is very simple. This gives enterprises the option of creating their own mirrors of LOD data sources which will provide some additional reassurances around stability and longevity.

Linked Data, with its reliance on strong identifiers, is much easier to navigate and process than other sources, even if you’re not storing the results of that processing as RDF. There’s also a much greater chance of serendipity, resulting in the discovery of new data sources and new data items, whereas there is virtually no serendipity with a Web API, as each API needs to be explicitly integrated.

But this benefit is only going to become evident if we continue to put effort into helping (enterprise) developers understand how to consume Linked Data, e.g. as part of existing frameworks or using new data integration patterns; this is an area that needs more attention. The Consuming Linked Data tutorial at ISWC 2009 was a good step in that direction, although the message needs to be circulated more widely, outside of the core semantic web community.

In my opinion it will be easier for enterprises to consume Linked Data if they first begin to publish it. By publishing data they are putting their identifiers out into the wild. These identifiers become points for annotation and reuse by the community, creating liminal zones from which the enterprise can harvest and filter useful data. This is a benefit that I think is unique to Linked Data, as with a Web API the end results are typically mashups or widgets displayed in a third-party application; these are just new silos one step removed from the data publisher.

Adopting Linked Data

Finally, what value could be gained if an organization adopts Linked Data internally as a means to manage and integrate data behind the firewall?

The issues and potential benefits here are largely a mixture of the above, except that there are little or no issues with trust as all of the data comes from known sources. In a typical enterprise environment Linked Data as an integration technology will be compared to a wider range of systems ranging from integrated developer tools through to middleware systems. There’s a reason why SOAP based systems are still well used in enterprise IT as most organizations aren’t (yet?) internally organized as if they were true microcosms of the web.

It’s interesting to see that Linked Data can potentially provide a means for solving many of the issues that Master Data Management is trying to address. Linked Data encourages strong identifiers; clean modelling; and linking to, rather than replicating, data. These are core issues for data consolidation within the enterprise. Coupled with the ability to link out to data that is part of the LOD Cloud, or published by business partners, Linked Data has the potential to provide a unifying infrastructure for managing both internal and external data sources.

It’s worth noting however that semantic technologies in general, e.g. document analysis, entity extraction, reasoning and ontologies, seem to be much more widely deployed in enterprise systems than Linked Data. This is no doubt in large part because the advantages of those technologies are currently more easily articulated, as they’re more easily packaged into a product.

Summary

In this post I wanted to tease out some of the questions that underpin the discussions about enterprise adoption of Linked Data. I’ve presented a few thoughts on those questions and I’d love to hear your opinions.

Along the way I’ve attempted to highlight some areas where we need to focus to help the transition from a researcher-led to a practitioner-led community. More data, more documentation, and more tools are the key themes.

Approaches to Publishing Linked Data via Named Graphs

This is a follow-up to my previous post on managing RDF using named graphs. In that post I looked at the basic concept of named graphs, how they are used in SPARQL 1.0/1.1, and discussed RESTful APIs for managing named graphs. In this post I wanted to look at how Named Graphs can be used to support publishing of Linked Data.

There are two scenarios I’m going to explore. The first uses Named Graphs in a way that provides a low friction method for publishing Linked Data. The second prioritizes ease of data management, and in particular the scenario where RDF is being generated by converting from other sources. Let’s look at each in turn and their relative merits.

Publishing Scenario #1: One Resource per Graph

For this scenario let’s assume that we’re building a simple book website. Our URI space is going to look like this:


http://www.example.org/id/thing/id
http://www.example.org/doc/thing/id

The first URI is the pattern for identifiers in our system; the second is the URI to which we’ll 303 redirect clients in order to get the document containing the metadata about the thing with that identifier. We’ll have several types of thing in our system: books, authors, and subjects.

The application will obviously include a range of features such as authentication, registration, search, etc. But I’m only going to look at the Linked Data delivery aspects of the application here, in order to highlight how Named Graphs can support that.

Our application is going to be backed by a triplestore that offers an HTTP protocol for managing Named Graphs, e.g. as specified by SPARQL 1.1. This triplestore will expose graphs from the following base URI:

http://internal.example.org/graphs

The simplest way to manage our application data is to store the data about each resource in a separate Named Graph. Each resource will therefore be fully described in a single graph, so all of the metadata about:

http://www.example.org/id/book/1234

will be found in:

http://internal.example.org/graphs/book/1234

The contents of that graph will be the Concise Bounded Description of http://www.example.org/id/book/1234, i.e. all its literal properties, any related blank nodes, as well as properties referencing related resources.

This means delivering the Linked Data view for this resource is trivial. A GET request to http://www.example.org/doc/book/1234 will trigger our application to perform a GET request to our internal triplestore at http://internal.example.org/graphs/book/1234.

If the triplestore supports multiple serializations then there’s no need for our application to parse or otherwise process the results: we can request the format desired by the client directly from the store and then proxy the response straight-through. Ideally the store would also support ETags and/or other HTTP caching headers which we can also reuse. ETags will be simple to generate as it will be easy to track whether a specific Named Graph has been updated.

As the application code to do all this is agnostic to the type of resource being requested, we don’t have to change anything if we were to expand our application to store information about new types of thing. This is the sort of generic behaviour that could easily be abstracted out into a reusable framework.

Another nice architectural feature is that it will be easy to slot in internal load-balancing over a replicated store to spread requests over multiple servers. Because the data is organised into graphs there are also natural ways to “shard” the data if we wanted to replicate the data in other ways.

This gets us a simple Linked Data publishing framework, but does it help us build an application, i.e. the HTML views of that data? Clearly in that case we’ll need to parse the data so that it can be passed off to a templating engine of some form. And if we need to compose a page containing details of multiple resources then this can easily be turned into requests for multiple graphs, as there’s a clear mapping from resource URI to graph URI.

When we’re creating new things in the system, e.g. capturing data about a new book, the application will have to handle any newly submitted data, perform any needed validation and generate an RDF graph describing the resource. It then simply PUTs the newly generated data to a new graph in the store. Updates are similarly straightforward.
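As a sketch of what that might look like on the wire, reusing the example URIs from above (the book metadata itself is purely illustrative):


PUT /graphs/book/1234 HTTP/1.1
Host: internal.example.org
Content-Type: text/turtle

@prefix dc: <http://purl.org/dc/terms/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<http://www.example.org/id/book/1234>
  rdfs:label "An Example Book" ;
  dc:creator <http://www.example.org/id/author/42> .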

If we want to store provenance data, e.g. an update history for each resource, then we can store that in a separate related graph, e.g. http://internal.example.org/graphs/provenance/book/1234.

Benefits and Limitations

This basic approach is simple, effective, and makes good use of the Named Graph feature. Identifying where to retrieve or update data is little more than URI rewriting. It’s well optimised for the common case for Linked Data, which is retrieving, displaying, and updating data about a single resource. To support more complex queries and interactions, ideally our triplestore would also expose a SPARQL endpoint that supported querying against a “synthetic” default graph which consists of the RDF union of all the Named Graphs in the system. This gives us the ability to query against the entire graph but still manage it as smaller chunks.

(Aside: Actually, we’re likely to want two different synthetic graphs: one that merges all our public data, and one that merges the public data + that in the provenance graphs.)

There are a couple of limitations which we’ll hit when managing data using this scenario. The first is that the RDF in the Linked Data views will be quite sparse, e.g. the data wouldn’t contain the labels of any referenced resources. To be friendly to Linked Data browsers we’ll want to include more data. We can work around this issue by performing two requests to the store for each client request: the first to get the individual graph, the second to perform a SPARQL query something like this:


PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

CONSTRUCT {
 <http://www.example.org/id/book/1234> ?p ?referenced.
 ?referenced rdfs:label ?label.
 ?referencing ?p2 <http://www.example.org/id/book/1234>.
 ?referencing rdfs:label ?label2.
} WHERE {
 <http://www.example.org/id/book/1234> ?p ?referenced.
 OPTIONAL {
   ?referenced rdfs:label ?label.
 }
 ?referencing ?p2 <http://www.example.org/id/book/1234>.
 OPTIONAL {
   ?referencing rdfs:label ?label2.
 }
}

The above query would be executed against the union graph of our triplestore and would let us retrieve the labels of any resources referenced by a specific book (in this case), plus the labels and properties of any referencing resources. This query can be done in parallel to the request for the graph and merged with its RDF by our application framework.

The other limitation is also related to how we’ve chosen to factor out the data into CBDs. Any time we need to put in reciprocal relationships, e.g. when we add or update resources, then we will have to update several different graphs. This could become expensive depending on the number of affected resources. We could potentially work around that by adopting an Eventual Consistency model and deferring updates using a message queue. This lets us relax the constraint that updates to all resources need to be synchronized, allowing more of that work to be done both asynchronously and in parallel. The same approach can be applied to manage lists of items in the store, e.g. a list of all authors: these can be stored as individual graphs, but regenerated on a regular basis.

The same limitation hits us if we want to do any large scale updates to all resources. In this case SPARUL updates might become more effective, especially if the engine can update individual graphs, although handling updates to the related provenance graphs might be problematic. What I think is interesting is that in this data management model this is the only area in which we might really need something with the power of SPARUL. For the majority of use cases graph level updates using simple HTTP PUTs coupled with a mechanism like Changesets are more than sufficient. This is one reason why I’m so keen to see attention paid to the HTTP protocol for managing graphs and data in SPARQL 1.1: not every system will need SPARUL.

The final limitation relates to the number of named graphs we will end up storing in our triplestore. One graph per resource means that we could easily end up with millions of individual graphs in a large system. I’m not sure that any triplestore is currently handling this many graphs, so there may be some scaling issues. But for small-medium sized applications this should be a minor concern.

Publishing Scenario #2: Multiple Resources per Graph

The second scenario I want to introduce in this posting is one which I think is slightly more conventional. As a result I’m going to spend less time reviewing it. Rather than using one graph per resource, we instead store multiple resources per Named Graph. This means that each Named Graph will be much larger, perhaps including data about thousands of resources. It also means that there may not be a simple mapping from a resource URI to a single graph URI: the triples for each resource may be spread across multiple graphs, although there’s no requirement that this be the case.

Whereas the first scenario was optimised for data that was largely created, managed, and owned by a web application, this scenario is most useful when the data in the store is derived from other sources. The primary data sources may be a large collection of inter-related spreadsheets which we are regularly converting into RDF, and the triplestore is just a secondary copy of the data created to support Linked Data publishing. It should be obvious that the same approach could be used when aggregating existing RDF data, e.g. as a result of a web crawl.

To make our data conversion workflow easier to manage it makes sense to use a Named Graph per data source, i.e. one for each spreadsheet, rather than one per resource. E.g.:


http://internal.example.org/graphs/spreadsheet/A
http://internal.example.org/graphs/spreadsheet/B
http://internal.example.org/graphs/spreadsheet/C

The end result of our document conversion workflow would then be the updating or replacing of a single specific Named Graph in the system. The underlying triplestore in our system will need to expose a SPARQL endpoint that includes a synthetic graph which is the RDF union of all graphs in the system. This ensures that where data about an individual resource might be spread across a number of underlying graphs, that a union view is available where required.

As noted in the first scenario we can store provenance data in a separate related graph, e.g. http://internal.example.org/graphs/provenance/spreadsheet/A.

Benefits and Limitations

From a data publishing point of view our application framework can no longer use URI rewriting to map a request to a GET on a Named Graph. It must instead submit SPARQL DESCRIBE or CONSTRUCT queries to the triplestore, executing them against the union graph. This lets the application ignore the details of the organisation and identifiers, of the Named Graphs in the store when retrieving data.
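For example, the description of a single resource can be gathered regardless of which source graph(s) it happens to live in. A sketch, reusing the book URI from the first scenario purely as an example:


CONSTRUCT { <http://www.example.org/id/book/1234> ?p ?o }
WHERE {
  GRAPH ?g { <http://www.example.org/id/book/1234> ?p ?o }
}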

If the application is going to support updates to the underlying data then it will need to know which Named Graph(s) must be updated. This information should be available by querying the store to identify the graphs that contain the specific triple patterns that must be updated. SPARUL request(s) can then be issued to apply the changes across the affected graphs.
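Finding the affected graphs is itself just a query. For example, this sketch finds every graph containing triples about a given resource:


SELECT DISTINCT ?g
WHERE {
  GRAPH ?g { <http://www.example.org/id/book/1234> ?p ?o }
}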

The difficulty of co-ordinating updates from the application with updates from the document conversion (or crawling) workflow means that this scenario may be best suited for read-only publishing of data.

It’s clear that this approach is much more optimised to support the underlying data conversion and/or collection workflows than the publishing web application. The trade-off doesn’t add much complexity to the application implementation, but it doesn’t exhibit some of the same architectural benefits, e.g. easy HTTP caching, data sharding, etc., that the first model does.

Summary

In this blog post I’ve explored two different approaches to managing and publishing RDF data using Named Graphs. The first scenario described an architecture that used Named Graphs in a way that simplified application code whilst exposing some nice architectural properties. This was traded off against ease of data management for large scales updates to the system.

The second scenario was more optimised for data conversion & collection workflows and is particularly well suited for systems publishing Linked Data derived from other primary sources. This flexibility was traded off against a slightly more complex application implementation.

My goal has been to try to highlight different patterns for using Named Graphs and how those patterns place greater or lesser emphasis on features such as RESTful protocols for managing graphs, and different styles of update language. In reality an application might mix together both styles in different areas, or even at different stages of its lifecycle.

If you’re using Named Graphs in your applications then I’d love to hear more about how you’re making use of the feature. Particularly if you’ve layered on additional functionality such as versioning and other elements of workflow.

Better understanding of how to use these kinds of features will help the community begin assembling good application frameworks to support Linked Data application development.