RDF Data Access Options, or Isn’t HTTP already the API?

This is a follow-up to my blog post from yesterday about RDF and JSON. Ed Summers tweeted to say:

…your blog post suggests that an API for linked data is needed; isn’t http already the API?

I couldn’t answer that in 140 characters, so am writing this post to elaborate a little on the last section of my post in which I suggested that “there’s a big data access gulf between de-referencing URIs and performing SPARQL queries”. What exactly do I mean there? And why do I think that the Linked Data API helps?

Is Your Website Your API?

Most Linked Data presentations that discuss publishing data to the web run through the Linked Data principles. At point three we reach the recommendation that:


“When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL)”

This encourages us to create sites that consist of a mesh of interconnected resources described using RDF. We can “follow our nose” through those relationships to find more information.

This gives us two fundamental data access options:

  • Resource Lookups: by dereferencing URIs we can obtain a (typically) complete description of a resource
  • Graph Traversal: following relationships and recursively de-referencing URIs to retrieve descriptions of related entities; this is (typically, but not necessarily) reconstituted into a graph on the client (sketched below)
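
As a minimal Turtle sketch of the traversal option (invented URIs, prefixes omitted): de-referencing ex:bob returns a description that mentions ex:alice, whose URI can be de-referenced in turn to grow the client-side graph.

# description retrieved by de-referencing ex:bob
ex:bob
  foaf:name "Bob";
  foaf:knows ex:alice.

# description retrieved by then de-referencing ex:alice
ex:alice
  foaf:name "Alice".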

However, if we take the “Your Website Is Your API” idea seriously, then we should be able to reflect all of the different points of interaction of that website as RDF, not just resource lookups (viewing a page) and graph traversal (clicking around).

As Tom Coates noted back in 2006 in “Native to a Web of Data”, good data-driven websites will have “list views and batch manipulation interfaces”. So we should be able to provide RDF views of those areas of functionality too. This gives us another kind of access option:

  • Listing: ability to retrieve lists/collections of things; navigation through those lists, e.g. by paging; and list manipulation, e.g. by filtering or sorting.

It’s possible to handle much of that by building some additional structure into your dataset, e.g. creating RDF Lists (or similar) of useful collections of resources. But if you bake this into your data then those views will potentially need to be re-evaluated every time the data changes. And even then there is still no way for a user to manipulate the views, e.g. to page or sort them.
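
To make the baked-in approach concrete, here is a hedged Turtle sketch of such a pre-built collection (the ex: terms are invented for illustration, prefixes omitted):

# a hand-maintained collection, stored as an RDF list in the dataset itself
ex:featuredPlaces
  rdfs:label "Featured places";
  ex:members (ex:bath ex:bristol ex:york).

Any change to which places are “featured” means editing this structure directly, and a client only ever gets it in this fixed order.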

So to achieve the most flexibility you need a more dynamic way of extracting and ordering portions of the underlying data. This is the role that SPARQL often fulfils: it provides some really useful ways to manipulate RDF graphs, and you can achieve far more with it than just extracting and manipulating lists of things.

SPARQL also supports another kind of access option that would otherwise require traversing some or all of the remote graph.

Examples would be: “does this graph contain any foaf:name predicates?” or “does anything in this graph relate to http://www.example.org/bob?”. These kinds of existence checks, as well as more complex graph pattern matching, also tend to be the domain of SPARQL queries. It’s more expressive, and potentially more efficient, to use a query language for that kind of question. So this gives us a fourth option:

  • Existence Checks: ability to determine whether a particular structure is present in a graph

Interestingly, though, these are not often the kinds of questions that you can “ask” of a website. There’s no real correlation with typical web browsing features, although searching comes close for simple existence check queries.
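
To make the earlier examples concrete, both checks map naturally onto SPARQL ASK queries; a sketch, re-using the illustrative URI from above:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# "does this graph contain any foaf:name predicates?"
ASK { ?something foaf:name ?name }

# "does anything in this graph relate to http://www.example.org/bob?"
ASK {
  { <http://www.example.org/bob> ?p ?o }
  UNION
  { ?s ?p <http://www.example.org/bob> }
}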

Where the Linked Data API fits in

So there are at least four kinds of data access option. I doubt whether that list is exhaustive, but it’s a useful starting point for discussion.

SPARQL can handle all of these options and more. Its graph pattern matching features, and the provision of four query types, let us perform any of these kinds of interaction. For example, a common way of implementing Resource Lookups over a triple store is to use a DESCRIBE or a CONSTRUCT query.
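
Re-using the illustrative URI from earlier, such a lookup might be either of the following; note that the exact shape of a DESCRIBE result is left to the implementation, whereas this CONSTRUCT only returns triples with the resource as subject:

DESCRIBE <http://www.example.org/bob>

CONSTRUCT { <http://www.example.org/bob> ?p ?o }
WHERE { <http://www.example.org/bob> ?p ?o }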

However, the problem as I see it is that when we resort to writing SPARQL graph patterns in order to request, say, a list of people, we’ve stepped around HTTP. We’re no longer specifying and refining our query by interacting with web resources via parameterised URLs; we’re tunnelling the request for what we want in a SPARQL query sent to an endpoint.

From a hypermedia perspective it would be much better if there were a way to handle the “Listing” access option using something better integrated with HTTP. It also happens that this might be easier for the majority of web developers to get to grips with, because they no longer have to learn SPARQL.

This is what I meant by a “RESTful API” in yesterday’s blog post. In my mind, “Listing things” sits in between Resource Lookups and Existence Checks or complex pattern matching in terms of access options.

It’s precisely this role that the Linked Data API is intended to fulfil. It defines a way to dynamically generate lists of resources from an underlying RDF graph, along with ways to manipulate those collections of resources, e.g. by sorting and filtering. It’s possible to use it to define a number of useful list views for an RDF dataset that nicely complement the relationships present in the data. It’s actually defined in terms of executing SPARQL queries over that graph, but this isn’t obvious to the end user.
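
To give a flavour of what that looks like on the wire (the hostname and the type filter below are invented; _page, _pageSize and _sort are examples of the reserved query parameters the specification defines):

http://api.example.org/doc/schools
http://api.example.org/doc/schools?_page=2&_pageSize=50
http://api.example.org/doc/schools?type=Primary&_sort=label

Each of these is an ordinary, bookmarkable, cacheable web resource; behind the scenes the API turns the request into a SPARQL query over the underlying graph.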

These features are complemented by the definition of simple XML and JSON formats, in addition to the RDF serializations that it supports. This is intended to encourage adoption by making it easier to process the data using non-RDF tools.

So, Isn’t HTTP the API?

Which brings me to the answer to Ed’s question: isn’t HTTP the API we need? The answer is yes, but we need more than just HTTP: we also need well-defined media types.

Mike Amundsen has created a nice categorisation of media types and a description of different types of factors they contain: H Factor.

Section 5.2.1.2 of Fielding’s dissertation explains that:


Control data defines the purpose of a message between components, such as the action being requested or the meaning of a response. It is also used to parameterize requests and override the default behavior of some connecting elements.

As it stands today, neither RDF nor the Linked Data API specification ticks all of the H Factor boxes. What we’ve really done so far is define how to parameterise some requests, e.g. to filter or sort based on a property value, but we’ve not yet defined that in a standard media type; the API configuration captures a lot of the requisite information but isn’t quite there.

That’s a long rambly blog post for a Friday night! Hopefully I’ve clarified what I was referring to yesterday. I absolutely don’t want to see anyone define an API for RDF that steps around HTTP. We need something that is much more closely aligned with the web. And hopefully I’ve also answered Ed’s question.

RDF and JSON: A Clash of Model and Syntax

I had been meaning to write this post for some time. After reading Jeni Tennison’s post from earlier this week I had decided that I didn’t need to, but Jeni and Thomas Roessler suggested I publish my thoughts. So here they are. I’ve got more things to say about where efforts should be expended in meeting the challenges that face us over the next period of growth of the semantic web, but I’ll keep those for future posts.

Everyone agrees that a JSON serialization of RDF is a Good Thing. And I think nearly everyone would agree that a standard JSON serialization of RDF would be even better. The problem is that no-one can agree on what constitutes a good JSON serialization of RDF. As the RDF Next Working Group is about to convene to try and define a standard JSON serialization, now is a very good time to think about what it is we really want them to achieve.

RDF in JSON is RDF in XML all over again

There are very few people who like RDF/XML. Personally, while it’s not my favourite RDF syntax, I’m glad it’s there for when I want to convert XML formats into RDF. I’ve built an entire RDF workflow that began with the ingestion of RDF/XML documents; we even validated them against a schema!

There are several reasons why people dislike RDF/XML.

Firstly, there is a mis-match in the data models: serialization involves turning a graph into a tree. There are many different ways to achieve that so, without applying some external constraints, the output can be highly variable. The problem is that those constraints can be highly specific, so are difficult to generalize. This results in a high degree of syntax variability of RDF/XML in the wild, and that undermines the ability to use RDF/XML with standard XML tools like XPath, XSLT, etc. Such tools (unsurprisingly) operate only on the surface XML syntax, not the “real” data model.

Secondly, people dislike RDF/XML because of the mis-match in (loosely speaking) the native data types. XML is largely about elements and attributes whereas RDF has resources, properties, literals, blank nodes, lists, sequences, etc. And of course there are those ever present URIs. This leads to additional syntax short-cuts and hijacking of features like XML Namespaces to simplify the output, whilst simultaneously causing even more variability in the possible serializations.

Thirdly, when it comes to parsing, RDF/XML just isn’t a very efficient serialization. It’s typically more verbose and can involve much more of a memory overhead when parsing than some of the other syntaxes.

Because of these issues, we end up with a syntax which, while flexible, requires some profiling to be really useful within an XML toolchain. Or you just ignore the fact that it’s XML at all and throw it straight into a triple store, which is what I suspect most people do. If you do that then an XML serialization of RDF is just a convenient way to generate RDF data from an XML toolchain.

Unfortunately when we look at serializing RDF as JSON we discover that we have nearly all of the same issues. JSON is a tree, so we have the same variety of potential options for serializing any given graph. The data types are also still different: key-value pairs, hashes, lists, strings, dates (of a form!), etc. versus resources, properties, literals, etc. While there is potential to use more native datatypes, the practical issues of repeatable properties, blank nodes, etc. mean that a 1:1 mapping isn’t feasible. The lack of support for anything like XML Namespaces means that hiding URIs is also impossible without additional syntax conventions.

So, ultimately, both XML and JSON are poor fits for handling RDF. I think most people would agree that a specific format like Turtle is much easier to work with. It’s also better as a starting point for learning RDF because most of the syntax is re-used in SPARQL. That’s why standardising Turtle, ideally extended to support Named Graphs, needs to be the first item on the RDF Next Working Group’s agenda.
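
To illustrate that overlap, the same basic triple reads almost identically as Turtle data and as the pattern in a SPARQL query (prefix declarations omitted for brevity; URIs invented):

# Turtle: a statement in the data
ex:bob foaf:name "Bob".

# SPARQL: the pattern that matches it
SELECT ?name WHERE { ex:bob foaf:name ?name }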

What do we actually want?

What purpose are we trying to achieve with a JSON serialization of RDF? I’d argue that there are several goals:

  1. Support for scripting languages: Provide better support for processing RDF in scripting languages
  2. Creating convergence: Build some convergence around the dizzying array of existing RDF in JSON proposals, to create consistency in how data is published
  3. Gaining traction: Make RDF more acceptable for web developers, with the hope of increasing engagement with RDF and Linked Data

I don’t think that anyone considers a JSON serialization of RDF as a better replacement for RDF/XML. I think everyone is looking to Turtle to provide that.

I also don’t think that anyone sees JSON as a particularly efficient serialization of RDF, especially for bulk loading. It might be, but I think N-Triples (a subset of Turtle) fulfils that niche already: it’s easy to stream and to process in parallel.
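
For reference, N-Triples is just one fully written-out triple per line, which is exactly what makes it easy to split, stream and process in parallel (URIs invented):

<http://example.org/bob> <http://xmlns.com/foaf/0.1/name> "Bob" .
<http://example.org/bob> <http://xmlns.com/foaf/0.1/knows> <http://example.org/alice> .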

Let’s look at each of those goals in turn.

Support for scripting languages

Unarguably, it’s much, much easier to process JSON in scripting languages like Javascript, Ruby and PHP than it is to process RDF/XML.

Parser support for JSON is ubiquitous, as it’s the syntax du jour, just as XML was when the RDF specifications were being written. JSON parsing is also typically much more efficient, especially when we look at Javascript in the browser.

From that perspective RDF in JSON is an instant win, as it will simplify consumption of Linked Data and the results of SPARQL CONSTRUCT and DESCRIBE queries. There are other issues with getting widespread support for RDF across different programming languages, e.g. proper validation of URIs, but fast parsing of the basic data structure would be a step in the right direction.

Creating Convergence

I think I’ve seen a dozen or more different RDF in JSON proposals. There’s a list on the ESW wiki and some comparison notes on the Talis Platform wiki, but I don’t think either is complete. If I get a chance I’ll update them. The sheer variety confirms my earlier points about the mis-matches between models: everyone has their own conception of what constitutes a useful JSON serialization.

Because there are fewer syntax options in JSON, the proposals run the full spectrum from capturing the full RDF model but making poor use of JSON syntax, through to making good use of JSON syntax but at the cost of either ignoring aspects of the RDF model or layering additional syntax conventions on top of JSON itself. As an aside, I find it interesting that so many people are happy with subsetting RDF to achieve this one goal.

The thing we should recognise is that none of the existing RDF in JSON formats are really useful without an accompanying API. I’ve used a number of different formats and no matter what serialization I’ve used I’ve ended up with helper code that simplifies some or all of the following:

  • Lookup of all properties of a single resource
  • Mapping between URIs and short names (e.g. CURIES or locally defined keys) for properties
  • Mapping between conventions for encoding particular datatypes (or language annotations) and native objects in the scripting language
  • Cross-referencing between subjects and objects; and vice-versa
  • Looking up all values of a property or a single value (often the first)

In addition, if I’m consuming the results of multiple requests then I may also end up with a custom data structure and code for merging together different descriptions, even if it’s just an array of parsed JSON documents and code to perform the above lookups across that collection.

So, while we can debate the relative aesthetics of different approaches, I think it’s focusing attention on the wrong areas. What we should really be looking at is an API for manipulating RDF: one that will work in Javascript, Ruby or PHP. While I acknowledge the lingering horror of the DOM, I think the design space here is much simpler. Maybe I’m just an optimist!

If we take this approach then what we need is a JSON serialization of RDF that covers as much of the RDF model as possible and, ideally, is already as well supported as possible. From what I’ve seen, the RDF/JSON serialization is closest to that ideal. It’s supported in a number of different parsing and serialising libraries already and only needs to be extended to support blank nodes and Named Graphs, which is trivial to do. While it’s not the prettiest serialization, given a vote I’d look at standardising that and moving on to focus on the more important area: the API.
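
For those who haven’t seen it, RDF/JSON is essentially a direct transliteration of the triples: a JSON object keyed by subject URI, then by predicate URI, with arrays of typed value objects. Roughly like this (URIs invented for illustration):

{
  "http://example.org/bob": {
    "http://xmlns.com/foaf/0.1/name": [
      { "type": "literal", "value": "Bob" }
    ],
    "http://xmlns.com/foaf/0.1/knows": [
      { "type": "uri", "value": "http://example.org/alice" }
    ]
  }
}

It’s verbose, but there’s little ambiguity about how a given graph serializes, which is precisely the property the prettier proposals tend to give up.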

Gaining Traction

Which brings me to the last use case. Can we create a JSON serialization of RDF that will help Linked Data and RDF get some traction in the wider web development community?

The answer is no.

If you believe that the issues with gaining adoption are purely related to syntax then you’re not listening to the web developer community closely enough. While a friendlier syntax will undoubtedly help, an API would be even better. The majority of web developers these days are very happy indeed to work with tools like JQuery to handle client-side scripting. A standard JQuery extension for RDF would help adoption much more than spending months debating the best way to profile the RDF model into a clean JSON serialization.

But the real issue is that we’re asking web developers to learn not just new syntax but also an entirely new way to access data: we’re asking them to use SPARQL rather than simple RESTful APIs.

While I think SPARQL is an important and powerful tool in the RDF toolchain I don’t think it should be seen as the standard way of querying RDF over the web. There’s a big data access gulf between de-referencing URIs and performing SPARQL queries. We need something to fill that space, and I think the Linked Data API fills that gap very nicely. We should be promoting a range of access options.

I have similar doubts about SPARQL Update as the standard way of updating triple stores over the web, but that’s the topic of another post.

Summing Up

As the RDF Next Working Group gets underway I think it needs to carefully prioritise its activities to ensure that we get the most out of this next phase of development and effort around the Semantic Web specifications. It’s particularly crucial right now as we’re beginning to see the ideas being adopted and embraced more widely. As I’ve tried to highlight here, I think there’s a lot of value to be had in having a standard JSON serialization of RDF. But I don’t think that there’s much merit in attempting to create a clean, simple JSON serialization that will meet everyone’s needs.

Standardising Turtle and an API for manipulating RDF data have more value in my view. RDF/JSON, as a specification that is already well implemented, meets the core needs of the semantic web developer; a simple scripting API meets the needs of everyone else.

Gridworks Reconciliation API Implementation

Gridworks is a really fantastic tool and there’s scope to extend it in all kinds of interesting ways. Jeni Tennison has recently published a great blog post describing how to use Gridworks for generating Linked Data. I strongly encourage you to read her posting as it not only provides a good introduction to Gridworks itself, but also shows a nice real world example of generating RDF using its built-in data cleaning and templating tools.

I was lucky enough to meet David Huynh at a workshop recently and chatted to him briefly about another aspect of Gridworks: its ability to match field values in a dataset to entities in Freebase, e.g. identifying a place based on just its name. Within Gridworks this process is known as “reconciliation”.

Reconciliation is an important step for generating good Linked Data, as you’ll often need to correlate values in a dataset with URIs in existing datasets in order to generate links, e.g. matching company names to their URIs. While it is possible to generate identifiers algorithmically during a conversion, this typically just defers the reconciliation work until a later stage, when you carry out cross-linking to introduce equivalence links.

Recognising that the ability to introduce new reconciliation services would be a powerful extension to Gridworks, David Huynh has been creating a draft specification that will allow third-parties to create and deploy their own reconciliation services. He’s been documenting his progress on implementing the client side of this protocol and has published a testing service.

It occurred to me that the reconciliation API is essentially a structured search over a dataset and thus could be implemented against the search interface exposed by Talis Platform stores. The RSS 1.0 feeds that the Platform returns include enough information to rank and filter results as required by the API.
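
To give a flavour of the exchange: as I understand the draft specification, a reconciliation request is essentially an HTTP GET carrying a query (and an optional type filter), and the response is a ranked list of candidate matches. Roughly along these lines, with the identifier and score below invented for illustration and the type URI borrowed from the example later in this post:

GET /gridworks/govuk-statistics/reconcile?query=Bath

{
  "result": [
    {
      "id": "http://statistics.data.gov.uk/id/example-area",
      "name": "Bath and North East Somerset",
      "type": [ "http://statistics.data.gov.uk/def/geography/LocalEducationAuthority" ],
      "score": 1.0,
      "match": true
    }
  ]
}

Mapping that onto the Platform’s search interface is then largely a matter of translating the query into a search request and the feed entries back into this structure.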

I’ve created a simple Ruby application, using the Sinatra web framework, that implements the reconciliation API for any Talis Platform store. You can find the code on github if you want to have a play with it. As I note in the README there are some areas where customisation is useful to get the most from the service. So while in principle it can be used against any existing Platform store you can create a simple JSON config to tweak it for particular datasets.

There’s a live version of the code running on my server here: http://ldodds.com/gridworks/.

That page has a simple API console for carrying out queries, but consult the draft specification for more details. I think I’ve covered all of the basic features (but bug reports are welcome!). See the README for notes on configuration options and implementation decisions.

As a simple illustration, let’s say that I have the value “Bath” in a dataset and want to match that to some area in the UK administrative geography. This information is available from the Linked Data exposed by statistics.data.gov.uk, which happens to be hosted in this Platform store. The reconciliation API we need can therefore be found at: http://ldodds.com/gridworks/govuk-statistics/reconcile. An HTTP GET on that location retrieves the service metadata.

If we use the API explorer we can use a simple HTML form to try out examples. Select govuk-statistics from the Store drop-down and then type Bath into the search box. You’ll get this result. This is not very readable by default, so if you’re using Firefox I recommend you install the JSONView extension which provides a nicely formatted display.

Our initial search returns a number of results, the highest ranked being the Westminster Constituency for Bath. That seems like a pretty good initial result to me. As it is the most relevant result in the search it’s marked as an exact match, so once integrated with Gridworks it will capture and store the reconciled identifier for you.

However, we may know that, in the imaginary dataset we’re working with, a particular field doesn’t contain names of constituencies. It may instead refer to a Local Education Authority. We can refine our search by adding the URI that defines that type of resource into the type field in the API explorer.

Try pasting http://statistics.data.gov.uk/def/geography/LocalEducationAuthority into the type field and running the search again. You’ll find that this time you get a single result, which is Bath and North East Somerset. Job done.

Of course, to get the most from this you need to know what URIs you can use for filtering by types (and properties). But this is something that the Gridworks UI will help with. It can integrate with “suggestion services” that help map values to properties and types within a schema. I’ll be looking at how to expose those as my next piece of work.

Hopefully you can see how the overall system works. Feel free to have a play with the API to try it out for yourself. If you have comments on the implementation then I’d love to hear them, but I’d suggest that comments on the specification are best addressed to the gridworks mailing list.

I also suspect the Reconciliation API has uses outside of just Gridworks. For example, I wonder how easy it would be to introduce reconciliation into Google Spreadsheets using Google Apps Script? It’s also another nice demonstration of how easy it is to map simple RESTful APIs onto RDF datasets: this implementation works for any data in the Platform, no matter what schema it conforms to. Neat.

RDF Dataset Notifications

Like many people in the RDF community I’ve been thinking about the issue of syndicating updates to RDF datasets. If we want to support truly distributed aggregation and processing of data then we need an efficient way to share updates.

There’s been a lot of experimentation around different mechanisms, and PubSubHubbub seems to be a current favourite approach. I’ve been playing with it myself recently and have hacked up a basic push mechanism around Talis Platform stores. More on that another time.

But I’ve not yet seen any general discussion about the merits of different approaches, or even discussion about what it is that we really want to syndicate.

So let’s take it from the top.

It seems to me that there are basically three broad categories of information we want to syndicate:

  • Dataset Notifications — has a new dataset been added to a directory? has one been updated in some way, e.g. through the addition or removal of triples?
  • Resource Notifications — what resources have been added or modified within a dataset?
  • Triple Notifications — what triples have been changed within a dataset?

Each of these categories syndicates a different level of detail and may benefit from a different technical approach. For example, there’s a different volume of information being exchanged if one is notifying dataset-level changes versus every changed triple. We’ll also likely need a different format or syntax.
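
Purely to illustrate the difference in granularity (the ex: terms below are invented, not a proposal for an actual vocabulary), a dataset-level notification only needs to say that something changed, whereas a triple-level notification has to carry the affected statements themselves, for example via reification:

# dataset-level: "this dataset was updated"
ex:notification1 a ex:DatasetUpdate;
  ex:dataset <http://example.org/dataset>;
  ex:updated "2010-06-01T10:00:00Z".

# triple-level: the individual statement that was added
ex:notification2 a ex:TripleAddition;
  ex:dataset <http://example.org/dataset>;
  ex:statement [
    rdf:subject ex:bob;
    rdf:predicate foaf:name;
    rdf:object "Bob"
  ].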

Actually there may be a fourth category: notifications of graph structural changes to a dataset, e.g. adding or removing named graphs. I’ve not yet seen anyone exploring that level of syndication, but suspect it may be very useful.

Now, for each of those different categories, there are two different styles of notifications: push or pull. Pull mechanisms are typified by feed subscriptions, crawlers, or repeated queries of datasets. Push mechanisms are usually based on some form of publish-subscribe system.

Given those different scenarios, we can take a look at some existing technologies and categorise them. I’ve done just that and published a simple Google spreadsheet with my first stab at this analysis. (This probably needs a little more context in places but hopefully the classifications are fairly obvious).

PubSubHubbub seems to offer the most flexibility in that it mixes a standard Pull-based Feed architecture with a Push-based subscription system. Clearly worthy of the attention it’s getting. Other technologies offer similar features but are optimised for different purposes.

However, that doesn’t mean that PubSubHubbub is perfect out of the box. For example, it’s worth noting that consumers aren’t required to use the Push aspects of the system; they can just subscribe to the feeds. So you need to be prepared to scale a PubSubHubbub system just as you would a Pull-based Feed.

It may also be sub-optimal for systems which are syndicating out high-volume Triple level updates. The Feeds can potentially get very large and the hub system needs to be prepared to handle large exchanges. It also doesn’t say anything about how to catch up on, or recover from, missed updates. A hybrid approach may be required to cover all use cases and scenarios and to produce a robust system.

In order to be able to properly compare different approaches we need to understand their respective trade-offs. I’m hoping this posting contributes to that discussion and can complement the ongoing community experimentation.

Am interested to hear your thoughts.

Linked Data Patterns: a free book for practitioners

A few months ago Ian Davis and I were chatting about some new approaches to helping practitioners climb the learning curve around Linked Data, RDF and related technologies. We were both keen to help communicate the value of Linked Data, share knowledge amongst practitioners, and to encourage the community to converge on best practices. We kicked around a number of different ideas in this vein.

For example, Ian was keen to provide guidance as to how to mix and match different vocabularies to achieve a particular goal, like describing a person or a book. Having a ready reference containing recipes for these common tasks would address a number of goals. He’s ended up exploring that idea further in the recently released Schemapedia. If you’ve not seen it yet, then you should take a look. It provides a really nice way to navigate through RDF vocabularies and explore their intersections.

The other thing that we discussed was Design Patterns. I’ve been a Design Pattern nut for some time now. Discovering them was something of a rite of passage for me during my Master’s dissertation. I’d spent weeks revising and honing a design for the distributed system I was building, only to discover that what I’d produced was already documented as a design pattern in an obscure corner of the research literature. While I’d clearly reinvented the wheel, the discovery not only provided external validation for what I’d produced, but also neatly illustrated the benefit of using design patterns to share knowledge and experience within a community. Knowing when to apply particular patterns is a key skill for any developer, and the terms are a part of the design vocabulary we all share.

I suggested to Ian that we explore writing some patterns for Linked Data. Patterns for assigning identifiers, modelling data, as well as application development. We experimented with this for a while but ended up parking the discussion for a few months whilst other priorities intervened.

I recently revived the project. It’s pretty clear to me that there’s still a big skills gap between experienced practitioners and those seeking to apply the technology. I think the current situation is reminiscent of the move of OO programming from the research lab out into the developer community; design patterns played a key role there too.

Ian and I have decided to share this with the community as an on-line book, a pattern catalogue that covers a range of different use cases. We started out with about half a dozen patterns, but over the last few weeks I’ve expanded that figure to thirty. I’ve still got a number on my short-list (more than a dozen, I think) but it’s time to start sharing this with the community. The work won’t ever be complete, as the space is still unfolding; it will just get refined over time.

You can read the book online at http://patterns.dataincubator.org.

The work is licensed under a Creative Commons Attribution license so you’re free to use it as you see fit, but please attribute the source. If you want to download it, then there’s a PDF, and an EPUB too. We’re using DocBook for the text so there will be a number of different access options.

I’ll stress that this is a very early draft, so be gentle. But we’d love to hear your comments.

A Tour of the OS 50k Gazetteer Linked Data

The Ordnance Survey have today published the first in a series of open datasets. In addition to the administrative geography that was published last year, the Linked Data available from data.ordnancesurvey.co.uk now includes data from their 1:50 000 Scale Gazetteer. In this blog post I thought I’d give an overview of the dataset to summarise what it contains.

Analysis

The Gazetteer identifiers all have a base URL of:

http://data.ordnancesurvey.co.uk/id/50kGazetteer/.

The base URL is suffixed with a unique numeric code. I’m not sure where this originates from, and it’s not present in the underlying data.

The dataset consists of 2,368,655 triples (individual facts) asserted over 259,080 unique resources, so about 9 triples per resource. Here’s how the properties break down:

http://www.w3.org/1999/02/22-rdf-syntax-ns#type 259080
http://xmlns.com/foaf/0.1/name 259080
http://www.w3.org/2000/01/rdf-schema#label 259080
http://data.ordnancesurvey.co.uk/ontology/spatialrelations/northing 259080
http://data.ordnancesurvey.co.uk/ontology/spatialrelations/easting 259080
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/featureType 259080
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/oneKMGridReference 259080
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/twentyKMGridReference 259080
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/mapReference 296015

The first few properties are labels and a type for each resource. The additional predicates are from the OS Spatial Relations ontology, providing the Eastings and Northings for each feature. The remaining four predicates provide a “feature type” and OS map & grid references. There are slightly more map references, so some resources have more than one such property, presumably because they’re large enough to span more than one map. You can see that there are no links to other datasets as yet, nor any lat/long co-ordinates.

Let’s look more closely at some of the predicates. For the RDF types, I discovered that every resource has the same type: they’re all instances of a “Named Place”:

http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/NamedPlace.

Presumably, then, the detailed classification for the different types of landscape feature is present in the “feature type” predicate. A SPARQL query to count and group the values for that predicate gives the breakdown below.
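
Such a query looks roughly like this (a sketch; it assumes an endpoint with SPARQL 1.1 aggregate support):

PREFIX gaz: <http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/>

SELECT ?featureType (COUNT(?feature) AS ?total)
WHERE {
  ?feature gaz:featureType ?featureType.
}
GROUP BY ?featureType
ORDER BY DESC(?total)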

http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/Other 128662
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/OtherSettlement 41228
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/Farm 34723
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/WaterFeature 24425
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/HillOrMountain 14524
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/ForestOrWood 8708
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/Antiquity 5252
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/Town 1259
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/RomanAntiquity 237
http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/City 62

We can see that 128,662 resources (49% of the total) are simply “Other”, with another 41,228 being “Other Settlement”; not that inspiring! The rest of the feature types are more interesting, and give us some very basic data on various geographic features. The Roman Antiquity features piqued my interest; Hadrian’s Wall has the following identifier (click to see the data):

http://data.ordnancesurvey.co.uk/id/50kGazetteer/106584

The values for the Easting and Northing properties should be obvious, so I’ll skip over those. The remaining properties are all map references, and the values of these are all resources. So the Gazetteer has begun assigning URIs to all of the 1KM and 20KM grid references, as well as each of the OS Landranger maps. Here are some sample URLs for each, taken from the description of Hadrian’s Wall:

http://data.ordnancesurvey.co.uk/id/1kmgridsquare/NY3359
http://data.ordnancesurvey.co.uk/id/20kmgridsquare/NY24
http://data.ordnancesurvey.co.uk/id/OSLandrangerMap/85

The URIs seem predictable and can probably be derived from data found elsewhere. Unfortunately, no further data has been included about these resources. I believe they are place-holders for data that has yet to be released.

Overall the data in the Gazetteer is pretty sparse, but presumably it will become much richer once more OS data is released. Latitude and longitude co-ordinates are something I’d particularly like to see added. There’s an opportunity here for someone to link up these resources with pages in Wikipedia & resources in DBpedia.

Sample Queries

If you want to play with the data, here are a couple of SPARQL queries to get you started. The first retrieves 10 features classified as Roman Antiquities:


PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX spatial: <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/>
PREFIX gaz: <http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/>

SELECT ?uri ?label ?easting ?northing ?one ?twenty ?map 
WHERE {
  ?uri 
    #filter on type
    gaz:featureType gaz:RomanAntiquity;

    #bind everything we want to return
    rdfs:label ?label;
    spatial:easting ?easting;
    spatial:northing ?northing;
    gaz:oneKMGridReference ?one;
    gaz:twentyKMGridReference ?twenty;
    gaz:mapReference ?map.
}
LIMIT 10

Results in JSON

The following query lists all of the features on a specific OS Landranger map. So even though we don’t (yet) have any details about the map, we can use its identifier as a means to filter the results:


PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX spatial: <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/>
PREFIX gaz: <http://data.ordnancesurvey.co.uk/ontology/50kGazetteer/>

SELECT ?uri ?label ?easting ?northing ?featureType 
WHERE {
  ?uri 
    #filter on map reference
    gaz:mapReference <http://data.ordnancesurvey.co.uk/id/OSLandrangerMap/85>;

    #bind everything we want to return
    rdfs:label ?label;
    spatial:easting ?easting;
    spatial:northing ?northing;
    gaz:featureType ?featureType.
}

Results in JSON

Enhanced Descriptions: “Premium Linked Data”

I’ve had several conversations recently with people who are either interested in, or actually implementing, Linked Data, and are struggling with some important questions:

  • How much data should I give away?
  • If I wanted to charge for more than just the basic data, then how would I handle that?

My usual response to the first of those questions is: “as much as you feel comfortable with”. There’s still so much data that’s not yet visible or accessible in machine-readable formats that any progress is good progress. Let’s get more data out there now. More is better.

It usually doesn’t take long to get to the second question. If you’ve spent time evangelising to people about the power and value of data, and particularly their data, then it’s natural for them to begin thinking about how it can be monetized.

Scott Brinker has done a good job of summarising a range of options for Linked Data business models. I’ve chipped into that discussion already. Instead what I wanted to briefly discuss here is some of the mechanics of implementing access to what we might call “premium Linked Data”, or, as I’ll refer to it here, “Enhanced Descriptions”.

Premium Linked Data

It’s possible to publish Linked Data that is entirely access controlled. Access might be limited to users behind the firewall (“Enterprise Linked Data”) or only to authorised paying customers. As a paid up customer you’d be given an entry point into that Linked Data and would supply appropriate credentials in order to access it.

This data isn’t going to be something you’d discover on the open web. There are many different authentication models that could be used to mediate access to this “Dark Data”. The precise mechanisms aren’t that important and the right one is likely to vary for different industries and use cases, although I think there’s a strong argument for using something that dovetails nicely with HTTP and web infrastructure in general.

What interests me more is the scenario in which a data publisher might be exposing some public data under a liberal open license, but also wants to make available some “premium” metadata, i.e. some value-added data that is only available to paid-up customers. In this scenario it would be useful to be able to link together the open and closed data, allowing a user agent to detect that there is extra value hidden behind some kind of authentication barrier. I think this is likely to become a very common pattern, as it aids discovery of the value-added material. Essentially it’s the existing pattern we already have for controlling access to content on the web of documents.

It’s the mechanics of implementing this public/private scenario that have cropped up in my recent conversations.

Enhanced Descriptions

When I dereference the URI of a resource I will typically get redirected to a document that describes that resource. This document might contain data like this (in Turtle):


ex:document 
  foaf:primaryTopic ex:thing.

ex:thing 
  rdfs:label "Some Thing".

i.e. the document contains some data about the resource, and there’s a primary topic relationship between the document and the resource.

If we want to point to additional RDF documents that also describe this resource, or related data, then we can use an rdfs:seeAlso link:


ex:document 
  foaf:primaryTopic ex:thing.

ex:thing rdfs:label "Some Thing";
  rdfs:seeAlso ex:otherDocument.

We can use the rdfs:seeAlso relationship to point to additional documents either within a specific dataset or in other locations on the web. Those documents provide useful annotations about a resource.

An “Enhanced Description” will contain additional value-added data about a resource. We could just refer to this document using an rdfs:seeAlso link. But if we do that then a user agent can’t easily distinguish between an arbitrary rdfs:seeAlso link and one that refers to some additional data. We could instead use an additional relationship, a specialisation of rdfs:seeAlso, that can be used to disambiguate between the relationships. I’ve defined just such a predicate: ov:enhancedDescription.


ex:document 
  foaf:primaryTopic ex:thing.

ex:thing rdfs:label "Some Thing";
  rdfs:seeAlso ex:otherDocument;
  ov:enhancedDescription ex:premiumDocument.
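
In vocabulary terms the specialisation is trivial to state; something along these lines (prefixes omitted, as in the snippets above) is all that’s needed:

ov:enhancedDescription
  rdfs:subPropertyOf rdfs:seeAlso.

An agent that applies RDFS inference can still treat the link as an ordinary rdfs:seeAlso, while one that knows about the more specific predicate can treat it differently, e.g. by being prepared for a credentials challenge.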

By using a separate document to hold the value-added annotations we have the opportunity for user agents to identify those documents (via the predicate) and to also be challenged for credentials when they retrieve the URI (e.g. with an HTTP 401 response code).

It also means data publishers can safely dip a toe in the open data waters, but leave richer descriptions protected but still discoverable behind an access control layer.

Another Approach?

Interestingly I discovered earlier today that OpenCalais returns a “402 Payment Required” status code for some documents.

To see this in practice visit their description of IBM and try accessing the last of the owl:sameAs links. I’m guessing they’re using a similar technique to the one I’ve outlined here. But the key difference is that rather than use separate documents, they’ve decided to create new URIs for the access controlled version of the Linked Data. It would be nice if someone out there could confirm that.

Assuming I’ve interpreted what they’re doing correctly, I think this approach has some failings. Firstly, it creates extra URIs that aren’t really needed. I’m not sure that we really need more URIs for things; a pattern in which publishers have two URIs (public & private) for each resource isn’t going to help matters.

Secondly, just like using a generic “see also” relation, using owl:sameAs means it’s impossible to distinguish the resource that provides access to premium data from the others that exist on the web, without doing some fragile URI matching.

Apologies to the OpenCalais team if I’ve misunderstood the mechanism they’re using. I’ll happily publish a correction; but regardless, I’m intrigued by the 402 status code! 🙂

Summary

In my view, the “Enhanced Description” approach is a simple-to-implement pattern. It’s one that I’ve been recommending to people recently but I’ve not seen documented anywhere, so thought I’d write it up.

I’d be interested to hear from others that have either implemented the same mechanism, or like OpenCalais are using other schemes.