Monthly Archives: November 2009

Web Integrated Data

Last Friday I spoke at the Open Knowledge Foundation Open Data & The Semantic Web event. I was giving the opening talk of the day and thought that I’d take the opportunity to lay out a view that I’ve been meaning to articulate for some time: that integrating data with the web maximises its utility. Moving from data dumps, through APIs, to Linked Data, we maximise utility by reducing the amount of effort required to interact with data.

While there’s clearly still a lot of work to do around creating ways to visualise and explore Linked Data, the simple utility of being able to browse a dataset means that we move beyond publishing for a developer audience to publishing for anyone who can wield a browser. This is the angle on the Semantic Web vision that is most often overlooked, in my opinion.

Developers often ask, “I can do the same thing using technology X, so why use technology Y?”. In this early adopter phase of the Semantic Web it’s perfectly valid and important to critique the technology; to measure its ease of use and benefits for developers. But for me the end game is to move to a world where anyone can easily do complex manipulations on data — without resorting to writing code — because there’s enough machine support to make it achievable. That’s what standard vocabularies and a common data model enable. And it’s a natural part of the evolution towards increasingly declarative ways of manipulating information.

I’ll do a proper write-up of the presentation some other time, but for now here are the slides:

Linked Data Liminal Zones

One of the things that has interested me for some time now is how RDF and Linked Data enable communities to enrich information published by organizations, e.g. by annotating it with additional properties and relationships (links). This is, after all, one of the intended goals of the technology: to make it easier for people to converge on common names for things and collectively share data about those things.

The ability to publish URIs for things, and then have those URIs decorated by a motivated community with additional metadata, provides organizations with an interesting way to take advantage of Linked Data. The enriched data can be reused by the organization to improve its own datasets and used to drive improved processes, new product development, etc.

The interesting angle is that while both the organization and the community directly benefit from the sharing (both gain access to data they wouldn’t normally have, or at least not without extra expense), there are some asymmetries in the relationship. Specifically, an organization worried about its brand is likely to have higher, or at least different, standards for reliability and quality than its community; especially so if we consider only the non-commercial users in that community. Before the organization can ingest and republish this data (e.g. on its website), those standards and a certain amount of filtering may need to be applied.

I like to think of these contributions as being in a “liminal zone” between the authoritative content that is completely owned and managed by the publisher and the stuff that exists out there on the Wild Wild Web which is only tangentially related (at best). There’s a zone of transition between the two spaces, where the data and the URIs start out being owned by the publisher then embraced, adopted (and even co-opted) by a community. A user may want to freely navigate between these different areas and apply their own rules about quality, reliability or general bozo filtering. And they can end up in a very different space to where they started. An organization may want to act quite differently; in terms of what and how much they fetch, and how they use what data they collect.

The following diagram attempts to sketch out this liminal zone from the perspective of the BBC.

It’s the user annotations that relate to the BBC URIs that form the liminal zone between the authoritative publisher-sourced data and the rest of the content on the web. You could put almost any organization into that central space and the same relationship would hold. It’s the strong identifiers associated with Linked Data that connect up the internal and external views of the data.

I recently commissioned a project at Talis called Fanhu.bz which aims to help surface content and contributions that exist in this liminal zone. I see it as a first step towards exploring some of these subtle data sharing issues. Mapping out the fringes of Linked Data sets, as exemplified by BBC Programmes and Music, and then exploring how that data can be remixed and reused not only by the community but also by the publishers themselves, is an attempt to explore models for consuming Linked Data that go beyond simple re-publishing and visualisation. The technology has a lot more to offer. And when we talk about “Linked Data for the Enterprise” I think we need to be thinking beyond just internal data integration.

DataIncubator: What Is It and What’s In It?

This article first appeared in Talis Nodalities magazine issue 8.

The Linking Open Data project has had a huge amount of success in bootstrapping the burgeoning Linked Data cloud. There’s now a definite sense of momentum behind the project, and a growing number of organisations are seriously investigating how their data could further enrich the growing Semantic Web, and how the underlying technologies may help them to innovate and explore new opportunities.

The Linked Data community has rightly begun to look at the next round of challenges: What can we do with all this data? How can it be pressed into service to create new applications? What kinds of frameworks do we need to support consumption of Linked Data? But we shouldn’t lose sight of the fact that there’s still a huge amount of evangelism to be done and a great deal of data that could and should be part of the web of data. The Linked Data landscape is still not fully mapped out. In short, we need to keep up the process of accumulating, converting, publishing and linking data in as many different subject areas and disciplines as possible.

To date, the bootstrapping process has been supported by a number of community-led projects that convert and re-publish datasets to bring them into the web of data. The recently founded DataIncubator project (http://dataincubator.org) aims to adopt this same “show don’t tell” approach, but with the addition of some best practices and with an eye on long term sustainability.

Sustainability, Repeatability, Reusability

A key goal of the project is to lightly formalise the way these dataset conversions are carried out to make sure they are sustainable, repeatable, and reusable. But why are these particular aspects important?

Firstly, let’s consider sustainability. As usage of the Linked Data cloud grows, we need to make sure that new data being added isn’t going to disappear later—e.g. because a small project website goes offline, or because the original project owner loses interest. As serious applications begin to be built against this data, it is critical that consumers can rely on it. One of the primary ways the project is ensuring sustainability is through making use of the Talis Connected Commons scheme (http://www.talis.com/cc). All of the public domain datasets that are converted and published through the DataIncubator project site are being hosted in the Talis Platform, taking full advantage of the free data hosting offered under the Connected Commons initiative. Talis is therefore contributing to the sustainability of that data.

The second aspect to consider is repeatability. The first goal is to make sure that the data conversion process is itself repeatable—that is, we can easily re-generate the data to allow for modelling changes, bug fixes, and the ingesting of new data. And not just now, while a project is active, but in three years’ time when the project may be picked up and extended by a number of other contributors. Ensuring that each of the incubated datasets is supported by open source code makes this more achievable. Ideally, the original dataset owners will be convinced by the benefits long before a project goes stale, but it’s important to recognise that evangelism can take time and that different industries move at different speeds. There are already a few Linked Data and RDF projects on the web that model and re-publish the same basic dataset in different ways. By trying to build a community around curating the conversion of a dataset, and not just the data itself, DataIncubator hopes to avoid these issues.

The final aspect is one that is often overlooked: how can the original dataset owner build on what the community has created? How can the community’s efforts be reused? Reusability is enabled by ensuring that the conversion code is open source and that schemas and modelling design decisions are well documented. This can lower the barrier to entry facing data providers or publishers looking to embrace Semantic Web technology. This is particularly the case where the conversion acts on source data that is already open, but not yet linked. Here the data owner may merely need to re-run the data conversion and publish the Linked Data through their own site rather than DataIncubator. This makes adoption much, much easier.

Community Norms

Alongside addressing these procedural aspects of the data conversion process, the DataIncubator project also encourages a number of useful community norms that will hopefully improve the quality of the converted datasets.

The first of these is to ensure that there is a sufficient amount of both linking and attribution. Every dataset within the umbrella project should reference its original sources. This should not take place just at a high level, such as within the corresponding voiD description (http://rdfs.org/ns/void/). Instead, references should go deeper, so that individual resources can be associated with, for example, the original web pages that describe them. This ensures that there is a clear path back to the original source of the data. Attribution—in various forms—is an important community norm in its own right, but it is especially important in the context of converting and re-publishing an existing dataset. We want to ensure that the original curators of the data don’t think that the community is trying to appropriate or steal their work. Quite the opposite: we want them to embrace it.
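
As a rough sketch of what this might look like in practice (the dataset and resource URIs here are entirely hypothetical), a converted dataset could carry both a dataset-level voiD description and resource-level source links:

@prefix void: <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Dataset-level description and attribution
<http://example.dataincubator.org/dataset> a void:Dataset ;
  dcterms:title "Example incubated dataset" ;
  dcterms:source <http://www.example.org/> ;
  foaf:homepage <http://example.dataincubator.org/> .

# Resource-level attribution: each converted resource points back
# to the original page that describes it
<http://example.dataincubator.org/thing/123>
  dcterms:source <http://www.example.org/things/123.html> .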

The other norm relates once again to sustainability. Links to the data should be stable, but how do we achieve this if the data will ultimately be removed from the DataIncubator site and moved to another domain? The proposal here is that as data is migrated to its permanent home, redirects will be put in place to ensure that web browsers and semantic web agents can follow the links to their primary source. Every effort will be made to ensure that links don’t break.

What’s In It?

The DataIncubator project already has a wide range of datasets available:

There’s a lot more that could yet be added to this list. My personal wishlist includes a conversion of the Prelinger Archives (http://www.archive.org/details/prelinger). This is hosted as part of the Internet Archive project and consists of over 2000 industrial, educational, travel, and propaganda videos published from 1903 to the 1970s. The content is completely within the public domain, so it’s just begging to be converted. It would also be a great dataset on which to explore the modelling of media and media annotations in general.

Currently, one domain with very little Linked Data is gaming, in all of its forms. For example, there is a vast amount of community-curated data about Lego, Lego sets, and Lego models. And what about all of the facts and figures that are routinely collected around online gaming? Data might be available through specific community websites, but what could be built if the data were more open, allowing the community to analyse and re-present it in new ways?

It strikes me that games and gaming is an area that is ripe for exploration. There are many interesting dimensions to the data, and the communities are very engaged: gamers are typically very interested in statistics and data about the games they play. This is just one area of the Linked Data landscape that the DataIncubator project is hoping to help explore.

Describing SPARQL Extension Functions

At the end of my recent post on Surveying and Classifying SPARQL Extensions I noted that I wanted to help encourage implementors to publish useful documentation about their SPARQL extensions. If you’re interested in the current state of that survey then you can check out my current spreadsheet listing known extension functions. There are more to add there, but it’s a good summary of the current state of play.

At VoCamp DC last week I did some work on designing a small vocabulary for describing SPARQL Extensions. The first draft of this is online here: SPARQL Extension Descriptions. There’s a little bit of background on the Vocamp wiki too, if you want to see my working :).

Here’s an example of the vocabulary in use, describing some extensions to the ARQ SPARQL Engine:


# Prefix declarations (sed:, dc:, rdfs:, foaf:) omitted for brevity

<http://jena.hpl.hp.com/ARQ/function> a sed:FunctionLibrary;
  dc:title "ARQ Function Library";
  dc:description "A collection of SPARQL extension functions implemented by the ARQ engine";
  foaf:homepage <http://jena.sourceforge.net/ARQ/library-function.html>;
  sed:includes <http://jena.hpl.hp.com/ARQ/function#sha1sum>.

<http://jena.hpl.hp.com/ARQ/function#sha1sum>
  a sed:ScalarFunction;
  rdfs:label "sha1sum";
  dc:description "Calculate the SHA1 checksum of a literal or URI.";
  sed:includedIn <http://jena.hpl.hp.com/ARQ/function>.

<http://jena.hpl.hp.com/ARQ#self> a sed:SparqlProcessor;
  foaf:homepage <http://jena.hpl.hp.com/ARQ>;
  rdfs:label "ARQ";
  sed:implementsLibrary <http://jena.hpl.hp.com/ARQ/function>.

Ideally, every URI associated with a filter function or property function should be dereferenceable, and terms from this vocabulary should be used to describe those functions. There’s a lot more detail that could be included, but I suspect this is sufficient to cover the primary use cases, i.e. documentation and validation.

The draft SPARQL 1.1 Service Description specification does cover some of this ground, but falls short in a few places, and I think some of what I’ve described here could usefully be folded into that specification without greatly extending its scope. But that’s a matter for the Working Group to decide.

One specific issue is that the specification doesn’t currently recognise “functional predicates” (to use Lee Feigenbaum’s preferred term; others include “property functions” and “magic properties”) as a distinct class of extensions. They clearly exist, so I think we should have a means to describe them. In fact, arguably they are the most important class of SPARQL extensions that need describing.

Filter functions are relatively well understood and can clearly be identified based on where they appear in a query. Language extensions will generate a parser error if an endpoint doesn’t support them, so will easily be caught. Functional predicates, however, use existing triple pattern syntax but typically trigger custom logic in the SPARQL processor, rather than actually appearing as triples within the dataset. Without the ability to dereference their URIs and identify them as functional predicates, a SPARQL engine will simply treat them as triple patterns and fail silently, rather than complaining that the extension is not supported.

The following example query illustrates this:


PREFIX list: <http://jena.hpl.hp.com/ARQ/list#>
PREFIX func: <http://jena.hpl.hp.com/ARQ/function#>
PREFIX dc: <http://purl.org/dc/terms/>
PREFIX ex: <http://example.org/vocab/>

SELECT ?doc ?contributor WHERE {
   ?doc dc:modified ?created.
   ?doc ex:authors ?authorList.
   ?authorList list:member ?author.
   LET ( ?contributor := ?author )
   FILTER ( ?created < func:now() )
}

The above query contains three extensions: a language extension (LET); a filter function (func:now()); and a functional predicate (list:member). Without prior knowledge of that predicate, or the ability to dereference its URI, there’s no way to know that list:member is an extension rather than an ordinary triple pattern that the query author is attempting to match.

I’d like to urge all implementors to consider making their extension URIs dereferenceable. The schema I’ve drafted is very lightweight so shouldn’t be difficult to support. I’m also very happy to take comments on its design. I’m intending it as a starting point for others to build upon.

Managing RDF Using Named Graphs

In this post I want to put down some thoughts around using named graphs to manage and query RDF datasets. This is prompted in large part by thinking about how best to use Named Graphs to support publishing of Linked Data, but also most recently by the first Working Drafts of SPARQL 1.1.

While the notion of named graphs for RDF has been around for many years now, the closest they have come to being standardised as a feature is through the SPARQL 1.0 specification, which refers to named graphs in its definition of the dataset for a SPARQL query. SPARQL 1.1 expands on this, explaining how named graphs may be used in SPARQL Update, and also as part of the new Uniform HTTP Protocol for Managing RDF Graphs document.

Named graphs are an increasingly important feature of RDF triplestores and are very relevant to the notion of publishing Linked Data, so their use and specification does bear some additional analysis.

What Are Named Graphs?

Named Graphs turn the RDF triple model into a quad model by extending a triple to include an additional item of information. This extra piece of information takes the form of a URI which provides some additional context to the triple with which it is associated, providing an extra degree of freedom when it comes to managing RDF data. The ability to group triples around a URI underlies features such as:

  • Tracking provenance of RDF data — here the extra URI is used to track the source of the data; especially useful for web crawling scenarios
  • Replication of RDF graphs — triples are grouped into sets, labelled by a URI, that may then be separately exchanged and replicated
  • Managing RDF datasets — here the set of triples may be an entire RDF dataset, e.g. all of dbpedia, or all of musicbrainz, making it easier to identify and query subsets within an aggregation
  • Versioning — the URI identifies a set of triples, and that URI may be separately described, e.g. to capture the creation & modification dates of the triples in that set, who performed the change, etc.
  • Access Control — by identifying sets of triples we can then record access control related metadata

…and many more. There’s some useful background available on Named Graphs in general in a paper about NG4J, and specifically on their use in OpenAnzo.

Clearly there’s some degree of overlap between these different approaches, but then you’d expect that given that they’re all built on what is a fairly simple extension to the RDF model. Two of the key differentiators are:

  • Granularity: i.e. does the named graph relate to a discrete identifiable subset of a dataset, e.g. every statement about a specific resource, or does it identify the dataset itself, e.g. dbpedia
  • Concreteness: does the named graph relate to how the data is actually being managed or stored, or does it instead reflect some other useful partitioning of the data?

One of the nice things about the simplicity of Named Graphs is that you can do so many things with that extra degree of freedom, i.e. by managing quads rather than triples.

Exchanging Named Graphs

Clearly if we’re working with Named Graphs then it would be useful if there were a way to exchange them. Being able to serialize RDF quads would allow a complete Named Graph to be transferred between stores. Actually, for some uses of Named Graphs this may not be required. For example, if I’m using Named Graphs as a means to track which triples came from which URIs during a web crawl, I only need to serialize the quads if I decide to move data between stores, not as part of the basic functionality.

Unsurprisingly, none of the standard RDF serializations are capable of representing Named Graphs; however, there are two serializations that have been developed to support their interchange: TriG and TriX. TriG is a plain text format, which is a variant of Turtle, while TriX is a highly normalized XML format for RDF that includes the ability to name graphs.
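
For illustration, here’s a small, hypothetical Named Graph serialized as TriG (the graph name and triples are invented): it looks like a Turtle document wrapped in a named block.

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# The URI before the braces names the graph; the triples inside belong to it
<http://example.org/graphs/crawl/2009-11-20> {
    <http://example.org/people/alice> a foaf:Person ;
        foaf:name "Alice" .
    <http://example.org/doc/1> dcterms:creator <http://example.org/people/alice> .
}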

Named Graphs in SPARQL 1.0

Let’s look at how Named Graphs are used in SPARQL 1.0 and in the SPARQL 1.1 drafts. SPARQL 1.0 explains that a query executes against an RDF Dataset which “…represents a collection of graphs. An RDF Dataset comprises one graph, the default graph, which does not have a name, and zero or more named graphs, where each named graph is identified by an IRI. A SPARQL query can match different parts of the query pattern against different graphs“.
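
As a minimal sketch of what this looks like (the graph and vocabulary URIs here are hypothetical), the following query builds its dataset from a default graph and one named graph and matches a pattern against each:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?person ?name
FROM <http://example.org/graphs/core>
FROM NAMED <http://example.org/graphs/crawl/2009-11-20>
WHERE {
  # Matched against the default graph
  ?person a foaf:Person .
  # Matched only against the named graph
  GRAPH <http://example.org/graphs/crawl/2009-11-20> {
    ?person foaf:name ?name .
  }
}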

In practice one uses the FROM and FROM NAMED clauses to identify the default and named graphs, and the GRAPH keyword to apply triple pattern matches to specific graphs. There are a few things to observe here already, some of which are consequences of the above, some from wording in the SPARQL protocol:

  • A SPARQL endpoint may not support Named Graphs at all
  • A SPARQL endpoint may let you define an arbitrary dataset for your query. Some open endpoints will fetch data by dereferencing the URIs mentioned in the FROM/FROM NAMED clauses, but that’s quite rare; mainly for efficiency, cost, and security reasons.
  • A SPARQL endpoint may not let you define the dataset for your query, i.e. it might use a fixed dataset scoped to some backing store. Any definition of the dataset in the protocol request or query is either optional, or must match the definition of the endpoint
  • A SPARQL endpoint may let you define the default graph to be used in a query, but may not be willing/able to do arbitrary graph merges. For example in an endpoint containing dbpedia and geonames, you might be able to select FROM one of them, but not both.
  • A SPARQL endpoint may be backed by a triple store that is organized around the model of an RDF Dataset, and therefore has a fixed default graph and any number of multiple named graphs. This limits flexibility of constructing the dataset for a query, as it is fixed by the underlying storage model.
  • A SPARQL endpoint may let you query graphs that don’t physically exist in the underlying triplestore. Such a synthetic graph may be, for example, the merge of all Named Graphs in the triple store.

There may be other variations, but I’m aware of implementations and endpoints that exhibit each of those outlined above. The important thing to realise is that while SPARQL doesn’t place any restrictions on how you use named graphs, implementation decisions of the endpoint and/or the underlying triple store may place some limits on how they can be used in queries. The other important point to draw out is that the set of named graphs exposed through a SPARQL query interface may be different from the set of named graphs managed in the backing storage. This is most obvious in the case of synthetic graphs.

Synthetic graphs are a very useful feature as they can provide some useful abstraction over how the underlying data is managed and how it is queried.

For example, one might use a large number of separate named graphs when managing data, thereby making it easy to merge and manage data from different sources (e.g. a web crawl). Some applications use thousands of very small Named Graphs simply because they’re easier to manage. By using a synthetic graph which exposes all of the data through a SPARQL endpoint as if it were in a single graph, it’s possible to abstract over those details of storage. There are a few stores that support this kind of technique, and it can be pushed further by making the definition of the synthetic graph more flexible, e.g. the set of all graphs that are valid between particular dates, or the set of all graphs that are related by a specific URI pattern. This approach can help abstract away management/modelling issues that are necessary for dealing with concerns like versioning.
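
Even when data is split across many small graphs, that extra degree of freedom remains visible to queries. As a sketch (resource and graph URIs are hypothetical), here is a provenance-style query asking which graphs in the dataset assert a name for a particular resource:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Find every named graph that contains a foaf:name for this resource
SELECT ?g ?name
WHERE {
  GRAPH ?g {
    <http://example.org/people/alice> foaf:name ?name .
  }
}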

Named Graphs in SPARQL 1.1

Let’s look at how SPARQL 1.1 might impact on the above scenarios. I use “might” advisedly as it’s still early days; we’ve only just had the first public Working Drafts and so the state of play might change.

Section 4.1 of the SPARQL 1.1 Update draft notes that: “If an update service is managing some graph store, then there is no presumption that this exactly corresponds to any RDF dataset offered by some query service. The dataset of the query service may be formed from graphs managed by the update service but the dataset requested by the query can be a subset of the graphs in the update service dataset and the names of those graphs may be different. The composition of the RDF dataset may also change.”

So basically the set of RDF graphs exposed by a SPARQL 1.1 Update service may be disjoint from the set exposed by a Query service at the same endpoint. This will always be the case if the Query endpoint exposes any synthetic graphs. These, presumably overlapping, sets make sense from the perspective of wanting some flexibility in how data is managed versus how it is queried. It’s likely that we’ll see implementations offer a range of options, with the most likely case being that the “core” set of graphs is identical, but that an additional set may be available for querying.

SPARQL 1.1, as it currently stands, includes a Uniform HTTP Protocol for Managing RDF Graphs. I’m very happy to see this and think that it’s an important part of the picture for publishing RDF data on the web in a RESTful way. As part of the overall Linked Data message we’ve been saying that “your website is your API”; that by assigning clear stable URIs to things in your system and then exposing both human and machine-readable data at those URIs, Linked Data just drops out of the design. And this is also clearly a RESTful approach.

But to make things completely RESTful we need to be able not only to read data from those URIs, but also to update the data at those URIs using the uniform protocol that HTTP defines. I was always a little wary of SPARQL Update because it seemed like it might supplant a more RESTful option, but I’m encouraged by the presence of this working draft that this won’t be the case. However, I don’t think the draft goes far enough in a few places: I’d like to have the ability to make changes to individual statements within a graph, as well as just whole graphs, using techniques like Changesets.

The draft currently doesn’t get into the issues surrounding how URIs might be managed on a server, instead deferring that to the implementation. But I think it’s an important topic to explore, so let’s devote some time to it here.

Approaches to Managing Named Graphs on the Web

For the most part the mapping of graph management to the web is uncontroversial; the four HTTP verbs of GET, PUT, POST and DELETE have obvious and intuitive meanings. Some of the subtleties arise out of issues such as how URIs are assigned to graphs, and what those URIs identify.

Client Managed Graph Identifiers

There are two ways that URIs can be assigned to graphs managed in a networked store. The first and simplest is that the client assigns all URIs. To create a new graph and populate it with data, we just PUT to a new URI. Starting from a base URI, distinct from any SPARQL endpoint the service might expose, the client can build out a URI space for the graphs by just PUTting to URIs. In this scenario one really only needs GET, PUT, and DELETE. POST doesn’t have any clear role, but could be used to handle, e.g., submissions of Changesets.
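
A quick sketch of what that might look like on the wire; the host, path and payload here are purely illustrative:

PUT /graphs/abc HTTP/1.1
Host: store.example.org
Content-Type: text/turtle

<http://example.org/doc/1> <http://purl.org/dc/terms/title> "An example document" .

A 201 Created response would tell the client that a new graph now exists at /graphs/abc; subsequent GET, PUT and DELETE requests on that URI would retrieve, replace or remove it.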

Even with this simple style of client-side URIs for graphs, there’s one wrinkle we need to address. As I explained at the start of the post, there may be several different reasons why someone is using Named Graphs. Using the graph identifier to keep track of the source of the data is a fairly common requirement. So this means you have several options for how URIs might be supplied:

  1. /graphs/abc — here the client is building out a collection of named graphs whose identifiers all share a common prefix, with each having a suffix. We may end up with a relatively flat structure or a hierarchical one, e.g. /graphs/abc/123. There’s no implicit requirement that graph URIs that have a hierarchical arrangement have any formal relationship, but this does have the useful property that the URIs are hackable.
  2. /graphs/http://www.example.org/abc — this is similar to the above except the unique portion of each graph name is a complete URI. This would probably need to be encoded but I’ve omitted that for readability. This approach is useful when using Named Graphs to track the source URI of a graph.
  3. /graphs?graph=http://www.example.org/abc — this is a variant of the second option but moving the graph identifier out into a parameter rather than allowing it to be put into the path info of the base URI. I think typically the value of the parameter would be a full rather than a relative URI, but a server could support resolving URIs against a base.

It’s clear that while Option 1 provides nice clean identifiers for graphs, it is ultimately limiting for scenarios where the graph may have another “natural” identifier, e.g. its source. For Options 2 & 3 we have to deal with URL encoding (especially if the URI itself contains parameters). Personally, of the two alternative options, I think 3 is nicer, if only aesthetically. I’m not aware of any problems or limitations with performing an HTTP PUT to a URI with parameters: it is the full Request URI, including any parameters, that identifies the resource being created or updated.
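
For example, a PUT in the style of Option 3, with the source URI encoded into a query parameter (the host and payload are, again, purely illustrative):

PUT /graphs?graph=http%3A%2F%2Fwww.example.org%2Fabc HTTP/1.1
Host: store.example.org
Content-Type: text/turtle

<http://www.example.org/abc> <http://purl.org/dc/terms/title> "Data derived from this source" .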

Server Assigned Graph Identifiers

A server managing Named Graphs may not allow clients to assign graph identifiers. For example, the server may want to enforce a particular naming convention for graphs. This might also be useful for clients, e.g. if they want to throw some data into a named graph as a scratch store. What constraints does this scenario impose?

Firstly, it would require the client to POST the data to be stored to a generic graphs collection (/graphs); the server would then determine the graph URI and return an HTTP 201 response with a Location header indicating where the client can find the data it has just stored. This way the client would know where to find the data and could then use further requests (GET/PUT/POST/DELETE) to manage it.
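
As a sketch of that exchange (the host and the server-assigned identifier are invented for the example):

POST /graphs HTTP/1.1
Host: store.example.org
Content-Type: text/turtle

<http://example.org/doc/1> <http://purl.org/dc/terms/title> "An example document" .

to which the server might respond:

HTTP/1.1 201 Created
Location: http://store.example.org/graphs/0f3a9c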

To support tracking of the source of a graph, one might allow a graph parameter to be added to the URI. And, to avoid a client having to maintain a local mapping from the original graph URI to the stored alias, the server could store the value of the graph parameter as metadata associated with the graph it creates. The server could then support a GET request on /graphs?graph=X, returning a 302 redirect to the URI which is acting as a local alias for graph X. The client could then PUT/POST/DELETE that resource. If a client sent a repeated POST request identifying the same graph URI, the server could allow this, returning a 303 See Other response rather than a 201.
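
Continuing the same hypothetical sketch, if an earlier POST had included ?graph=http%3A%2F%2Fwww.example.org%2Fabc, a client that only knows the original graph URI could later look up the local alias:

GET /graphs?graph=http%3A%2F%2Fwww.example.org%2Fabc HTTP/1.1
Host: store.example.org

with the server responding:

HTTP/1.1 302 Found
Location: http://store.example.org/graphs/0f3a9c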

It’s also possible to support a hybrid approach in which a client may PUT to any URI with a base of /graphs, but disallow the use of graph ids that start with http://. For those URIs, the server could require that a client let it assign the id, supporting the graph parameter as described earlier in this section.

There’s no right or wrong way here. The differences fall out of the different ways we can map graph management onto the HTTP protocol. While much is fixed (methods, response codes and their semantics) if we are aiming to be RESTful, there are still some degrees of freedom to play with in the mapping. The SPARQL 1.1 uniform protocol specification doesn’t address this, so perhaps there’s room for the community to standardise best practices or conventions. However I think it’d be useful to at least see some informational text in the document.

Conclusions

Named Graphs are an important part of the overall technical framework for managing, publishing and querying RDF and Linked Data, and it’s important to understand the trade-offs in different approaches to using them. Hopefully this post is a step in the right direction.

If anyone has any strong opinions on how they think Named Graphs should be managed RESTfully, then please feel free to comment on this posting. I’m very interested to hear your thoughts.

One thing that interests me is: how can we use Named Graphs to support publishing of Linked Data? That’s something I’ll follow up on in a separate post.
