Patterns of Intermediation

Jon Udell has suggested that what we need is an architecture of intermediation. In that piece he seems to be talking mostly about the need for toolkits that make it easy to build this kind of intermediary. I agree with Simon Willison that Greasemonkey is a pretty powerful way of doing this, especially after looking at Matt Biddulph’s web site mash-up example.
What interests me are the design patterns that support that architecture of intermediation. Lots of people are writing bookmarklets, AJAX sites and Greasemonkey scripts, but I don’t yet see a lot of consolidation around the basic techniques, what tasks they solve, their relative strengths and weaknesses, etc.
Discovering design patterns involves a little digging, so let’s see if a bit of classification might help.
Consider three basic axes: data source, action, and trigger. Or: where does the intermediary get its data, how is that data used, and what triggers it?
On the “data source” axis we have:

  1. The URL, including both path and query string
  2. Embedded metadata, e.g. HEAD of an HTML document
  3. The document body, extracted either from semantic markup, or just brute-force scraped from the text
  4. An external service, invoked asynchronously
  5. A combination of these

On the “action” axis we have:

  1. Redirect, e.g. take the user to another site
  2. Annotate, e.g. add markup or data to the current page
  3. Generate Interface, e.g. generate and populate an HTML form to prompt interactions with further services

On the “trigger” axis we have:

  1. User initiated, e.g. activating a bookmarklet
  2. Event driven, e.g. when a document is loaded

By way of examples, Udell’s LibraryLookup bookmarklet would be classified as: Data 1, Action 1, Trigger 1.
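To make that classification concrete, here’s a hedged sketch of what a LibraryLookup-style intermediary boils down to. This is not Udell’s actual code — the ISBN pattern and the catalog URL are illustrative assumptions — but it shows the shape: Data 1 (the URL), Action 1 (redirect), Trigger 1 (user click).

```javascript
// Data 1 (the URL), Action 1 (redirect), Trigger 1 (user initiated).
// The regex and catalog URL below are illustrative, not from Udell's bookmarklet.

// Pull an ISBN-like token (nine digits plus a digit or X) out of a product URL.
function extractIsbn(url) {
  var match = url.match(/\/(\d{9}[\dX])(?:[\/?]|$)/);
  return match ? match[1] : null;
}

// Wrapped as a bookmarklet, the click handler would be roughly:
//   var isbn = extractIsbn(location.href);
//   if (isbn) location.href = 'http://catalog.example.org/search?isbn=' + isbn;
```

Everything the intermediary needs is already in the address bar, which is what makes this the simplest cell in the grid.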
Biddulph’s mashup would be: Data 4(+1), Action 2, Trigger 2.
This experimental delicious posting tool would be classified as Data 1, Action 3, Trigger 1. You get the general idea.
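The classifications above are regular enough to capture as plain data. In this sketch the axis labels are taken from the lists above; the `describe` helper is my own naming, just to show how a triple of axis numbers maps back to a readable pattern label:

```javascript
// Axis labels, 1-based to match the numbered lists above.
var DATA = ['URL', 'Embedded metadata', 'Document body', 'External service', 'Combination'];
var ACTION = ['Redirect', 'Annotate', 'Generate Interface'];
var TRIGGER = ['User initiated', 'Event driven'];

// Turn a {data, action, trigger} triple into a human-readable classification.
function describe(c) {
  return DATA[c.data - 1] + ' / ' + ACTION[c.action - 1] + ' / ' + TRIGGER[c.trigger - 1];
}

// Udell's LibraryLookup bookmarklet (Data 1, Action 1, Trigger 1):
describe({data: 1, action: 1, trigger: 1}); // → 'URL / Redirect / User initiated'
```

A catalogue of intermediaries tagged this way would make the gaps in the grid — combinations nobody has built yet — easy to spot.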
It’s not a great step from classifying intermediaries in this fashion to documenting patterns in the traditional form, covering naming (an important step), motivation, and general design. From there we can start to share techniques and communicate more effectively about how to build various kinds of intermediaries.

SPARQLing Days and a Twinkle update

Unfortunately, despite being dead keen and a sucker for a free glass of wine, I’m not going to be able to make it to the SPARQLing days in Tuscany. Like Danny I was pleased to be invited, but unfortunately the timing isn’t good. Maybe next time. I’m consoling myself with the prospect of drinking Triples in Amsterdam again this year.
My next release of Twinkle is long overdue. I got as far as supporting ARQ 0.9.3 and output of results into a table and other UI refinements, but then got side-tracked with digging further into how the API is implemented and providing Andy Seaborne with some, hopefully useful, feedback.
ARQ 0.9.4 is now available which adopts the latest syntax revisions to SPARQL (move to turtle style, dropping of FROM; not sure I agree with either, but more on that another time).
I’ll probably rush out a 0.3 release of Twinkle in the next week or so, and then aim to support querying over persistent stores for 0.4. We’ve been progressing with our triple store experiments at work and have quickly amassed about 40GB of data to query over (and more to follow). Should be fun to play with.

My First Computer

Sinclair ZX Spectrum
A scan of the promotional flier for the Sinclair ZX Spectrum that I carried round for months prior to my parents buying me a 48K Spectrum for Christmas.
Click through to the larger image to read the marketing text. Here are some extracts:
“Professional power — personal computer price!”
“Your ZX Spectrum comes with a mains adaptor and all the necessary leads to connect to most cassette recorders and TVs (colour or black and white)”
“…later this year there will be Microdrives for massive amounts of extra on-line storage, plus an RS232/network interface board”
“Sound — BEEP command with variable pitch and duration”
“High speed LOAD & SAVE — 16K in 100 seconds via cassette, with VERIFY and MERGE for programs and separate data files.”
I learnt to program from those handy Spectrum BASIC manuals mentioned in the advert, supplemented with weekly doses of Input Magazine; never did get the hang of assembly or machine code though. Not beyond a few peeks and pokes lifted from the ever-trusty Crash magazine, covers of which (along with CV&G) still adorn some of my old school books lurking in the attic.