Life with PlayStation

Earlier today I was playing with the new Life with PlayStation application, which is available as a free upgrade to the older Folding@Home application that originally shipped with the PS3.
The new application looks like a step towards generalizing the existing interface, which is a “Google Earth-lite” style zoomable, pannable 3D globe, albeit with much less detail than its desktop equivalent. The main new feature is integrated weather reports and news feeds from the capital cities of 60 countries. You can read more about it on the website and watch a video demo.
What intrigued me was the possibility that Sony may decide to open this up further. They’re clearly expecting there to be more “channels”, which is their term for overlays that can be displayed on the globe. At present only the news channel, plus the older Folding@Home channel, are available, but it’d be fantastic if this was opened up to web hackers to allow geo apps to be delivered directly to the PlayStation. I’ve done some googling around but there doesn’t seem to be any discussion about how they intend to add new services, or whether there may be a developer kit.
There is a huge amount of creative work going on in the world of geo-hackery that could be re-targeted for delivery to the PS3 if Sony decide to embrace openness. Indeed, other than the currently fairly limited resolution of the map and the need for Sony to provide a way to feed content into their system, there seem to be few further obstacles.
I also noticed that the software license page explains that the application ships with a “simple cross-platform XML parser” and LiteSQL. An even more exciting leap would be to see a sandboxed JavaScript engine in there too, but let’s not run before we can walk!
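Given that the application already bundles an XML parser, it’s tempting to imagine a “channel” being nothing more than a small XML feed of geo-located items that the globe could overlay. As a thought experiment, here’s a minimal sketch in Python; every element and attribute name here is my own invention, not anything Sony has published:

```python
import xml.etree.ElementTree as ET

# A hypothetical "channel" feed: a list of geo-located items to draw on the globe.
# The format is invented for illustration only.
FEED = """\
<channel name="News">
  <item lat="51.5074" lon="-0.1278">
    <title>Headline from London</title>
  </item>
  <item lat="35.6762" lon="139.6503">
    <title>Headline from Tokyo</title>
  </item>
</channel>
"""

def parse_channel(xml_text):
    """Parse a channel feed into (channel_name, [(title, lat, lon), ...])."""
    root = ET.fromstring(xml_text)
    items = [
        (item.findtext("title"),
         float(item.get("lat")),
         float(item.get("lon")))
        for item in root.findall("item")
    ]
    return root.get("name"), items

name, items = parse_channel(FEED)
print(name, len(items))  # → News 2
```

A feed this simple could be produced by any existing geo-hacking tool, which is exactly why an open channel format would be such a low barrier to entry.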

The Web’s Rich Tapestry

This post was originally published on the Talis “Nodalities” blog.

We’ve all read books that linger in our memories. And there are any number of reasons why they might do so; a stirring tale or a thought-provoking argument, for example. One book that has stayed with me over the years is House of Leaves by Mark Z. Danielewski. It’s been described as “the Blair Witch” of haunted house tales, being the story of a house, the people who live there, and those who attempt to document the strange events and structure of the building. The book is quite a challenging read, as it is made up of overlapping narratives, documentary evidence from the investigators, and more. As a reader you’re assembling a narrative out of the interlocking pieces of text that the author presents you with.

But, while the tale is one of those slow-burning horror stories that does linger at the back of the mind, that’s not the primary reason why the book has stayed with me. It was the actual structure of the text that was so intriguing: the author has played with the printed form, including the basic layout of the print on the page, in an attempt to further promote the mythology of the story and to help convey the labyrinthine nature of the house. For example, a typical page might contain several different blocks of text, and much of the story is told through footnotes, footnotes to footnotes, and footnotes to those in turn. Certain words are coloured differently throughout the text. There are even blocks of text embedded in the page that you have to read downwards through several pages before returning to your starting point. As a reader you’re physically exploring the text much like the characters are exploring the house.

The book is basically a hypertext novel, and while certainly not the first to play with the printed form in this way, it was the first that I’d personally encountered. As a hypertext the book appeals to the technologist in me: I’ve given a number of talks over the past few years, and in many of these I’ve explored the evolution of hypertext systems. But I’ve also attempted to challenge people’s preconceptions about the medium of the web, just as House of Leaves challenged my preconceptions about the printed medium.

My most recent talk was last week at the ALPSP International Conference 2008 in Old Windsor. The talk, titled “The Web’s Rich Tapestry”, discussed the link as the basic medium of the web and reviewed how the blurring of boundaries between websites, services and data (aka “Web 2.0”) is enabled by increasingly rich linking between resources. This is part of a move from old broadcast models of information publishing to a more web-like network of interconnected peers, each contributing to a dense information medium. The ultimate endpoint of this trend is inherent in the vision of the Semantic Web, and will complete the change from a document-centric to a data-centric world. The Semantic Web, which is just a layer on top of the existing web, is still based on linking, albeit linking of a more fine-grained and meaningful nature.

The Semantic Web, just like the existing Web, will arrive through the actions of individuals, organizations and businesses, each contributing to the whole by sharing linked data sets; this process is already happening. And, like the Web, the more data is available, the more value there will be for everyone involved. I urged society publishers to begin more openly sharing their metadata and exploring the potential inherent in the Web of Data. I also attempted to do more than just evangelize the potential benefits of the Semantic Web and also tried to provide a few pointers towards where those benefits might be realized.

One obvious benefit relates to the generation of more traffic to content and services. For many publishers a sizeable proportion of their website traffic, if not the majority, is driven by Google referrals. This is an inherently fragile situation, but one that I believe is ultimately temporary. The scale of this traffic generation is obviously due in major part to the popularity of the Google search engine, but it is enabled by their ability to quickly and efficiently crawl websites in order to index content. This provides a large “surface area” to which Google can generate links. By publishing open data, information providers will be able to grow this surface area by at least an order of magnitude, due to the more fine-grained data publishing that the Semantic Web entails. All of this data can potentially generate new, highly relevant traffic to content and services.

The other area where the Semantic Web will pay off is in enabling much more sophisticated research and analysis tools, not just for academic researchers and students, but also for all of us in our everyday consumption of information. In my view there is too much of a focus on search and not enough on information visualisation and analysis tools. I pointed towards some very recent experiments which I think illustrate some of this potential, including Ubiquity and Freebase Parallax. Talis’s own Project Xiphos is also exploring the innovation that can follow from re-purposing publishing metadata, a topic that was particularly relevant to the ALPSP audience. In my new role as Programme Manager for the Talis Platform, I’m excited to begin exploring how we can start helping businesses to draw value from the rapidly growing Web of Data.

How about a DJ rather than a Genius?

There’s been plenty of commentary about the new Genius feature in iTunes. A recommendation engine is a nice new feature, but personally there are a couple of other features I’d like to see on my iPod, or in iTunes. These are more in the “reacquaint yourself with the music you already own” category rather than recommending new purchases.
For example, I use the shuffle feature quite a bit, usually when I just want some background music to blot out noise when commuting. But there’s no contextual navigation available from the “now playing” view. If you’re on shuffle, then you can only proceed to the next random track. But quite often I hear something and want to listen to the rest of the album, or more by the same artist. It’d be nice to be able to quickly switch from random order to album order with a couple of clicks, rather than having to navigate back through all of the menus again. Similarly, it’d be useful to be able to jump directly into that artist’s music in my collection from the same screen.
And rather than having a “genius” in the software, why not a DJ? (And I don’t mean in a cheesy voice over style!)
If listening to your collection on random is like listening to your own personal radio station, then where are the other “feature programming” playlists that you get from real radio stations? For example, how about randomly programming a “Blues Hour”, a Second Summer of Love special, a Radiohead retrospective, or a Mercury Prize Nominee playlist?
There’s plenty of metadata in iTunes and plenty more available from an increasingly wide array of sources, so why doesn’t the software provide us with a better interface onto it, supported by slightly more sophisticated software agents to help navigate and use it?
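The basic mechanics of such a “DJ” are not complicated: filter the library’s metadata by a theme, then shuffle within the theme. A minimal sketch in Python, using an invented toy library rather than iTunes’ actual data model:

```python
import random

# A toy music library; in practice these fields would come from track metadata.
LIBRARY = [
    {"artist": "Muddy Waters", "title": "Mannish Boy", "genre": "Blues", "year": 1955},
    {"artist": "B.B. King", "title": "The Thrill Is Gone", "genre": "Blues", "year": 1969},
    {"artist": "Radiohead", "title": "Paranoid Android", "genre": "Rock", "year": 1997},
    {"artist": "Radiohead", "title": "Everything in Its Right Place", "genre": "Rock", "year": 2000},
    {"artist": "808 State", "title": "Pacific State", "genre": "Dance", "year": 1989},
]

def dj_playlist(library, predicate, limit=None, seed=None):
    """Select tracks matching a theme and shuffle them, like a radio feature slot."""
    tracks = [t for t in library if predicate(t)]
    random.Random(seed).shuffle(tracks)
    return tracks[:limit]

# A "Blues Hour": everything tagged Blues, in random order.
blues = dj_playlist(LIBRARY, lambda t: t["genre"] == "Blues")

# A Radiohead retrospective, oldest first.
retro = sorted(
    dj_playlist(LIBRARY, lambda t: t["artist"] == "Radiohead"),
    key=lambda t: t["year"],
)
```

Richer programming, such as a Second Summer of Love special, would only need predicates over more interesting metadata (years, genres, external tags), which is exactly the data that is already out there.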
Implementing some of this might be possible through iTunes plugins, but some of the features would be better on the device itself. The hackability of the iPhone suggests that it might be a better platform for exploration than the iPod.

The Web’s Rich Tapestry

This week I co-chaired a plenary session at the ALPSP International Conference.
The goal of the session, titled “The Web’s Rich Tapestry” (abstract), was to discuss the continuing evolution of the web from a document-centric view of the world to one that was more data and link centric.
The first half of the session was presented by my friend and former colleague Geoff Bilder, Director of Strategic Initiatives at CrossRef. Geoff focused on discussing the nature of the link and its implementation both on the web and in early hypertext systems. He covered some of the power that was evident in these hypertext environments and the growing need for, and awareness of, features like stable, persistent links and multi-directional links, not just in scholarly communication (where they’re already very common) but more widely on the web.
I’ve explored this theme myself. It seems to me that what we’re doing is slowly rebuilding many of the features of early hypertext environments but in a more distributed, open and scalable fashion.
In my half of the talk I focused on the evolution towards the Semantic Web. I don’t normally write up talks in this way, but it proved a useful way to organize my thoughts on this occasion. My notes are reproduced below without much editing. The accompanying slides are on Slideshare.
(Note: this was a presentation for a non-technical audience, so there may not be much new content here for Planet RDF readers.)
