I’m pleased to announce the first iteration of a Java API for FOAF based around the Jena semantic web toolkit.
The API, which I’ve dubbed “foaf-beans”, is an attempt to provide a number of convenience classes that will allow Java developers to quickly get to grips with reading and writing FOAF data. With this in mind the API provides a thin layer of abstraction which hides much of the RDF processing, instead presenting the user with simple factory classes that create FOAFReader and FOAFWriter objects for reading and writing respectively. These objects generate and process simple Java Beans that should play nicely with other Java APIs and toolkits (particularly JSP, JSTL, etc).
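The post doesn’t include sample code, but to make the bean-centric design concrete, here’s a toy sketch in the same spirit: a hypothetical PersonBean plus a naive writer that serializes it by string-building. Every name here is my invention, and the real foaf-beans presumably builds a Jena model rather than concatenating strings.

```java
public class FoafBeanSketch {

    // Hypothetical bean of the kind such an API might emit (my guess,
    // not foaf-beans' actual class): plain getters/setters, JSP/JSTL friendly.
    public static class PersonBean {
        private String name;
        private String mbox;
        public String getName() { return name; }
        public void setName(String n) { name = n; }
        public String getMbox() { return mbox; }
        public void setMbox(String m) { mbox = m; }
    }

    // Toy writer: emits a minimal FOAF RDF/XML document for one person.
    public static String write(PersonBean p) {
        return "<rdf:RDF xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\""
             + " xmlns:foaf=\"http://xmlns.com/foaf/0.1/\">\n"
             + "  <foaf:Person>\n"
             + "    <foaf:name>" + p.getName() + "</foaf:name>\n"
             + "    <foaf:mbox rdf:resource=\"" + p.getMbox() + "\"/>\n"
             + "  </foaf:Person>\n"
             + "</rdf:RDF>";
    }

    public static void main(String[] args) {
        PersonBean p = new PersonBean();
        p.setName("Alice");
        p.setMbox("mailto:alice@example.org");
        System.out.println(write(p));
    }
}
```

The appeal of the design is that application code only ever touches the bean; the RDF serialization details stay behind the writer.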
This is an interesting posting from Charles McCathieNevile to the rdf-interest group discussing how to correctly document an RDF Schema.
- Ensure that the terms are annotated with labels and comments
- Flag annotations with their language code, and seek translations into other languages
- Use SKOS or custom properties to embed actual examples in the schema
- Publish schema and documentation at the namespace URI
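Putting those suggestions together, a documented schema term might look something like this sketch (the namespace, term, and wording are invented for illustration):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:skos="http://www.w3.org/2004/02/skos/core#">
  <rdf:Property rdf:about="http://example.org/ns#interest">
    <!-- labels and comments, flagged with language codes -->
    <rdfs:label xml:lang="en">interest</rdfs:label>
    <rdfs:label xml:lang="fr">intérêt</rdfs:label>
    <rdfs:comment xml:lang="en">A topic that interests the person.</rdfs:comment>
    <!-- an embedded example via SKOS -->
    <skos:example xml:lang="en">Alice has an interest in RDF.</skos:example>
    <!-- point back to the namespace URI where the schema is published -->
    <rdfs:isDefinedBy rdf:resource="http://example.org/ns#"/>
  </rdf:Property>
</rdf:RDF>
```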
Must remember to apply these to my own schema, and also see if we can get the FOAF schema similarly up to scratch.
Using the foaf:interest property it’s possible for me to describe my interests (musical, technical, etc) in my FOAF profile. The term has been specified so that it has a range of foaf:Document, with the implication that the foaf:topic of that foaf:Document is what I’m interested in. Seems a bit convoluted? Maybe, but there are benefits…
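The indirection looks like this in RDF/XML (the name and URIs are made up for illustration): my interest points at a document, and the document’s topic is the thing I’m actually interested in.

```xml
<foaf:Person xmlns:foaf="http://xmlns.com/foaf/0.1/"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <foaf:name>Alice</foaf:name>
  <foaf:interest>
    <!-- the range of foaf:interest is a Document... -->
    <foaf:Document rdf:about="http://www.w3.org/RDF/">
      <!-- ...whose topic is the actual subject of interest -->
      <foaf:topic rdf:resource="http://example.org/topics/semantic-web"/>
    </foaf:Document>
  </foaf:interest>
</foaf:Person>
```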
Bob DuCharme is looking for public collections of RDF.
He’s compiled an initial list and is looking for further examples of, ideally large, data sets.
Yesterday my father-in-law fell 20ft from the Severn Bridge. If you’re in the South West you may have seen it reported on the news, there’s some coverage and even a photo story of the three hour rescue on the BBC website.
The good news is that he’s basically OK, although he had to have surgery last night to deal with two broken legs and a collapsed lung. He’s now stable, and doesn’t have any other serious internal injuries.
It’s a weird experience seeing something that concerns your family on the news like this. In fact my mother-in-law rang him after hearing a report about an accident, so it was through the local news that we first found out. We still don’t have all the details about how the accident happened, but it seems likely that there will be an inquiry. For now we’ve been poring over the pictures trying to imagine how it happened. He was in the towers carrying out a lighting inspection when he fell; he works as an electrician carrying out maintenance of the bridge. He has no memory of the fall itself.
Classifier4J is a Java text classification library that includes a text summariser and a Bayesian classifier. It was my interest in the latter that led me to play with the API recently, as I wanted to demonstrate to some colleagues the ease with which one can use Bayesian classification to create a content filter/recommender. Well, it’s easy if all the hard work is done for you in a library!
The Classifier4J API is very easy to use, and you can plug a Bayesian classifier into an application with very few lines of code.
One of the things that intrigued me about the API design was that it separates out the Classifier from the storage of the words and their probabilities. The API comes with a simple in-memory implementation and a JDBC Words Data Source which stores the data in a database table.
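To show why that separation matters, here’s a self-contained toy re-implementation of the pattern, not Classifier4J’s actual API (all names below are mine): the classifier talks to word storage only through a small interface, so swapping the in-memory store for a JDBC or RDF-backed one means implementing four methods.

```java
import java.util.HashMap;
import java.util.Map;

public class TinyBayes {

    // The storage abstraction: the classifier never knows where counts live.
    interface WordsDataSource {
        void addMatch(String word);
        void addNonMatch(String word);
        int matchCount(String word);
        int nonMatchCount(String word);
    }

    // In-memory implementation; a JDBC or RDF-backed version would
    // implement the same interface against a table or a graph.
    static class InMemoryWordsDataSource implements WordsDataSource {
        private final Map<String, int[]> counts = new HashMap<>();
        private int[] get(String w) {
            return counts.computeIfAbsent(w, k -> new int[2]);
        }
        public void addMatch(String w) { get(w)[0]++; }
        public void addNonMatch(String w) { get(w)[1]++; }
        public int matchCount(String w) { return get(w)[0]; }
        public int nonMatchCount(String w) { return get(w)[1]; }
    }

    private final WordsDataSource words;
    TinyBayes(WordsDataSource words) { this.words = words; }

    void teachMatch(String text) {
        for (String w : text.toLowerCase().split("\\W+")) words.addMatch(w);
    }
    void teachNonMatch(String text) {
        for (String w : text.toLowerCase().split("\\W+")) words.addNonMatch(w);
    }

    // Naive scoring: multiply per-word odds, then map back to a probability.
    double classify(String text) {
        double odds = 1.0;
        for (String w : text.toLowerCase().split("\\W+")) {
            double m = words.matchCount(w) + 1;    // Laplace smoothing
            double n = words.nonMatchCount(w) + 1;
            odds *= m / n;
        }
        return odds / (odds + 1);
    }

    public static void main(String[] args) {
        TinyBayes c = new TinyBayes(new InMemoryWordsDataSource());
        c.teachMatch("cheap pills buy now");
        c.teachNonMatch("semantic web schema annotation");
        System.out.println(c.classify("buy cheap pills"));
    }
}
```

An RDF-backed data source would just be another WordsDataSource implementation, which is exactly the experiment described below.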
It occurred to me that it’d be an interesting experiment to create an implementation of the data source interface that stored the data as RDF.
Why RDF? Because then we’d be able to share and aggregate the results of training classifiers.
For example I could export and share a classifier trained to spot spam, semantic web topics, or any number of other categories. The classifiers could be imported into both desktop applications (e.g. Thunderbird) as well as web applications. For example I might train a classifier to spot articles that I’m interested in, and then upload that configuration into a content management system and have it mine that data for material I may be interested in — hence “bayesian agents”.
By tying my exported bayesian probabilities to my FOAF file, an aggregator could merge my data with that of others known to share similar interests. Trust is another factor that might determine whether my data gets used.
Anyone have any comments on this? Is anyone doing anything similar already? (They must be…)
I’ll try and hack something up when I get a few minutes.
For the RDF I was thinking of something like the following:
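(The post’s actual RDF sketch is behind the link below; purely as a strawman, with every term invented by me rather than taken from the post, the training data might be modelled as per-word, per-category counts like this:)

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bayes="http://example.org/ns/bayes#">
  <bayes:WordCount>
    <bayes:word>semantic</bayes:word>
    <bayes:category rdf:resource="http://example.org/categories/semweb"/>
    <bayes:matchCount>12</bayes:matchCount>
    <bayes:nonMatchCount>1</bayes:nonMatchCount>
  </bayes:WordCount>
</rdf:RDF>
```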
Read More »
I’ve been doing some playing with a neat tool called URLinfo. It’s a simple form and customizable bookmarklet that allows you to reflect on a given URL to discover all sorts of interesting information: related links, validators, del.icio.us bookmarks, blog backlinks, and more. You can even carry out some basic textual analysis on the page.
The tool does this by delegating the actual hard work to a number of other existing services. So even if you don’t find URLinfo useful in itself, it provides a nicely categorized list of other useful web tools.
Which makes me wonder: which of the other services have XML/RSS/RDF export options, and how easy would it be to aggregate the output to create higher level services?
For example URLinfo links to nine different blog aggregator/search engines that provide a “backlinks from this URL” feature. Would be nice to have a single view across all those services, but for now URLinfo is a nice start.
The only service I can see missing is FOAF Explorer. I’ve mailed in a suggestion to incorporate this and other FOAF tools.