Open Data Camp Pitch: Mapping data ecosystems

I’m going to Open Data Camp #4 this weekend. I’m really looking forward to catching up with people and seeing what sessions will be running. I’ve been toying with a few session proposals of my own and thought I’d share an outline for this one to gauge interest and get some feedback.

I’m calling the session: “Mapping open data ecosystems”.

Problem statement

I’m very interested in understanding how people and organisations create and share value through open data. One of the key questions that the community wrestles with is demonstrating that value, and we often turn to case studies to attempt to describe it. We also develop arguments to convince both publishers and consumers of data that “open” is a positive.

But, as I’ve written about before, the open data ecosystem consists of more than just publishers and consumers. There are a number of different roles, and value is created and shared between them. This creates a value network that includes both tangible (e.g. data, applications) and intangible (e.g. knowledge, insight, experience) value.

I think that if we map these networks we can get more insight into what roles people play and what makes a stable ecosystem, and better understand the needs of different types of user. For example, we can compare open data ecosystems with more closed marketplaces.

The goal

Get together a group of people to:

  • map some ecosystems using a suggested set of roles, e.g. those we are individually involved with
  • discuss whether the suggested roles need to be refined
  • share the maps with each other, to look for overlaps, draw out insights, validate the approach, etc

Format

I know Open Data Camp sessions are self-organising, but I was going to propose a structure to give everyone a chance to contribute, whilst also generating some output. Assuming an hour session, we could organise it as follows:

  • 5 mins review of the background, the roles and approach
  • 20 mins group activity to do a mapping exercise
  • 20 mins discussion to share maps, thoughts, etc
  • 15 mins discussion on whether the approach is useful, refine the roles, etc

The intention here is to generate some outputs that we can take away. Most of the session will be group activity and discussion.

Obviously I’m open to other approaches.

And if no-one is interested in the session then that’s fine. I might just wander round with bits of paper and ask people to draw their own networks over the weekend.

Let me know if you’re interested!


Mega-City One: Smart City

“A smart city is an urban development vision to integrate multiple information and communication technology (ICT) and Internet of Things (IoT) solutions in a secure fashion to manage a city’s assets – the city’s assets include, but are not limited to, local departments’ information systems, schools, libraries, transportation systems, hospitals, power plants, water supply networks, waste management, law enforcement, and other community services…ICT allows city officials to interact directly with the community and the city infrastructure and to monitor what is happening in the city, how the city is evolving, and how to enable a better quality of life. Through the use of sensors integrated with real-time monitoring systems, data are collected from citizens and devices – then processed and analyzed. The information and knowledge gathered are keys to tackling inefficiency.” – Smart City, Wikipedia

We’d like to thank the Fforde Foundation for grant funding this project. We’re also grateful to the Fictional Cities Catapult for ongoing advice and support.

In this post we share some insights from early work by our lead researcher Thursday Next. Thursday has recently been leading a team carrying out an assessment of Mega-City One against our smart city maturity model.

Housing

Homelessness is rare among the official citizenry of Mega-City One. Considerable investment has been made in building homes for its rapidly growing population. Self-contained city blocks encourage close-knit communities who identify very strongly with their individual blocks.

Citizens enjoy the ability to live, shop and socialise together. Some even choose to spend their entire lives within the secure environment provided by their home block, each of which can house up to 50,000 people. Each block provides immediate access to hospitals, gyms, leisure activities, schools and shopping districts. Everything a citizen needs is available on their doorstep.

Transport

Mega-City One boasts a huge variety of transportation systems, covering every form of travel. Pedestrians are able to use the Eeziglide and Pedway systems, whilst mass transit is provided by Sky-Rail and other public transit systems.

Roads are adequately sized and are home to a range of autonomous vehicles. Indeed these vehicles are so spacious and reliable that many citizens choose to live in them permanently.

Transport in Mega-City One is reliable, efficient and typically only faces issues during large-scale emergencies (e.g. the Apocalypse War, robot uprising and dark judge visitations).

Education and training

While education is freely available to all citizens, there is little need for many to follow a formal education pathway. Ready access to robot butlers and high levels of automation mean that citizens rarely need to work. Many citizens choose to embrace hobbies and follow vocational training, e.g. in human taxidermy or training as professional gluttons.

But, for those citizens who display a strong aptitude, there are always opportunities in the Justice Department. A rigorous programme of physical and educational training is available. Individualised learning pathways mean that citizens can find employment in a variety of public sector roles.

Leisure

Leisure is the primary pursuit of many citizens and there are many opportunities and means of participating. A culture of innovation surrounds the leisure sector, which includes a range of new sports such as Sky Surfing, Batgliding and PowerBoarding.

Citizens are able to quickly learn of new opportunities meaning that crazes often sweep the city (see, for example, Boinging).

Health

Mega-City One is almost completely self-sufficient. Food is primarily created from artificial or synthetic sources. Popular brands like Grot Pot provide a low-cost, balanced diet. These are supplemented with imported produce such as Munce, which is sourced from artisan-led Cursed Earth communities.

Environmental Services

Weather data and control infrastructure in Mega-City One is highly developed. The Justice Department have long had control over local weather and climate conditions, allowing them to provide optimum conditions for citizens. Weather control has also factored into policing, e.g. during large-scale rioting and other disasters.

There is a strong culture of recycling in Mega-City One and there have been citizen-led movements encouraging greater environmental awareness. The city’s Resyk centres ensure that nothing (and nobody) goes to waste.

Policing and Emergencies

Little needs to be said about Mega-City One’s crime and justice department. It is an exemplar of integrated and optimised policing solutions. The Justice Department are able to react rapidly to issues and are glad to offer a personalised service for citizens.

While data from homes, public areas and “eye in the sky” cameras are fed into central systems, the actual delivery of justice is federated. Sector Houses provide local justice services across the city. This is supplemented by Citi-Def forces that handle community policing and enforcement activities in individual city blocks. Mega-City One has also embraced predictive policing through its small but effective Psi Division.

We hope this post has helped to highlight a number of important smart city innovations. Exploring how these have been operationalised and optimised to deliver services to citizens will be covered in future research. Please get in touch if you’d like us to undertake a maturity assessment of your fictional city!


A river of research, not news

I already hate the phrase “fake news”. We have better words to describe lies, disinformation, propaganda and slander, so let’s just use those.

While the phrase “fake news” might originally have been used to refer to hoaxes and disinformation, it’s rapidly becoming a meaningless term used to refer to anything you disagree with. Trump’s recent remarks are a case in point: unverified news is something very different.

Of course this is all on a sliding scale. Many news outlets breathlessly report on scientific research. This can make for fun, if eye-rolling, reading. Advances in AI and the discovery of alien mega-structures are two examples that spring to mind.

And then there’s the way in which statistics and research are given a spin by newspapers or politicians. This often glosses over key details in favour of getting across a political message or point scoring. Today I was getting cross about Theresa May’s blaming of GPs for the NHS crisis. Her remarks are based on a report recently published by the National Audit Office. I haven’t seen a single piece of coverage link to the NAO press release or the high-level summary (PDF), so you’ll either have to accept their remarks or search for it yourself.

Organisations like Full Fact do an excellent job of digging into these claims. They link the commentary to the underlying research or statistics alongside a clear explanation. NHS Choices’ Behind the Headlines fills a similar role, but focuses on the reporting of medical and health issues.

There’s also a lot of attention focused on helping to surface this type of fact checking and explanation via search results. Properly digging into statistics and presenting them clearly is, I suspect, a time-consuming exercise, especially if you’re hoping to present a neutral point of view.

What I think I’d like, though, is a service that brings all of those different sources together. To literally give me the missing links between research, news and commentary.

But rather than aggregating news articles or fact checking reports to give me a feed, or what we used to call a “river of news”, why not present a river of research instead? Let me see the statistics or reports that are being debated and then let me jump off to see the variety of commentary and fact checking associated with them.

That way I could choose to read the research or a summary of it, and then decide to look at the commentary. Or, more realistically, I could at least see the variety of ways in which a specific report is being presented, described and debated. That would be a useful perspective I think. It would shift the focus away from individual outlets and help us find alternative viewpoints.
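To make that idea a little more concrete, here is a rough sketch, in Python, of the kind of data model such a service might be built around: each entry is anchored on the primary source (a report or statistical release) rather than on a news article, with the commentary and fact checks attached to it. The type and field names are my own invention, purely for illustration.

```python
# Illustrative only: a possible data model for a "river of research" feed,
# where the primary source is the anchor and coverage hangs off it.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Commentary:
    outlet: str   # e.g. a newspaper, a politician's statement, a fact checker
    url: str
    kind: str     # e.g. "news", "fact-check", "official-summary"

@dataclass
class ResearchItem:
    title: str        # the report or statistical release itself
    publisher: str    # e.g. "National Audit Office"
    url: str
    published: str    # publication date, ISO 8601
    commentary: List[Commentary] = field(default_factory=list)

# The "river" is then just a reverse-chronological list of ResearchItem
# objects, so a reader starts from the primary source and fans out to the
# various ways it has been reported and checked.
river: List[ResearchItem] = []
```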

I doubt that this would become anyone’s primary way to consume the news. But it could be interesting to those of us who like to dig behind the headlines. It would also be useful as a research tool in its own right. In the face of a consistent lack of interest from news outlets in linking to primary sources, this might be something that could be crowd-sourced.

Does this type of service already exist? I suspect there are similar efforts around academic research, but I don’t recall seeing anything that covers a wider set of outputs including national and government statistics.


Donate to the commons this holiday season

Holiday season is nearly upon us. Donating to a charity is an alternative form of gift giving that shows you care, whilst directing your money towards helping those that need it. There are a lot of great and deserving causes you can support, and I’m certainly not going to tell you where you should donate your money.

But I’ve been thinking about the various ways in which I can support projects that I care about. There are a lot of them as it turns out. And it occurred to me that I could ask friends and family who might want to buy me a gift to donate to them instead. It’ll save me getting yet another scarf, pair of socks, or (shudder) a brutalised toblerone.

One topic I’m interested in, as regular readers will know, is how we can create a sustainable commons: open data, open source, etc. So here’s a list of relevant donation options. I’m sharing it here in case you might find it useful too.

Open Source

Open Content & Data

Open Science

Open Standards and Rights

This isn’t meant as an exhaustive list. It’s just the organisations that immediately came to mind. Leave a comment if you’d like to suggest an addition.


The practice of open data

Open data is data that anyone can access, use and share.

Open data is the result of several processes. The most obvious one is the release process that results in data being made available for reuse and sharing.

But there are other processes that may take place before that open data is made available: collecting and curating a dataset; running it through quality checks; or ensuring that data has been properly anonymised.

There are also processes that happen after data has been published. Providing support to users, for example. Or dealing with error reports or service issues with an API or portal.

Some processes are also continuous. Engaging with re-users is something that is best done on an ongoing basis. Re-users can help you decide which datasets to release and when. They can also give you feedback on ways to improve how your data is published. Or how it can be connected and enriched against other sources.

Collectively these processes define the practice of open data.

The practice of open data covers much more than the technical details of helping someone else access your data. It covers a whole range of organisational activities.

Releasing open data can be really easy. But developing your open data practice can take time. It can involve other changes in your organisation, such as creating a more open approach to data sharing. Or getting better at data governance and management.

The extent to which you develop an open data practice depends on how important open data is to your organisation. Is it part of your core strategy or just something you’re doing on a more limited basis?

The breadth and depth of the practice of open data surprises many people. It is best learned by doing: going through the process of opening a dataset, however small, provides useful insight that can help identify where further learning is needed.

One aspect of the practice of open data involves understanding what data can be open, what can be shared and what must stay closed. Moving data along the data spectrum can unlock more value. But not all data can be open.

An open data practitioner works to make sure that data is at the right point on the data spectrum.

An open data practitioner will understand the practice of open data and be able to use those skills to create value for their organisation.

Often I find that when people write about “the state of open data” what they’re actually writing about is the practice of open data within a specific community. For example, the practice of open data in research, or the practice of open government data in the US, or the UK.

Different communities are developing their open data practices at different rates. It’s useful to compare practices so we can distil out the useful, reusable elements. But we must acknowledge that these differences exist. That open data can fulfil a different role and offer a different value proposition in different communities. However there will obviously be common elements to those practices; the common processes that we all follow.

The open data maturity model is an attempt to describe the practice of open data. The framework identifies a range of activities and processes that are relevant to the practice of open data. It’s based on years of experience across a range of different projects. And it’s been used by both public and private sector organisations.

The model is designed to help organisations assess and improve their open data practice. It provides a tool-kit to help you think about the different aspects of open data practice. By using a common framework we can benchmark our practices against those in other organisations. Not as a way to generate leader-boards, but as a way to identify opportunities for sharing our experiences to help each other develop.

If you try the model and find it useful, then let me know. And if you don’t find it useful, then let me know too. Hearing what works and what doesn’t is how I develop my own open data practice.

Discogs: a business based on public domain data

When I’m discussing business models around open data I regularly refer to a few different examples. Not all of these have well developed case studies, so I thought I’d start trying to capture them here. In this first write-up I’m going to look at Discogs.

In an attempt to explore a few different aspects of the service I’m going to:

How well that will work I don’t know, but let’s see!

Discogs: the service

Discogs is a crowd-sourced database about music releases: singles, albums, artists, etc. The service was launched in 2000. In 2015 it held data on more than 6.6 million releases. As of today there are 7.7 million releases. That’s 30% growth from 2014-15 and around 16% growth in 2015-2016. The 2015 report and this Wikipedia entry contain more details.

The database has been built from the contributions of over 300,000 people. That community has grown about 10% in the last six months alone.

The database has been described as one of the most exhaustive collections of discographical metadata in the world.

The service has been made sustainable through its marketplace, which allows record collectors to buy and sell releases. As of today there are more than 30 million items for sale. A New York Times article from last year explained that the marketplace was generating 80,000 orders a week and was on track to do $100 million in sales, of which Discogs takes an 8% commission.

The company has grown from a one-man operation to having 47 employees around the world, and the website now has 20 million visitors a month and over 3 million registered users. So approximately 1% of visitors also contribute to the database.

In 2007 Discogs added an API to allow anyone to access the database. Initially the data was made available under a custom data licence which included attribution and no derivatives clauses. The latter encouraged reusers to contribute to the core database, rather than modify it outside of the system. This licence was rapidly dropped (within a few months, as far as I can tell) in favour of a public domain licence. This has subsequently transitioned to a Creative Commons CC0 waiver.

The API has gone through a number of iterations. Over time the requirement to use API keys has been dropped, rate limits have been lifted and, since 2008, full data dumps of the catalogue have been available for anyone to download. In short, the data has become increasingly open and accessible to anyone who wants to use it.
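As a very rough illustration of how accessible the data now is, here is a minimal Python sketch of fetching a single release over the public API. The endpoint and response fields are based on the Discogs developer documentation at the time of writing; treat the details, and the example release id, as indicative rather than definitive.

```python
# A minimal sketch: fetch basic metadata for one release from the Discogs API.
import requests

def get_release(release_id: int) -> dict:
    response = requests.get(
        f"https://api.discogs.com/releases/{release_id}",
        # Discogs asks clients to identify themselves via a User-Agent header.
        headers={"User-Agent": "OpenDataEcosystemNotes/0.1"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    release = get_release(249504)  # example id used in the Discogs docs
    print(release.get("title"), "-", release.get("year"))
```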

Wikipedia lists a number of pieces of music software that use the data. In May 2012 Discogs and The Echo Nest announced a partnership which would see the Discogs database incorporated into The Echo Nest’s Rosetta Stone product, which was being sold as a “big data” product to music businesses. It’s unclear to me if there’s an ongoing relationship. But The Echo Nest were acquired by Spotify in 2014 and have a range of customers, so we might expect that the Discogs data is being used regularly as part of their products.

Discogs: the data ecosystem

Looking at the various roles in the Discogs data ecosystem, we can identify:

  • Steward: Discogs is a service operated by Zink Media, Inc. They operate the infrastructure and marketplace.
  • Contributor: The team of volunteers curating the website, along with the community support and leadership roles on the Discogs team.
  • Reusers: The database is used in a number of small music software applications and potentially by other organisations like The Echo Nest and their customers. More work is required here to understand this aspect.
  • Aggregator: The Echo Nest aggregates data from Discogs and other services, providing value-added services to other organisations on a commercial basis. The Echo Nest in turn support additional reusers and applications.
  • Beneficiaries: Through the website, the information is consumed by a wide variety of enthusiasts, collectors and music stores. A larger network of individuals and organisations is likely supported through the APIs and aggregators.

Discogs: the data infrastructure

To characterise the model we can identify:

  • Assets: the core database is available as open data. Most of this is available via the data dumps, although the API also exposes some additional data and functionality, including user lists and marketplace entries. It’s not clear to me how much data is available on the historical pricing in the marketplace. This might not be openly available, in which case it would be classified as shared data available only to the Discogs team.
  • Community: the Contributors, Reusers and Aggregators are all outlined above.
  • Financial Model: the service is made sustainable through the revenue generated from marketplace transactions. Interestingly, the marketplace wasn’t originally part of the core service but was added based on user demand. This clearly provided a means for the service to become more sustainable and supported the growth of its staff and office space.
  • Licensing: I wasn’t able to find any details on other partnerships or deals, but the entire data assets of the business are in the public domain. It’s the community around the dataset and the website that has meant that Discogs has continued to grow whilst other efforts have failed.
  • Incentives: as with any enthusiast-driven website, the incentives are around creating and maintaining a freely available, authoritative resource. The marketplace provides a means for record collectors to buy and sell releases, whilst the website itself provides a reference and a resource in support of other commercial activities.

Exploring Discogs as a data infrastructure using Ostrom’s principles, we can see that:

While it is hard to assess any community from the outside, the fact that both the marketplace and contributor communities are continuing to grow suggests that these measures are working.

I’ll leave this case study with the following great quote from Discogs’ founder, Kevin Lewandowski:

See, the thing about a community is that it’s different from a network. A network is like your Facebook group; you cherrypick who you want to live in your circle, and it validates you, but it doesn’t make you grow as easily. A web community, much like a neighborhood community, is made up of people you do not pluck from a roster, and the only way to make order out of it is to communicate and demonstrate democratic growth, which I believe we have done and will continue to do with Discogs in the future.

If you found this case study interesting and useful, then let me know. It’ll encourage me to do more. I’m particularly interested in your views on the approach I’ve taken to capture the different aspects of the ecosystem, infrastructure, etc.

Checking Fact Checkers

As of last month, Google News attempts to highlight fact checking articles. Content from fact checking organisations will be tagged so that their contribution to on-line debate can be more clearly identified. I think this is a great move and a first small step towards addressing wider concerns around the use of the web for disinformation and a “post truth” society.

So how does it work?

Firstly, news sites can now advertise fact checking articles using a pending schema.org extension called ClaimReview. The mark-up allows a fact checker to indicate which article they are critiquing, along with a brief summary of what aspects are being reviewed.
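To give a flavour of the markup, here is an illustrative ClaimReview example, generated as JSON-LD using Python. The property names follow the schema.org ClaimReview type; the URLs, organisation and rating values are invented purely for illustration.

```python
# Illustrative only: build a ClaimReview description and emit it as JSON-LD.
import json

claim_review = {
    "@context": "http://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2016-11-01",
    "url": "https://factchecker.example/reviews/example-claim",  # hypothetical
    "claimReviewed": "A short summary of the claim being checked",
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://news.example/original-article",  # hypothetical
    },
    "author": {
        "@type": "Organization",
        "name": "Example Fact Checking Organisation",
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly false",
    },
}

# In practice this JSON-LD would be embedded in the fact checking article's
# HTML, inside a <script type="application/ld+json"> element.
print(json.dumps(claim_review, indent=2))
```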

Metadata alone is obviously ripe for abuse. Anyone could claim any article is a fact check. So there’s an additional level of editorial control that Google layers on top of that metadata. They’ve outlined their criteria in their help pages. These seem perfectly reasonable: it should be clear which facts are being checked, sources must be cited, organisations must be non-partisan and transparent, etc.

It’s the latter aspect that I think is worth digging into a little more. The Google News announcement references the International Fact Checking Network and a study on fact checking sites. The study, by the Duke Reporters’ Lab, outlines how they identify fact checking organisations. Again, they mention both transparency of sources and organisational transparency as being important criteria.

I think I’d go a step further and require that:

  • Google’s (and others’) lists of approved fact checking organisations are published as open data (a sketch of what an entry in such a list might look like follows below)
  • The lists are cross-referenced with identifiers from sources like OpenCorporates that will allow independent verification of ownership, etc.
  • Fact checking organisations publish open data about their sources of funding and affiliations
  • Fact checking organisations publish open data, perhaps using Schema.org annotations, about the dataset(s) they use to check individual claims in their articles
  • Fact checking organisations licence their ClaimReview metadata for reuse by anyone
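As a sketch of how the first two items might look in practice, here is a purely hypothetical example of a single entry in such an open register. None of the field names, identifiers or URLs come from a real published list; they simply show the kind of information that would allow independent verification.

```python
# Hypothetical example of one entry in an open register of fact checkers.
import json

entry = {
    "name": "Example Fact Checking Organisation",
    "homepage": "https://factchecker.example",
    "opencorporates_id": "gb/00000000",  # placeholder company identifier
    "ifcn_signatory": True,              # International Fact Checking Network
    "funding_disclosure": "https://factchecker.example/funding",
    "claimreview_feed": "https://factchecker.example/claimreview.json",
    "metadata_licence": "CC-BY-4.0",
}

print(json.dumps(entry, indent=2))
```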

Fact checking is an area that benefits from the greatest possible transparency. Open data can deliver that transparency.

Another angle to consider is that fact checking may be carried out by more than just media organisations. Jon Udell has written a couple of interesting pieces on annotating the wild-west of information flow and bird-dogging the web that highlight the potential role of annotation services in helping to fact check and create constructive debate and discussion on-line.