There’s a growing set of useful resources and guidance to help people run better remote meetings. I’ve been compiling a list of a few. At the risk of repeating other, better advice, I’m going to write down some brief tips for running remote meetings.
For a year or so I was chairing fortnightly meetings of the OpenActive standards group. Those meetings were an opportunity to share updates with a community of collaborators, get feedback on working documents and have debates and discussion around a range of topics. So I had to get better at doing it. I’m not sure whether I did, but here are a few things I learned.
I’ll skip over general good meeting etiquette (e.g. around circulating an agenda and working documents in advance), to focus on the remote bits.
Give people time to arrive. Just because everyone is attending remotely doesn’t mean that everyone will be able to arrive promptly. They may be working through technical difficulties, for example. Build in a bit of deliberate slack time at the start of the meeting. I usually gave it around 5-10 minutes. As people arrive, greet them and let them know this is happening. You can then either chat as a group or people can switch to emails, etc while waiting for things to start.
Call the meeting to order. Make it clear when the meeting is formally starting and you’ve switched from general chat and waiting for late arrivals. This will help ensure you have people’s attention.
Use the tools you have as a chair. Monitor side chat. Monitor the video feeds to check to see if people look like they have something to say. And, most importantly, mute people that aren’t speaking but are typing or have lots of background noise. You can usually avoid the polite dance around asking people to do that, or suffering in silence, by using the option to mute people. Just tell them you’ve done that. I usually had Zoom meetings set up so that people were muted on entry.
Do a roll call. Ask everyone to introduce themselves at the start. Don’t just ask everyone to do that at once, as they’ll talk over each other. Go through people individually and ask them to say hello or do an introduction. This helps with putting voices to names (if not everyone is on video), ensures that everyone knows how to mute/unmute and puts some structure to the meeting.
Be aware of when people are connecting in different ways. Some software, like Zoom, allows people to join in several ways. Be aware of when you have people on the phone as well as on video, especially if you’re presenting material. Try to circulate links either before or during the meeting so everyone can see them.
Use slides to help structure the meeting. I found that doing a screenshare of a set of slides for the agenda and key talking points helps to give people a sense of where you’re at in the meeting. So, for example, if you have four items on your agenda, have a slide for each topic, with key questions or decision points. It can help to focus discussion, keep people’s attention on the meeting (rather than a separate doc) and give people a sense of where you are. The latter is especially helpful if people are joining late.
Don’t be afraid of a quick recap. If people join a few minutes late, give them a quick recap of where you’re at and ask them to introduce themselves. I did this for anyone arriving a few minutes in, but not for someone dropping in 30 minutes into a one-hour meeting.
Don’t be afraid of silence or directly asking people questions. Chairing remote meetings can be stressful and awkward for everyone. It can be particularly awkward to ask a question and then sit in silence. Often this is because people are worried about talking over each other. Or they just need time to think. Don’t be afraid of a bit of silence. Doing a roll call to ask everyone individually for feedback can be helpful if you want to make decisions. Check in on people who have not said anything for a while. It’s slow, but it provides some order for everyone.
Keep to time. I tried very hard not to let meetings over-run even if we didn’t cover everything. People have other events in their calendars. Video and phone calls can be tiring. It’s better to wrap up at a suitable point and follow up on things you didn’t get to cover than to have half the meeting drop out at the end.
Follow up afterwards. Make sure to follow up after the meeting, especially if not everyone was able to attend. For OpenActive we videoed the calls and shared those, along with a summary of the discussion points.
Those are all things I consciously tried to get better at, and I think they helped meetings go more smoothly.
This article was first published in the February 2030 edition of Sustain magazine. Ten years since the public launch of GUIDE we sit down with its designers to chat about its origin and what’s made it successful.
It’s a Saturday morning and I’m sitting in the bustling cafe at Tyntesfield house, a National Trust property south of Bristol. I’m enjoying a large pot of tea and a slice of cake with Joe Shilling and Gordon Leith, designers of one of the world’s most popular social applications: GUIDE. I’d expected to meet somewhere in the city, but Shilling suggested this as a suitable venue. It turns out Tyntesfield plays a part in the origin story of GUIDE. So it’s fitting that we are here for the tenth anniversary of its public launch.
SHILLING: “Originally we were just playing. Exploring the design parameters of social applications.”
He stirs the pot of tea while Leith begins sectioning the sponge cake they’ve ordered.
SHILLING: “People did that more in the early days of the web. But Twitter, Facebook, Instagram…they just kind of sucked up all the attention and users. It killed off all that creativity. For a while it seemed like they just owned the space…But then TikTok happened…”
He pauses while I nod to indicate I’ve heard of it.
SHILLING: “…and small experiments like Yap. It was a slow burn, but I think a bunch of us started to get interested again in designing different kinds of social apps. We were part of this indie scene building and releasing bespoke social networks. They came and went really quickly. People just enjoyed them whilst they were around.”
Leith interjects around a mouthful of cake:
LEITH: “Some really random stuff. Social nets with built-in profile decay so they were guaranteed to end. Made them low commitment, disposable. Messaging services where you could only post at really specific, sometimes random times. Networks that only came online when their members were in precise geographic coordinates. Spatial partitioning to force separation of networks for home, work and play. Experimental, ritualised interactions.”
SHILLING: “The migratory networks grew out of that movement too. They didn’t last long, but they were intense. ”
LEITH: “Yeah. Social networks that just kicked into life around a critical mass of people. Like in a club. Want to stay a member…share the memes? Then you needed to be in its radius. In the right city, at the right time. And then keep up as the algorithm shifted it. Social spaces herding their members.”
SHILLING: “They were intense and incredibly problematic. Which is why they didn’t last long. But for a while there was a crowd that loved them. Until the club promoters got involved and then that commercial aspect killed it.”
GUIDE had a very different starting point. Flat-sharing in Bristol, the duo needed money. Their indie credibility was high, but what they were looking for was a more mainstream hit with some likelihood of revenue. The break-up of Facebook and the other big services had created an opportunity which many were hoping to capitalise on. But investment was a problem.
LEITH: “We wrote a lot of grant proposals. Goal was to use the money to build out a decent code base. Pay for some servers that we could use to launch something bigger”.
Shilling pours the tea, while Leith passes me a slice of cake.
SHILLING: “It was a bit more principled than that. There was plenty of money for apps to help with social isolation. We thought maybe we could build something useful, tackle some social problems, work with a different demographic than we had before. But, yeah, we had our own goals too. We had to take what opportunities were out there.”
LEITH: “My mum had been attending this Memory Skills group. Passing around old photos and memorabilia to get people talking and reminiscing. We thought we could create something digital.”
SHILLING: “We managed to land a grant to explore the idea. We figured that there was a demographic that had spent time connecting not around the high street or the local football club. But with stuff they’d all been doing online. Streaming the same shows. Revisiting old game worlds. We thought those could be really useful touch points and memory triggers too. And not everyone can access some of the other services.”
LEITH: “Mum could talk for hours about Skyrim and Fallout”.
SHILLING: “So we prototyped some social spaces based around that kind of content. It was during the user testing that we had the real eye-opener”.
The first iterations of the app that ultimately became GUIDE were rough. Shilling and Leith have been pretty open about their early failures.
LEITH: “The first iteration was basically a Twitch knock-off. People could join the group remotely, chat to each other and watch whatever the facilitator decided to stream.”
SHILLING: “Engagement was low. We didn’t have cash to license a decent range of content. The facilitators needed too much training on the streaming interface and real-time community management.”
LEITH: “I then tried getting a generic game engine to boot up old game worlds, so we could run tours. But the tech was a nightmare to get working. Basically needed different engines for different games”.
SHILLING: “Some of the users loved it, mainly those that had the right hardware and were already into gaming. But it didn’t work for most people. And again…I…we were worried about licensing issues”.
LEITH: “So we started testing a customised, open source version of Yap. Hosted chat rooms, time-limited rooms and content embedding…that ticked a lot of boxes. I built a custom index over the Internet Archive, so we could use their content as embeds”.
SHILLING: “There’s so much great stuff that people love in the Internet Archive. At the time, not many services were using it. Just a few social media accounts. So we made using it a core feature. It neatly avoided the licensing issues. We let the alpha testers run with the service for a while. We gave them and the memory service facilitators tips on hosting their own chats. And basically left them to it for a few weeks. It was during the later user testing that we discovered they were using it in different ways than we’d expected.”
Instead of having conversations with their peer groups, the most engaged users were using it to chat with their families. Grandparents showing their grandchildren stuff they’d watched, listened to, or read when they were younger.
SHILLING: “They were using it to tell stories”
Surrounded by the bustle in the cafe, we pause to enjoy the tea and cake. Then Shilling gestures around the room.
SHILLING: “We came here one weekend. To get out of the city. Take some time to think. They have these volunteers here. One in every room of the house. People just giving up their free time to answer any questions you might have as you wander around. Maybe, point out interesting things you might not have noticed? Or, if you’re interested, tell you about some of the things they love about the place. It was fascinating. I realised that’s how our alpha testers were using the prototype…just sharing their passions with their family.”
LEITH: “So this is where GUIDE was born. We hashed out the core features for the next iteration in a walk through the grounds. Fantastic cake, too.”
The familiar, core features of GUIDE have stayed roughly the same since that day.
Anyone can become a Guide and create a Room which they can use to curate and showcase small collections of public domain or openly licensed content. But no more than seven videos, photos, games or whatever else you can embed from the Internet Archive. Room contents can be refreshed once a week.
Rooms admit a maximum of five Visitors at a time. Everyone else waits in a lobby, with new Visitors admitted every twenty minutes. Audio feeds come only from the Guides, allowing them to chat to Visitors. Visitors, in turn, can only interact with Guides via a chat interface that requires building up messages, mostly questions, from a restricted set of words and phrases that can be tweaked by Guides for their specific Room. Each Visitor is limited to one question every five minutes.
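The Room mechanics described above are essentially a small admission-and-rate-limiting protocol. As a purely illustrative sketch (GUIDE's actual implementation isn't public, and all names here are my own):

```python
import time
from collections import deque

class Room:
    """Sketch of the Room limits described above. All names and
    structure are hypothetical; GUIDE's real code isn't public."""
    MAX_VISITORS = 5            # at most five Visitors at a time
    ADMIT_INTERVAL = 20 * 60    # admit new Visitors every twenty minutes
    QUESTION_COOLDOWN = 5 * 60  # one question per Visitor every five minutes

    def __init__(self, now=time.time):
        self.now = now                      # injectable clock for testing
        self.visitors = set()
        self.lobby = deque()                # everyone else waits here
        self.last_admission = float("-inf")
        self.last_question = {}             # visitor -> time of last question

    def join(self, visitor):
        """New arrivals always start in the lobby."""
        self.lobby.append(visitor)

    def admit(self):
        """Move waiting Visitors into the Room, at most once per interval."""
        t = self.now()
        if t - self.last_admission < self.ADMIT_INTERVAL:
            return []
        admitted = []
        while self.lobby and len(self.visitors) < self.MAX_VISITORS:
            visitor = self.lobby.popleft()
            self.visitors.add(visitor)
            admitted.append(visitor)
        if admitted:
            self.last_admission = t
        return admitted

    def ask(self, visitor, question):
        """Allow a question only if the Visitor's cool-down has elapsed."""
        t = self.now()
        if visitor not in self.visitors:
            return False
        if t - self.last_question.get(visitor, float("-inf")) < self.QUESTION_COOLDOWN:
            return False
        self.last_question[visitor] = t
        return True
```

The interesting design point is that all the friction (lobbies, cool-downs) is deliberate: it shapes behaviour rather than maximising throughput.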
LEITH: “The asymmetric interface, lobby system and cool-down timers were lifted straight from games. I looked up the average number of grandchildren people had. Turns out it’s about five, so we used that to size Rooms. The seven item limit was because I thought it was a lucky number. We leaned heavily on the Internet Archive’s bandwidth early on for the embeds, but we now mirror a lot of stuff. And donate, obviously.”
SHILLING: “The restricted chat interface has helped limit spamming and moderation. No video feeds from Guides means that the focus stays on the contents of the Room, not the host. Twitch had some problematic stuff which we wanted to avoid. I think it’s more inclusive.”
LEITH: “Audio only meant the ASMR crowd were still happy though”.
Today there are tens of thousands of Rooms. Shilling shows me a Room where the Guide gives tours of historical maps of Bath, mixing in old photos for context. Another, “Eleanor’s Knitting Room”, curates knitting patterns, the Guide alternating between knitting tips and cultural critiques.
Leith has a bookmarked collection of retro-gaming Rooms. Doom WAD teardowns and classic speed-run analysis for the most part.
In my own collection, my favourite is a Room showing a rota of Japanese manhole cover designs, the Guide an expert on Japanese art and infrastructure. I often have this one on a second screen whilst writing. The lobby wait time is regularly over an hour. Shilling asks me to share that one with him.
LEITH: “There are no discovery tools in Guide. That was deliberate from the start. Strictly no search engine. Want to find a Room? You’ll need to be invited by a Guide or grab a link from a friend”.
SHILLING: “Our approach has been to allow the service to grow within the bounds of existing communities. We originally marketed the site to family groups, and an older demographic. The UK and US were late adopters, the service was much more popular elsewhere for a long time. Things really took off when the fandoms grabbed hold of it.”
An ecosystem of recommendation systems, reviews and community Room databases has grown up around the service. I asked whether that defeated the purpose of not building those into the core app.
LEITH: “It’s about power. If we ran those features then it would be our algorithms. Our choice. We didn’t want that.”
SHILLING: “We wanted the community to decide how to best use GUIDE as social glue. There’s so many more creative ways in which people interact with and use the platform now”.
The two decline to get into discussion of the commercial success of GUIDE. It’s well-documented that the two have become moderately wealthy from the service. More than enough to cover the rent in the city centre. Shilling only touches on it briefly:
SHILLING: “No ads and a subscription-based service has kept us honest. The goal was to pay the bills while running a service we love. We’ve shared a lot of that revenue back with the community in various ways”.
Of course, all of this was possible before. YouTube and Twitch supported broadcasts and streaming for years, and many people used them in similar ways. But the purposeful design of a more dedicated interface highlights how constraints can shape a community and spark creativity. Removal of many of the asymmetries inherent in the design of those older platforms has undoubtedly helped.
While we finished the last of the tea, I asked them what they thought made the service successful.
SHILLING: “You can find, watch and listen to any of the material that people are sharing in GUIDE on the open web. Just Google it. But I don’t think people just want more content. They want context. And it’s people that bring that context to life. You can find Rooms now where there’s a relay of Guides running 24×7. Each Guide highlighting different aspects of the exact same collection. Costume design, narrative arcs and character bios. Historical and cultural significance. Personal stories. There’s endless context to discover around the same content. That’s what fandoms have understood for years.”
LEITH: “People just like stories. We gave them a place to tell them. And an opportunity to listen.”
I like to read things when I have time, to reduce distractions and give myself a chance to absorb several viewpoints rather than simply the latest, hottest takes.
I’ve fine-tuned my approach to managing my reading and research. A few of the tools and services have changed, but the essentials stay the same. If you’re interested, here’s how I’ve made things work for me:
Feedbin. Manages all my subscriptions for blogs, newsletters and more in one easily accessible location
Lots of sites still support RSS; it’s not dead, merely resting
Feedbin is great at discovering feeds if you just paste in a site URL. One of the magic parts of RSS
You can also subscribe to newsletters with a special Feedbin email address and they’ll get delivered to your reader. Brilliant. You’re not making me go back into my inbox, it’s scary in there.
Feedme. Feedbin allows me to read posts anywhere, but I use this Android app (there are others) as a client instead
Regularly syncs with Feedbin, so I can have all the latest unread posts on my phone for the commute or an idle few minutes
It provides a really quick interface to skim through posts and either immediately read them or add them to my “to read” list in Pocket…
Pocket. Mobile and web app that I basically use as a way to manage a backlog of things “to read”.
Gives me a clutter free (no ads!) way to read content either in the browser (which I rarely do) or on my phone
It has its issues with some content, but you can easily switch to a full web view
Not everything I want to read comes in via my feed reader, so I take links from Slack, Twitter or elsewhere and use the Pocket browser extension or its share button integration to stash things away for later reading. Basically, if it’s not a 1-2 minute read it goes into Pocket until I’m ready for it. Keeps the number of browser tabs under control too.
The offline content syncing makes it great for using on my commute, especially on the tube
The end result is a fully self-curated feed of interesting stuff. I’m no longer fighting someone else’s algorithm, so I can easily find things again.
I can minimise the number of organisations I’m following on Twitter, and just subscribe to their blogs. Also helps to buck the trend towards more email newsletters, which are just blogs but you’re all in denial.
Also helps to reduce the number of distractions, and fight the pressure to keep checking Twitter in case I’ve missed something interesting. It’ll be in the feed reader when I’m ready.
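Under the hood this whole setup rests on plain RSS. As a minimal sketch of what a feed reader does (standard library only, nothing to do with Feedbin's actual API), fetching and skimming a feed looks something like:

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Return (title, link) pairs for the items in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    entries = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)").strip()
        link = item.findtext("link", default="").strip()
        entries.append((title, link))
    return entries

def fetch_feed(url):
    """Download a feed and return its entries."""
    with urllib.request.urlopen(url) as response:
        return parse_rss(response.read())
```

A real reader also handles Atom feeds, character encodings and caching headers, which is exactly why services like Feedbin are worth paying for.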
Long live RSS!
It’s about time we stopped rebooting social networks and rediscovered more flexible ways to create, share and read content online. Go read
Say it with me. Go on.
LONG LIVE RSS!
(*) not actually everyone, but all the cool kids anyway. Alright, just us nerds, but we loved it.
(**) not actually more civilised, but it was more decentralised
I’m interested in how people share information, particularly data, on social networks. I think it’s something to which it’s worth paying attention, so we can ensure that it’s easy for people to share insights and engage in online debates.
There’s lots of discussion at the moment around fact checking and similar ways that we can improve the ability to identify reliable and unreliable information online. But there may be other ways that we can make some small improvements in order to help people identify and find sources of data.
The design of the Twitter and Facebook platforms constrain how we can share information. Within those constraints people have, inevitably, adopted various patterns that allow them to publish and share content in preferred ways. For example, information might be shared:
As a link to a page, where the content of the tweet or post is just the title
As a link to a page, but with a comment and/or hashtags for context
As a screenshot, e.g. of some text, chart or something. This usually has some commentary attached. Some apps enable this automatically, allowing you to share a screenshot of some highlighted text
As images and photographs, e.g. of a printed page or report (or even sometimes a screenshot of text from another app)
In the first two examples there are always links that allow someone to go and read the original content. In fact that seems to be the typical intention: go read (or watch) this thing.
The other two examples are usually workarounds for the fact that it’s often hard to deep link to a section of a page or video.
Sometimes it’s just not possible because the information of interest isn’t in a bookmarkable section of a page. Or perhaps the user doesn’t know how to create that kind of deep link. Or they may be further constrained by a mobile app or other service that is restricting their ability to easily share a link. Not every application lets the web happen.
In some cases screenshotting may also be a conscious choice, e.g. posting a photo of someone’s tweet because you don’t want to directly interact with them.
Whatever the reason, this means there is usually no link in the resulting post. Which often makes it difficult for a reader to find the original content. While social media is reducing friction in sharing, it’s increasing friction around our ability to check the reliability and accuracy of what’s been shared.
If you tweet out a graph with some figures in a debate, I want to know where it’s come from. I want to see the context that goes with it. The ability to easily identify the source of shared content is, I think, part of “data forensics”.
So, what can we do to fix this?
Firstly, there’s more that could be done to build better ways to deep link into pages, e.g. to allow sharing of individual page elements. But people have been trying to do that on and off for years without much visible success. It’s a hard problem, particularly if you want to allow someone to link to a piece of text. It could be time for a standards body to have another crack at it. Or I might have missed some exciting progress, so please tell me if I have! But I think something like this would need some serious push behind it. You need support from not just web frameworks and the major CMS platforms, but also (probably) browser vendors.
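One concrete effort in this direction is the WICG “Text Fragments” proposal, where a `#:~:text=` fragment asks supporting browsers to scroll to and highlight a piece of text. As a rough sketch (the helper name is mine), building such a link is mostly just careful encoding:

```python
from urllib.parse import quote

def text_fragment_link(page_url, snippet):
    """Build a deep link using the Text Fragments syntax (#:~:text=...).
    Supporting browsers scroll to and highlight the quoted text."""
    # Encode everything, so characters like '&' and ',' (which are
    # significant in the fragment directive syntax) can't break the link.
    return f"{page_url}#:~:text={quote(snippet, safe='')}"
```

Browsers that don't understand the syntax simply ignore the fragment, so the link degrades gracefully to a plain page link.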
Secondly, Twitter and Facebook could allow us some more flexibility. For example, allow apps to post additional links and/or other metadata that are then attached to posts and tweets. It won’t address every scenario, but it could help. It also feels like a relatively easy thing for them to do as it’s a natural extension of some existing features.
Thirdly, we could look at ways to attach data to the images people are posting, regardless of what the platforms support. I’ve previously wondered about using XMP packets to attach provenance and attribution information to images. Unfortunately it doesn’t work for every format and it turns out that most platforms strip embedded metadata anyway. This is presumably due to reasonable concerns around privacy, but they could still white-list some metadata. We could maybe use steganography too.
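As an illustration of the general idea (this uses a PNG tEXt chunk rather than a full XMP packet, and as noted above most platforms may strip it anyway), attaching a source URL to an image can be sketched with nothing but the standard library:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def add_text_chunk(png_bytes, keyword, text):
    """Insert a tEXt metadata chunk (e.g. a source URL) into a PNG,
    just before the closing IEND chunk."""
    if png_bytes[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    # tEXt chunk data is keyword, a NUL separator, then the text.
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    chunk = (struct.pack(">I", len(data)) + b"tEXt" + data
             + struct.pack(">I", zlib.crc32(b"tEXt" + data)))
    # IEND carries no data, so it is always the final 12 bytes.
    return png_bytes[:-12] + chunk + png_bytes[-12:]
```

Anything written this way survives a local save and re-open, but it only helps readers whose tools actually surface the metadata.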
But the major downside here is that you’d need a custom social media client or browser extension to let people see and interact with the data. So, again, that’s a massive deployment issue.
As things currently stand I think the best approach is to plan for visualisations and information to be shared, and design the interactions and content accordingly. Assume that your carefully crafted web page is going to be shared in a million different pieces. Which means that you should:
Include plenty of in-page anchors and use clear labelling to help people build links to relevant sections
Adapt your social media sharing buttons to not just link to the whole page, but also allow the user to share a link to a specific section
Design your Twitter cards and other social metadata. For example, is there a key graphic that would be best used as the page image?
Include links and source information on all of the graphs and infographics that you share. Make sure the link is short and persistent in case it has to be re-keyed from a screenshot
Provide direct ways to tweet and share out a graph that will automatically include a clearly labelled image that contains a link
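The last two suggestions mostly amount to constructing good share URLs. As a minimal sketch using Twitter's documented Web Intent URLs (the function and parameter names are my own), a share button for a specific section might generate:

```python
from urllib.parse import urlencode

def section_share_link(page_url, section_id, comment):
    """Build a Twitter Web Intent URL that shares a deep link to a
    specific section of a page, not just the page itself."""
    deep_link = f"{page_url}#{section_id}"
    return "https://twitter.com/intent/tweet?" + urlencode(
        {"text": comment, "url": deep_link})
```

The same pattern works for Facebook's share dialog or a mailto: link; the important part is that the shared URL points at the section (via an in-page anchor), so readers land on the evidence rather than the top of the page.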
With their globe-spanning satellite network nearing completion, Peter Linkage reports on some of the key milestones in the history of the British Hypertextual Society.
The British Hypertextual Society was founded in 1905 with a parliamentary grant from the Royal Society of London. At the time there was growing international interest in finding better ways to manage information, particularly scientific research. Undoubtedly the decision to invest in the creation of a British centre of expertise for knowledge organisation was also influenced by the rapid progress being made in Europe.
Paul Otlet‘s Universal Bibliographic Repertory and his ground-breaking postal search engine were rapidly demonstrating their usefulness to scholars. Otlet’s team had begun publishing the first version of their Universal Decimal Classification only the year before. Letters between Royal Society members during that period demonstrate concern that Britain was losing the lead in knowledge science.
As you might expect, the launch of the British Hypertextual Society (BHS) was a grand affair. The centrepiece of the opening ceremony was the Babbage Bookwheel Engine, which remains on show (and in good working order!) in their headquarters to this day. The Engine was commissioned from Henry Prevost Babbage, who refined a number of his father’s ideas to automate and improve on Ramelli’s Bookwheel concept.
While it might originally have been intended as only a centrepiece, it was the creation of this Engine that laid the groundwork for many of the Society’s later successes.
Competition between the BHS members and Otlet’s team in Belgium encouraged the rapid development of new tools. This included refinements to the Bookwheel Engine, prompting its switch from index cards to microfilm. Ultimately it was also instrumental in the creation of the United Kingdom’s national grid and the early success of the BBC.
In the 1920s, in an effort to improve on the Belgian Postal Search Service, the British Government decided to invest in its own solution. This involved reproducing decks of index cards and microfilm sheets that could be easily interchanged between Bookwheel Engines. The new, standardised electric engines were dubbed “Card Wheels”.
The task of distributing the decks and the machines to schools, universities and libraries was given to the recently launched BBC as part of its mission to inform, educate and entertain. Their microfilm version of the Domesday book was the headline grabbing release, but the BBC also freely distributed a number of scholarly and encyclopedic works.
These major advances in the distribution of knowledge across the United Kingdom led to Otlet moving to Britain in the early 1930s. A major scandal at the time, this triggered the end of many of the projects underway in Belgium and beyond. Awarded a senior position in the BHS, Otlet transferred his work on the Mundaneum to London.
Close ties between the BHS members and key government officials meant that the London we know today is truly the “World City” envisioned by Otlet. It’s interesting to walk through London and consider how so much of the skyline and our familiar landmarks are influenced by the history of hypertext.
The development of the Memex in the 1940s laid the foundations for both home and personal hypertext devices. Combining the latest mechanical and theoretical achievements of the BHS with some American entrepreneurship led to devices rapidly spreading into people’s homes. However, the device was the source of some consternation within the BHS as it was felt that British ideas hadn’t been properly credited in the development of that commercial product.
Of course we shouldn’t overlook the importance of the InterGraph in ensuring easy access to information around the globe. Designed to resist nuclear attack, the InterGraph used graph theory concepts developed by the BHS to create a world-wide mesh network between hypertext devices and sensors. All of our homes, cars and devices are part of this truly distributed network.
Tim Berners-Lee‘s development of the Hypertext Resource Locator was initially seen as a minor breakthrough. But it actually laid the foundations for the replacement of Otlet’s classification scheme and accelerated the creation of the World Hypertext Engine (WHE) and the global information commons. Today the WHE is ubiquitous. It’s something we all use and contribute to on a daily basis.
But, while we all contribute to the WHE, it’s the tireless work of the “Controllers of The Graph” in London that ensures that the entire knowledge base remains coherent and reliable. How else would we distinguish between reliable, authoritative sources and information published by any random source? Their work to fact check information, manage link integrity and ensure maintenance of core assets is a key feature of the WHE as a system.
Some have wondered what an alternate hypertext system might look like. Scholars have pointed to ideas such as Ted Nelson’s “Xanadu” as one example of an alternative system. Indeed it is one of many that grew out of the counter-culture movement in the 1960s. Xanadu retained many of the features of the WHE as we know it today, e.g. transclusion and micro-transactions, but removed the notion of a centralised index and register of content. This not only removed the ability to have reliable, bi-directional links, but would have allowed anyone to contribute anything, regardless of its veracity.
For many it’s hard to imagine how such a chaotic system would actually work. Xanadu has been dismissed as “a foam of ever-popping bubbles“. And a heavily commercialised and unreliable system of information is a vision to which few would subscribe.
Who would want to give up the thrill of seeing their first contributions accepted into the global graph? It’s a rite of passage that many reflect on fondly. What would the British economy look like if it were not based on providing access to the world’s information? Would we want to use a system that was not fundamentally based on the “Inform, Educate and Entertain” ideal?
This brings us to the present day. The launch of a final batch of satellites will allow the British Hypertextual Society to deliver on a long-standing goal whilst also enabling its next step into the future.
Launched from the British space centre at Goonhilly, each of the standardised CardSat satellites carries both a high-resolution camera and an InterGraph mesh network node. The camera will be used to image the globe in unprecedented detail. This will be used to ensure that every key geographical feature, including every tree and many large animals can be assigned a unique identifier, bringing them into the global graph. And, by extending the mesh network into space the BHS will ensure that the InterGraph has complete global coverage, whilst also improving connectivity between the fleet of British space drones.
It’s an exciting time for the future of information sharing. Let’s keep sharing what we know!
Of course this is all on a sliding scale. Many news outlets breathlessly report on scientific research. This can make for fun, if eye-rolling reading. Advances in AI and discovery of alien mega-structures are two examples that spring to mind.
And then there’s the way in which statistics and research is given a spin by the newspapers or politicians. This often glosses over key details in favour of getting across a political message or point scoring. Today I was getting cross about Theresa May’s blaming of GP’s for the NHS crisis. Her remarks are based on a report recently published by the National Audit Office. I haven’t seen a single coverage of the piece link to the NAO press release or the high-level summary (PDF), so you’ll either have to accept their remarks or search for it yourself.
Organisations like Full Fact do an excellent job of digging into these claims. They link the commentary to the underlying research or statistics alongside a clear explanation. In the same vein is NHS Choices' Behind the Headlines, which fills a similar role but focuses on the reporting of medical and health issues.
What I think I’d like though is a service that brings all those different services together. To literally give me the missing links between research, news and commentary.
But rather than aggregating news articles or fact checking reports to give me a feed, or what we used to call a "river of news", why not present a river of research instead? Let me see the statistics or reports that are being debated and then let me jump off to see the variety of commentary and fact checking associated with them.
That way I could choose to read the research or a summary of it, and then decide to look at the commentary. Or, more realistically, I could at least see the variety of ways in which a specific report is being presented, described and debated. That would be a useful perspective I think. It would shift the focus away from individual outlets and help us find alternative viewpoints.
I doubt that this would become anyone's primary way to consume the news. But it could be interesting to those of us who like to dig behind the headlines. It would also be useful as a research tool in its own right. In the face of a consistent lack of interest from news outlets in linking to primary sources, this might be something that could be crowd-sourced.
Does this type of service already exist? I suspect there are similar efforts around academic research, but I don’t recall seeing anything that covers a wider set of outputs including national and government statistics.
As of last month Google News attempts to highlight fact check articles. Content from fact checking organisations will be tagged so that their contribution to on-line debate can be more clearly identified. I think this is a great move and a first small step towards addressing wider concerns around use of the web for disinformation and a “post truth” society.
So how does it work?
Firstly, news sites can now advertise fact checking articles using a pending schema.org extension called ClaimReview. The mark-up allows a fact checker to indicate which article they are critiquing, along with a brief summary of which aspects are being reviewed.
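To make that concrete, here's a rough sketch of what ClaimReview markup might look like, expressed as JSON-LD and built in Python purely for illustration. The property names follow the schema.org ClaimReview type, but the organisation, URLs, claim text and rating scale are all invented.

```python
import json

# A minimal sketch of ClaimReview metadata as a JSON-LD document.
# Property names follow schema.org's ClaimReview type; the URLs,
# claim and rating values are hypothetical examples.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    # The fact-checking article itself, and who wrote it
    "url": "https://factchecker.example/reviews/gp-funding",
    "author": {
        "@type": "Organization",
        "name": "Example Fact Checkers",
    },
    # The claim being checked and the article it appeared in
    "claimReviewed": "GP contracts are the main cause of the NHS crisis",
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://news.example/articles/nhs-crisis",
    },
    # The verdict, on the fact checker's own rating scale
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly unsupported",
    },
}

# Serialised, this is the sort of payload that would sit in a
# <script type="application/ld+json"> block in the article's HTML.
print(json.dumps(claim_review, indent=2))
```

The article's URL, the reviewed claim and the verdict are all machine-readable, which is what lets an aggregator like Google News attach the fact check to the original story.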
Metadata alone is obviously ripe for abuse. Anyone could claim any article is a fact check. So there's an additional level of editorial control that Google layer on top of that metadata. They've outlined their criteria in their help pages. These seem perfectly reasonable: it should be clear what facts are being checked, sources must be cited, organisations must be non-partisan and transparent, etc.
Google’s (and other’s) lists of approved fact checking organisations are published as open data
The lists are cross-referenced with identifiers from sources like OpenCorporates that will allow independent verification of ownership, etc.
Fact checking organisations publish open data about their sources of funding and affiliations
Fact checking organisations publish open data, perhaps using Schema.org annotations, about the dataset(s) they use to check individual claims in their articles
Fact checking organisations licence their ClaimReview metadata for reuse by anyone
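As a sketch of what such a register might look like, here's a hypothetical open-data list of approved fact checkers, cross-referenced to OpenCorporates identifiers so that claims about ownership can be independently checked. All names, identifiers and URLs below are invented.

```python
import csv
import io

# Hypothetical open register of approved fact-checking organisations.
# Each row carries an OpenCorporates identifier (jurisdiction/company
# number) and a link to the organisation's funding disclosure.
REGISTER_CSV = """\
name,opencorporates_id,funding_disclosure_url
Example Fact Checkers,gb/01234567,https://factchecker.example/funding
Another Checker,us_de/7654321,https://another.example/about/funding
"""

register = list(csv.DictReader(io.StringIO(REGISTER_CSV)))

def opencorporates_url(entry):
    """Resolve an entry's identifier to its OpenCorporates company page."""
    return "https://opencorporates.com/companies/" + entry["opencorporates_id"]

for entry in register:
    print(entry["name"], "->", opencorporates_url(entry))
```

The point is less the format than the cross-referencing: once the register uses shared identifiers, anyone can join it against company filings or funding data without asking the publisher's permission.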
Fact checking is an area that benefits from the greatest possible transparency. Open data can deliver that transparency.
Another angle to consider is that fact checking may be carried out by more than just media organisations. Jon Udell has written a couple of interesting pieces on annotating the wild-west of information flow and bird-dogging the web that highlight the potential role of annotation services in helping to fact check and create constructive debate and discussion on-line.
I’ve been thinking a bit about “the commons” recently. Specifically, the global information commons that is enabled and supported by Creative Commons (CC) licences. This covers an increasingly wide variety of content as you can see in their recent annual review.
The review unfortunately doesn’t mention data although there’s an increasing amount of that published using CC (or compatible) licences. Hopefully they’ll cover that in more detail next year.
I’ve also been following with interest Tom Steinberg’s exploration of Digital Public Institutions (Part 1, Part 2). As a result of my pondering about the information and data commons, think there’s a couple of other types of institution which we might add to Tom’s list.
My proposed examples of digital public services are deliberately broad. They’re intended to serve the citizens of the internet, not just any one country.
But it’s also an archival challenge. The majority of that material will never be listened to, read or watched. Data will remain unanalysed. And in all likelihood it may disappear before anyone has had any chance to unlock its potential. Sometimes media needs time to find its audience.
This is why projects like the Internet Archive are so important. I think the Internet Archive is one of the greatest achievements of the web. If you need convincing then watch this talk by Brewster Kahle. If, like me, you're of a certain age then these two things alone should be enough to win you over.
I think we might see, and arguably need, more digital public institutions who are not just archiving great chunks of the web, but also working with that material to help present it to a wider audience.
I see other signals that this might be a useful thing to do. Think about all of the classic film, radio and TV material that is never likely to see the light of day again. Not just for rights reasons, but also because it's not HD quality or hasn't been cut and edited to reflect modern tastes. I think this is at least partly the reason why we see so many reboots and remakes.
Archival organisations often worry about how to preserve digital information. One tactic is to migrate between formats to ensure information remains accessible. What if we treated media the same way? E.g. by re-editing or remastering it to make it engaging to a modern audience? Here's an example of modernising classic scientific texts, and another that is remixing Victorian jokes as memes.
Maybe someone could spin a successful commercial venture out of this type of activity. But I’m wondering whether you could build a “public service broadcasting” organisation that presented refined, edited, curated views of the commons? I think there’s certainly enough raw materials.
Global data infrastructure projects
The ODI have spent some time this year trying to bring into focus the fact that data is now infrastructure. In my view the best exemplar of a truly open piece of global data infrastructure is OpenStreetMap (OSM). A collaboratively maintained map of our world. Anyone can contribute. Anyone can use it.
OSM was set up to try to solve the issue that the UK’s mapping and location infrastructure was, and largely still is, tied up with complex licensing and commercial models. Rather than knocking at the door of existing data holders to convince them to release their data, OSM shows what you can deliver with the participation of a crowd of motivated people using modern technology.
It’s a shining example of the networked age we live in.
There’s no reason to think that this couldn’t be done in for other types of data, creating more publicly owned infrastructure. There are now many more ways in which people could contribute data to such projects. Whether that information is about themselves or the world around us.
Getting coverage and depth of data could also potentially be achieved very quickly. Costs to host and serve data are also dropping, so sustainability becomes more achievable.
And I also feel (hope?) there is a growing unease with so much data infrastructure being owned by commercial organisations. So perhaps there’s a movement towards wanting more of this type of collaboratively owned infrastructure.
Data infrastructure incubators
If you buy into the idea that we need more projects like OSM, then it's natural to start thinking about the common features of such projects: those that make them successful and sustainable. There are likely to be some common organisational patterns that can be used as a framework for designing these organisations. While focused on scholarly research, I think this is the best attempt at capturing those patterns that I've seen so far.
Given a common framework, it becomes possible to create incubators whose job it is to launch these projects and coach, guide and mentor them towards success.
So that is my third and final addition to Steinberg’s list: incubators that are focused not on the creation of the next start-up “unicorn” but on generating successful, global collaborative data infrastructure projects. Institutions whose goal is the creation of the next OpenStreetMap.
These types of project have a huge potential impact as they're not focused on a specific sector. OSM is relevant to many different types of application, and its data is used in many different ways. I think there's a lot more foundational data of this type which could and should be publicly owned.
I may be displaying my naivety, but I think this would be a nice thing to work towards.
I find all of these fascinating, for a variety of reasons:
- How do we identify and exclude deliberately fictional data when harvesting, aggregating and analysing data from the web? Credit to Ian Davis for some early thinking about attack vectors for spam in Linked Data. While I'd expect copyright easter eggs to become less frequent, they're unlikely to completely disappear. But we can definitely expect more and more deliberate spam and attacks on authoritative data. (Categories 1, 2, 3)
- Some of the most enthusiastic collectors and curators of data are those that are documenting fictional environments. Wikia is a small universe of mini-wikipedias, complete with infoboxes and structured data. What can we learn from those communities, and what better tools could we build for them? (Category 6)
This is interesting news and it's been covered in various places already, including this good overview at Programmable Web. I find it interesting because it's the first time that I can recall a public API being so visibly switched out for a closed, private alternative. Netflix will still offer an API, but only for a limited set of eight existing affiliates and (of course) their own applications. Private APIs have always existed and will continue to do so, but the trend to date has been about these being made public, rather than a move in the opposite direction.
It’s reasonable to consider if this might be the first of a new trend, or whether its just an outlier. Netflix have been reasonably forthcoming about their API design decisions so I expect many others will be reflecting on their decision and whether it would make sense for them.
But does it make sense at all?
If you read this article by Daniel Jacobson (Director of Engineering for the Netflix API) you can get more detail on the decision and some insight into their thought process. By closing the public API and focusing on a few affiliates, Jacobson suggests that they are able to optimise the API to fit the needs of those specific consumers. The article suggests that a fine-grained resource-oriented API is excellent for supporting largely un-mediated use by a wide range of different consumers with a range of different use cases. In contrast, an API that is optimised for fewer use cases and types of query may be able to offer better performance. An API with a smaller surface area will have lower maintenance overheads. Support overheads will also be lower because there are fewer interactions to consider and a smaller user base making them.
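The trade-off can be sketched in a few lines. This is purely illustrative, not Netflix's actual API: the resource names and fields are invented. But it shows why a generic resource-oriented API costs a client several round trips (and the provider a large support surface), while an "experience" endpoint does the same joins server-side for one known client screen.

```python
# Hypothetical catalogue backing both API styles. All names and
# fields here are invented for illustration.
CATALOG = {
    "titles": {1: {"name": "Example Show", "genre_ids": [10]}},
    "genres": {10: {"name": "Drama"}},
    "ratings": {1: {"stars": 4.5}},
}

def get_title(title_id):
    """Fine-grained resource: one entity per request."""
    return CATALOG["titles"][title_id]

def get_genre(genre_id):
    return CATALOG["genres"][genre_id]

def get_rating(title_id):
    return CATALOG["ratings"][title_id]

def title_page_fine_grained(title_id):
    """Generic resource-oriented style: the client makes three
    round trips and stitches the result together itself."""
    title = get_title(title_id)
    genres = [get_genre(g)["name"] for g in title["genre_ids"]]
    stars = get_rating(title_id)["stars"]
    return {"name": title["name"], "genres": genres, "stars": stars}

def title_page_experience(title_id):
    """'Experience' style: one endpoint shaped for one known screen.
    The server does the joins; the client makes a single call, but
    any consumer with different needs is out of luck."""
    return title_page_fine_grained(title_id)

print(title_page_experience(1))
```

The fine-grained style maximises the number of things unknown consumers can build; the experience style minimises latency and support costs for the handful of consumers you've already chosen. That's the business decision in miniature.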
That rationale is hard to argue with from either a technical or business perspective. If you have a small number of users driving most of your revenue and a long tail of users generating little or no revenue but with a high support cost, it mostly makes sense to follow the revenue. I don't buy all of the technical rationale though. It would be possible to support a mixture of resource types in the API, as well as a mixture of support and service level agreements. So I suspect the business drivers are the main rationale here. APIs have generally meant businesses giving up control; if Netflix are able to make this work then I would be surprised if more businesses don't do the same eventually, as a means to regain that control.
But by withdrawing from any kind of public API Netflix are essentially admitting that they don’t see any further innovation happening around their API: what they’ve seen so far is everything they’re going to see. They’re not expecting a sudden new type of usage to drive revenue and users to the service. Or at least not enough to warrant maintaining a more generic API. If they felt that the community was growing, or building new and interesting applications that benefited their business, they’d keep the API open. By restricting it they’re admitting that closer integration with a small number of applications is a better investment. It’s a standard vertical integration move that gives them greater control over all user experience with their platform. It wouldn’t surprise me if they acquired some of these applications in the future.
However, it all feels a bit short-sighted to me, as they're essentially withdrawing from the Web. They're no longer going to be able to benefit from any of the network effects of having their API be a part of the wider web and remixable (within their Terms of Service) with other services and datasets. Innovation will be limited to just those companies they're choosing to work with through an "experience" driven API. That feels like a bottleneck in the making.
It’s always possible to optimise a business and an API to support a limited set of interactions, but that type of close coupling inevitably results in less flexibility. Personally I’d be backing the Web.