Quick tips for chairing remote meetings

There’s a growing set of useful resources and guidance to help people run better remote meetings. I’ve been compiling a list of a few of them. At the risk of repeating other, better advice, I’m going to write down some brief tips for running remote meetings.

For a year or so I was chairing fortnightly meetings of the OpenActive standards group. Those meetings were an opportunity to share updates with a community of collaborators, get feedback on working documents and have debates and discussions around a range of topics. So I had to get better at doing it. I’m not sure whether I did, but here are a few things I learned.

I’ll skip over general good meeting etiquette (e.g. around circulating an agenda and working documents in advance), to focus on the remote bits.

  1. Give people time to arrive. Just because everyone is attending remotely doesn’t mean that everyone will be able to arrive promptly. They may be working through technical difficulties, for example. Build in a bit of deliberate slack time at the start of the meeting; I usually allowed around 5-10 minutes. As people arrive, greet them and let them know this is happening. You can then either chat as a group, or people can switch to email, etc. while waiting for things to start.
  2. Call the meeting to order. Make it clear when the meeting is formally starting and you’ve switched from general chat and waiting for late arrivals. This will help ensure you have people’s attention.
  3. Use the tools you have as a chair. Monitor the side chat. Monitor the video feeds to check whether people look like they have something to say. And, most importantly, mute people who aren’t speaking but are typing or have lots of background noise. You can usually avoid the polite dance of asking people to do that, or suffering in silence, by using the option to mute them yourself. Just tell them you’ve done it. I usually set up Zoom meetings so that people were muted on entry.
  4. Do a roll call. Ask everyone to introduce themselves at the start. Don’t just ask the whole group to do it at once, as they’ll talk over each other. Go through people individually and ask them to say hello or do an introduction. This helps with putting voices to names (if not everyone is on video), ensures that everyone knows how to mute/unmute and puts some structure to the meeting.
  5. Be aware of when people are connecting in different ways. Some software, like Zoom, allows people to join in several ways. Be aware of when you have people on the phone as well as on video, especially if you’re presenting material. Try to circulate links either before or during the meeting so everyone can see them.
  6. Use slides to help structure the meeting. I found that screen-sharing a set of slides covering the agenda and key talking points helps give people a sense of where you are in the meeting. So, for example, if you have four items on your agenda, have a slide for each item, with key questions or decision points. It helps to focus discussion, keeps people’s attention on the meeting (rather than a separate doc) and gives people a sense of where you are. The latter is especially helpful if people are joining late.
  7. Don’t be afraid of a quick recap. If people join late, give them a quick recap of where you’re at and ask them to introduce themselves. I often did this if people joined a few minutes late, but not if they dropped in 30 minutes into a one-hour meeting.
  8. Don’t be afraid of silence or of directly asking people questions. Chairing remote meetings can be stressful and awkward for everyone. It can be particularly awkward to ask a question and then sit in silence. Often this is because people are worried about talking over each other, or they just need time to think. Don’t be afraid of a bit of silence. Doing a roll call to ask everyone individually for feedback can be helpful if you want to make decisions. Check in on people who haven’t said anything for a while. It’s slow, but it provides some order for everyone.
  9. Keep to time. I tried very hard not to let meetings over-run even if we didn’t cover everything. People have other events in their calendars. Video and phone calls can be tiring. It’s better to wrap up at a suitable point and follow up on things you didn’t get to cover than to have half the meeting drop out at the end.
  10. Follow up afterwards. Make sure to follow up afterwards, especially if not everyone was able to attend. For OpenActive we record the calls and share those, along with a summary of the discussion points.

Those are the things I consciously tried to get better at, and I think they helped meetings go more smoothly.

GUIDE, a retrospective

“Tyntesfield servants’ bells” by Caroline. CC-BY-NC-ND licence. https://www.flickr.com/photos/carolineld/4608720906/

This article was first published in the February 2030 edition of Sustain magazine. Ten years since the public launch of GUIDE we sit down with its designers to chat about its origin and what’s made it successful.

It’s a Saturday morning and I’m sitting in the bustling cafe at Tyntesfield house, a National Trust property south of Bristol. I’m enjoying a large pot of tea and a slice of cake with Joe Shilling and Gordon Leith, designers of one of the world’s most popular social applications: GUIDE. I’d expected to meet somewhere in the city, but Shilling suggested this as a suitable venue. It turns out Tyntesfield plays a part in the origin story of GUIDE, so it’s fitting that we are here for the tenth anniversary of its public launch.

SHILLING: “Originally we were just playing. Exploring the design parameters of social applications.”

He stirs the pot of tea while Leith begins sectioning the sponge cake they’ve ordered.

SHILLING: “People did that more in the early days of the web. But Twitter, Facebook, Instagram…they just kind of sucked up all the attention and users. It killed off all that creativity. For a while it seemed like they just owned the space…But then TikTok happened…”

He pauses while I nod to indicate I’ve heard of it.

SHILLING: “…and small experiments like Yap. It was a slow burn, but I think a bunch of us started to get interested again in designing different kinds of social apps. We were part of this indie scene building and releasing bespoke social networks. They came and went really quickly. People just enjoyed them whilst they were around.”

Leith interjects around a mouthful of cake:

LEITH: “Some really random stuff. Social nets with built-in profile decay so they were guaranteed to end. Made them low commitment, disposable. Messaging services where you could only post at really specific, sometimes random times. Networks that only came online when their members were at precise geographic coordinates. Spatial partitioning to force separation of networks for home, work and play. Experimental, ritualised interactions.”

SHILLING: “The migratory networks grew out of that movement too. They didn’t last long, but they were intense. ”

LEITH: “Yeah. Social networks that just kicked into life around a critical mass of people. Like in a club. Want to stay a member…share the memes? Then you needed to be in its radius. In the right city, at the right time. And then keep up as the algorithm shifted it. Social spaces herding their members.”

SHILLING: “They were intense and incredibly problematic. Which is why they didn’t last long. But for a while there was a crowd that loved them. Until the club promoters got involved and then that commercial aspect killed it.”

RENT-SEEKING

GUIDE had a very different starting point. Flat-sharing in Bristol, the duo needed money. Their indie credibility was high, but what they were looking for was a more mainstream hit with some likelihood of revenue. The break-up of Facebook and the other big services had created an opportunity which many were hoping to capitalise on. But investment was a problem.

LEITH: “We wrote a lot of grant proposals. Goal was to use the money to build out decent code base. Pay for some servers that we could use to launch something bigger”.

Shilling pours the tea, while Leith passes me a slice of cake.

SHILLING: “It was a bit more principled than that. There was plenty of money for apps to help with social isolation. We thought maybe we could build something useful, tackle some social problems, work with a different demographic than we had before. But, yeah, we had our own goals too. We had to take what opportunities were out there.”

LEITH: “My mum had been attending this Memory Skills group. Passing around old photos and memorabilia to get people talking and reminiscing. We thought we could create something digital.”

SHILLING: “We managed to land a grant to explore the idea. We figured that there was a demographic that had spent time connecting not around the high street or the local football club. But with stuff they’d all been doing online. Streaming the same shows. Revisiting old game worlds. We thought those could be really useful touch points and memory triggers too. And not everyone can access some of the other services.”

LEITH: “Mum could talk for hours about Skyrim and Fallout”.

SHILLING: “So we prototyped some social spaces based around that kind of content. It was during the user testing that we had the real eye-opener”.

“Memory Box” by judy_and_ed. CC-BY-NC. https://www.flickr.com/photos/65924740@N00/18516079841/

ITERATIONS

The first iterations of the app that ultimately became GUIDE were pretty rough. Shilling and Leith have been pretty open about their early failures.

LEITH: “The first iteration was basically a Twitch knock-off. People could join the group remotely, chat to each other and watch whatever the facilitator decided to stream.”

SHILLING: “Engagement was low. We didn’t have cash to license a decent range of content. The facilitators needed too much training on the streaming interface and real-time community management.”

LEITH: “I then tried getting a generic game engine to boot up old game worlds, so we could run tours. But the tech was a nightmare to get working. Basically needed different engines for different games”

SHILLING: “Some of the users loved it, mainly those that had the right hardware and were already into gaming. But it didn’t work for most people. And again…I…we were worried about licensing issues”

LEITH: “So we started testing a customised, open source version of Yap. Hosted chat rooms, time-limited rooms and content embedding…that ticked a lot of boxes. I built a custom index over the Internet Archive, so we could use their content as embeds”.

SHILLING: “There’s so much great stuff that people love in the Internet Archive. At the time, not many services were using it. Just a few social media accounts. So we made using it a core feature. It neatly avoided the licensing issues. We let the alpha testers run with the service for a while. We gave them and the memory service facilitators tips on hosting their own chats. And basically left them to it for a few weeks. It was during the later user testing that we discovered they were using it in different ways than we’d expected.”

Instead of having conversations with their peer groups, the most engaged users were using it to chat with their families. Grandparents showing their grandchildren stuff they’d watched, listened to, or read when they were younger.

SHILLING: “They were using it to tell stories”

Surrounded by the bustle in the cafe, we pause to enjoy the tea and cake. Then Shilling gestures around the room.

SHILLING: “We came here one weekend. To get out of the city. Take some time to think. They have these volunteers here. One in every room of the house. People just giving up their free time to answer any questions you might have as you wander around. Maybe, point out interesting things you might not have noticed? Or, if you’re interested, tell you about some of the things they love about the place. It was fascinating. I realised that’s how our alpha testers were using the prototype…just sharing their passions with their family.”

LEITH: “So this is where GUIDE was born. We hashed out the core features for the next iteration in a walk through the grounds. Fantastic cake, too.”

“Walkman and mix tapes” by henry… CC-BY-NC-ND. https://www.flickr.com/photos/henrybloomfield/5136897807/

MEMORY PALACE

The familiar, core features of GUIDE have stayed roughly the same since that day.

Anyone can become a Guide and create a Room which they can use to curate and showcase small collections of public domain or openly licensed content. But no more than seven videos, photos, games or whatever else you can embed from the Internet Archive. Room contents can be refreshed once a week.

Each Room admits a maximum of five Visitors at a time. Everyone else waits in a lobby, with new Visitors admitted every twenty minutes. Only Guides have audio feeds, which they use to chat to Visitors. Visitors, in turn, can only interact with Guides via a chat interface that requires building up messages, mostly questions, from a restricted set of words and phrases that Guides can tweak for their specific Room. Each Visitor is limited to one question every five minutes.

LEITH: “The asymmetric interface, lobby system and cool-down timers were lifted straight from games. I looked up the average number of grandchildren people had. Turns out it’s about five, so we used that to size Rooms. The seven item limit was because I thought it was a lucky number. We leaned heavily on the Internet Archive’s bandwidth early on for the embeds, but we now mirror a lot of stuff. And donate, obviously.”

SHILLING: “The restricted chat interface has helped limit spamming and moderation. No video feeds from Guides means that the focus stays on the contents of the Room, not the host. Twitch had some problematic stuff which we wanted to avoid. I think it’s more inclusive.”

LEITH: “Audio only meant the ASMR crowd were still happy though”.

Today there are tens of thousands of Rooms. Shilling shows me a Room where the Guide gives tours of historical maps of Bath, mixing in old photos for context. Another, “Eleanor’s Knitting Room”, curates knitting patterns, its Guide alternating between knitting tips and cultural critiques.

Leith has a bookmarked collection of retro-gaming Rooms: Doom WAD teardowns and classic speed-run analyses, for the most part.

In my own collection, my favourite is a Room showing a rota of Japanese manhole cover designs, the Guide an expert on Japanese art and infrastructure. I often have this one on a second screen whilst writing. The lobby wait time is regularly over an hour. Shilling asks me to share that one with him.

LEITH: “There are no discovery tools in GUIDE. That was deliberate from the start. Strictly no search engine. Want to find a Room? You’ll need to be invited by a Guide or grab a link from a friend”.

SHILLING: “Our approach has been to allow the service to grow within the bounds of existing communities. We originally marketed the site to family groups, and an older demographic. The UK and US were late adopters, the service was much more popular elsewhere for a long time. Things really took off when the fandoms grabbed hold of it.”

An ecosystem of recommendation systems, reviews and community Room databases has grown up around the service. I asked whether that defeated the purpose of not building those into the core app.

LEITH: “It’s about power. If we ran those features then it would be our algorithms. Our choice. We didn’t want that.”

SHILLING: “We wanted the community to decide how to best use GUIDE as social glue. There’s so many more creative ways in which people interact with and use the platform now”.

The two decline to get into discussion of the commercial success of GUIDE. It’s well-documented that the two have become moderately wealthy from the service. More than enough to cover that rent in the city centre. Shilling only touches on it briefly:

SHILLING: “No ads and a subscription-based service has kept us honest. The goal was to pay the bills while running a service we love. We’ve shared a lot of that revenue back with the community in various ways”.

Photo by Jacques Bopp on Unsplash. https://unsplash.com/photos/pvtA7r3jBTc

SLOW WEB

GUIDE can be situated within the Slow Web movement. There are a host of services offering quieter online experiences. Videos of walks through foreign cities. Live feeds from orbiting satellites and VR outposts mounted on marine buoys and in wild locations around the world. Social features as bolt-ons. But GUIDE’s focus on the curation of small spaces, storytelling and shared discovery sets it apart.

Of course, all of this was possible before. YouTube and Twitch supported broadcasts and streaming for years, and many people used them in similar ways. But the purposeful design of a more dedicated interface highlights how constraints can shape a community and spark creativity. Removal of many of the asymmetries inherent in the design of those older platforms has undoubtedly helped.

While we finished the last of the tea, I asked them what they thought made the service successful.

SHILLING: “You can find, watch and listen to any of the material that people are sharing in GUIDE on the open web. Just Google it. But I don’t think people just want more content. They want context. And it’s people that bring that context to life. You can find Rooms now where there’s a relay of Guides running 24×7. Each Guide highlighting different aspects of the exact same collection. Costume design, narrative arcs and character bios. Historical and cultural significance. Personal stories. There’s endless context to discover around the same content. That’s what fandoms have understood for years.”

LEITH: “People just like stories. We gave them a place to tell them. And an opportunity to listen.”

Long live RSS! How I manage my reading

“LONG LIVE RSS!”

I shout these words from my bedroom window every morning. Reaffirming my love for this century’s most criminally neglected data standard.

If you’ve either forgotten, or never enjoyed, the ease of managing your information consumption via the magic of RSS and a feed reader, then you’re missing out mate.

Struggling with the noise, gloom and general bombast of social media? Get yourself a feed reader and fill it full of interesting subscriptions for a most measured and sedate way to consume words.

Once upon a time everyone(*) used them. We engaged in educated discourse, shared blog rolls, sent trackbacks and wrote comments on each other’s websites. Elegant weapons for a more civilized age (**).

I like to read things when I have time, to reduce distractions and give myself a chance to absorb several viewpoints rather than simply the latest, hottest takes.

I’ve fine-tuned my approach to managing my reading and research. A few of the tools and services have changed, but the essentials stay the same. If you’re interested, here’s how I’ve made things work for me:

  • Feedbin
    • Manages all my subscriptions for blogs, newsletters and more into one easily accessible location
    • Lots of sites still support RSS; it’s not dead, merely resting
    • Feedbin is great at discovering feeds if you just paste in a site URL. One of the magic parts of RSS (there’s a rough sketch of how that works after this list)
    • You can also subscribe to newsletters with a special Feedbin email address and they’ll get delivered to your reader. Brilliant. You’re not making me go back into my inbox, it’s scary in there.
  • Feedme. Feedbin allows me to read posts anywhere, but I use this Android app (there are others) as a client instead
    • Regularly syncs with Feedbin, so I can have all the latest unread posts on my phone for the commute or an idle few minutes
    • It provides a really quick interface to skim through posts and either immediately read them or add them to my “to read” list, in Pocket…
  • Pocket. Mobile and web app that I basically use as a way to manage a backlog of things “to read”.
    • Gives me a clutter free (no ads!) way to read content either in the browser (which I rarely do) or on my phone
    • It has its issues with some content, but you can easily switch to a full web view
    • Not everything I want to read comes in via my feed reader, so I take links from Slack, Twitter or elsewhere and use the Pocket browser extension or its share button integration to stash things away for later reading. Basically, if it’s not a 1-2 minute read it goes into Pocket until I’m ready for it. Keeps the number of browser tabs under control too.
    • The offline content syncing makes it great for using on my commute, especially on the tube
  • IFTTT. I use this service to do two things:
    • Once I archive something in Pocket, it automatically gets added to Pinboard for me, using the right tags.
    • If I favourite something it tweets out the link without me having to go and actually look at twitter
  • Pinboard. Basically a complete archive of articles I’ve read.
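If you’re curious what that feed-discovery magic actually involves, it’s mostly just reading the <link rel="alternate"> tags that well-behaved sites include in their HTML. Here’s a minimal sketch of the idea (not Feedbin’s actual code, and assuming the requests and beautifulsoup4 packages):

```python
# Minimal sketch of RSS/Atom feed autodiscovery: fetch a page and return the
# feed URLs advertised in its <link rel="alternate"> tags.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}


def discover_feeds(site_url):
    """Return the feed URLs advertised by the page at site_url."""
    html = requests.get(site_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    feeds = []
    for link in soup.find_all("link"):
        rel = link.get("rel") or []
        rels = rel if isinstance(rel, list) else rel.split()
        if "alternate" in rels and link.get("type") in FEED_TYPES and link.get("href"):
            # hrefs are often relative, so resolve them against the page URL
            feeds.append(urljoin(site_url, link["href"]))
    return feeds


if __name__ == "__main__":
    print(discover_feeds("https://example.com/"))
```

Paste in a blog’s homepage URL and you’ll usually get back one or two feed URLs, which is all a reader needs to subscribe.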

The end result is a fully self-curated feed of interesting stuff. I’m no longer fighting someone else’s algorithm, so I can easily find things again.

I can minimise the number of organisations I’m following on twitter, and just subscribe to their blogs. Also helps to buck the trend towards more email newsletters, which are just blogs but you’re all in denial.

Also helps to reduce the number of distractions, and fight the pressure to keep checking on twitter in case I’ve missed something interesting. It’ll be in the feed reader when I’m ready.

Long live RSS!

It’s about time we stopped rebooting social networks and rediscovered more flexible ways to create, share and read content online. Go read

Say it with me. Go on.

LONG LIVE RSS!

(*) not actually everyone, but all the cool kids anyway. Alright, just us nerds, but we loved it.

(**) not actually more civilised, but it was more decentralised


Enabling data forensics

I’m interested in how people share information, particularly data, on social networks. I think it’s something to which it’s worth paying attention, so we can ensure that it’s easy for people to share insights and engage in online debates.

There’s lots of discussion at the moment around fact checking and similar ways that we can improve the ability to identify reliable and unreliable information online. But there may be other ways that we can make some small improvements in order to help people identify and find sources of data.

Data forensics is a term that usually refers to analysis of data to identify illegal activities. But the term does have a broader meaning that encompasses “identifying, preserving, recovering, analyzing, and presenting attributes of digital information”. So I’m going to appropriate the term to put a label on a few ideas.

The design of the Twitter and Facebook platforms constrain how we can share information. Within those constraints people have, inevitably, adopted various patterns that allow them to publish and share content in preferred ways. For example, information might be shared:

  1. As a link to a page, where the content of the tweet or post is just the title
  2. As a link to a page, but with a comment and/or hashtags for context
  3. As a screenshot, e.g. of some text, chart or something. This usually has some commentary attached. Some apps enable this automatically, allowing you to share a screenshot of some highlighted text
  4. As images and photographs, e.g. of printed page or report (or even sometimes a screenshot of text from another app)

In the first two examples there are always links that allow someone to go and read the original content. In fact that seems to be the typical intention: go read (or watch) this thing.

The other two examples are usually workarounds for the fact that it’s often hard to deep link to a section of a page or video.

Sometimes it’s just not possible because the information of interest isn’t in a bookmarkable section of a page. Or perhaps the user doesn’t know how to create that kind of deep link. Or they may be further constrained by a mobile app or other service that restricts their ability to easily share a link. Not every application lets the web happen.

In some cases screenshotting may also be a conscious choice, e.g. posting a photo of someone’s tweet because you don’t want to directly interact with them.

Whatever the reason, this means there is usually no link in the resulting post, which often makes it difficult for a reader to find the original content. While social media is reducing friction in sharing, it’s increasing friction around our ability to check the reliability and accuracy of what’s been shared.

If you tweet out a graph with some figures in a debate, I want to know where it’s come from. I want to see the context that goes with it. The ability to easily identify the source of shared content is, I think, part of “data forensics”.

So, what can we do to fix this?

Firstly, there’s more that could be done to build better ways to deep link into pages, e.g. to allow sharing of individual page elements. But people have been trying to do that on and off for years without much visible success. It’s a hard problem, particularly if you want to allow someone to link to a piece of text. It could be time for a standards body to have another crack at it. Or I might have missed some exciting progress, so please tell me if I have! But I think something like this would need some serious push behind it. You’d need support not just from web frameworks and the major CMS platforms, but also (probably) from browser vendors.

Secondly, Twitter and Facebook could allow us some more flexibility. For example, allow apps to post additional links and/or other metadata that are then attached to posts and tweets. It won’t address every scenario, but it could help. It also feels like a relatively easy thing for them to do, as it’s a natural extension of some existing features.

Thirdly, we could look at ways to attach data to the images people are posting, regardless of what the platforms support. I’ve previously wondered about using XMP packets to attach provenance and attribution information to images. Unfortunately it doesn’t work for every format and it turns out that most platforms strip embedded metadata anyway. This is presumably due to reasonable concerns around privacy, but they could still white-list some metadata. We could maybe use steganography too.
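To make that concrete, here’s a rough sketch of the kind of thing I have in mind: writing a Dublin Core source field into an image’s XMP packet before sharing it, by shelling out to the exiftool command-line tool (assumed to be installed). The filenames and URLs are invented, and, as noted above, most platforms will currently strip this on upload anyway:

```python
# Sketch: stamp an image with provenance by writing Dublin Core fields into
# its XMP packet via the exiftool CLI, then read the source URL back out.
import subprocess


def stamp_provenance(image_path, source_url, creator):
    """Embed a source link and creator name in the image's XMP metadata."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-dc:Source={source_url}",  # where the chart/figure came from
            f"-XMP-dc:Creator={creator}",    # who produced it
            "-overwrite_original",
            image_path,
        ],
        check=True,
    )


def read_provenance(image_path):
    """Return the embedded source URL (empty if it has been stripped)."""
    result = subprocess.run(
        ["exiftool", "-s3", "-XMP-dc:Source", image_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    stamp_provenance("spending-chart.png", "https://example.org/report#fig3", "Example Org")
    print(read_provenance("spending-chart.png"))
```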

But the major downside here is that you’d need a custom social media client or browser extension to let you see and interact with the data. So, again, that’s a massive deployment issue.

As things currently stand I think the best approach is to plan for visualisations and information to be shared, and design the interactions and content accordingly. Assume that your carefully crafted web page is going to be shared in a million different pieces. Which means that you should:

  • Include plenty of in-page anchors and use clear labelling to help people build links to relevant sections
  • Adapt your social media sharing buttons to not just link to the whole page, but also allow the user to share a link to a specific section (there’s a rough sketch of the idea after this list)
  • Design your Twitter cards and other social metadata; for example, is there a key graphic that would be best used as the page image?
  • Include links and source information on all of the graphs and infographics that you share. Make sure the link is short and persistent in case it has to be re-keyed from a screenshot
  • Provide direct ways to tweet and share out a graph that will automatically include a clearly labelled image that contains a link
  • Help users cite their sources
  • …etc
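As a small illustration of the share-button points above, here’s a sketch of building a share link that deep links to a specific section of a page rather than to the page as a whole, using Twitter’s web intent URL. The page URL, anchor id and caption are invented:

```python
# Sketch of a section-aware "share this" link: the tweet carries a deep link
# to one section's anchor, plus a caption labelling what is being shared.
from urllib.parse import urlencode


def tweet_link(page_url, section_id, caption):
    """Build a Twitter web-intent URL that shares a link to one page section."""
    deep_link = f"{page_url}#{section_id}"
    query = urlencode({"text": caption, "url": deep_link})
    return f"https://twitter.com/intent/tweet?{query}"


print(tweet_link(
    "https://example.org/spending-report",
    "fig-3-gp-referrals",
    "GP referrals per 1,000 patients, 2010-2016",
))
```

Wiring something like this to a button next to each chart means the link, and therefore the context, has a better chance of travelling with whatever gets shared.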

What do you think? Any tips or suggestions you’d add to this list? With a bit of awareness around how data is shared, we might be able to make small improvements to online discussions.

The British Hypertextual Society (1905-2017)

With their globe-spanning satellite network nearing completion, Peter Linkage reports on some of the key milestones in the history of the British Hypertextual Society.

The British Hypertextual Society was founded in 1905 with a parliamentary grant from the Royal Society of London. At the time there was growing international interest in finding better ways to manage information, particularly scientific research. Undoubtedly the decision to invest in the creation of a British centre of expertise for knowledge organisation was also influenced by the rapid progress being made in Europe.

Paul Otlet’s Universal Bibliographic Repertory and his ground-breaking postal search engine were rapidly demonstrating their usefulness to scholars. Otlet’s team had begun publishing the first version of their Universal Decimal Classification only the year before. Letters between Royal Society members during that period demonstrate concern that Britain was losing the lead in knowledge science.

As you might expect, the launch of the British Hypertextual Society (BHS) was a grand affair. The centrepiece of the opening ceremony was the Babbage Bookwheel Engine, which remains on show (and in good working order!) in their headquarters to this day. The Engine was commissioned from Henry Prevost Babbage, who refined a number of his father’s ideas to automate and improve on Ramelli’s Bookwheel concept.

While it might originally have been intended only as a centrepiece, it was the creation of this Engine that laid the groundwork for many of the Society’s later successes.

Competition between the BHS members and Otlet’s team in Belgium encouraged the rapid development of new tools. This included refinements to the Bookwheel Engine, prompting its switch from index cards to microfilm. Ultimately it was also instrumental in the creation of the United Kingdom’s national grid and the early success of the BBC.

In the 1920s, in an effort to improve on the Belgian Postal Search Service, the British Government decided to invest in its own solution. This involved reproducing decks of index cards and microfilm sheets that could be easily interchanged between Bookwheel Engines. The new, standardised electric engines were dubbed “Card Wheels”.

The task of distributing the decks and the machines to schools, universities and libraries was given to the recently launched BBC as part of its mission to inform, educate and entertain. Their microfilm version of the Domesday book was the headline grabbing release, but the BBC also freely distributed a number of scholarly and encyclopedic works.

Problems with the reliable supply of electricity to parts of the UK hampered the roll-out of the Card Wheels. This led to the Electricity (Supply) Act of 1926 and the creation of the Central Electricity Board. This simultaneously laid the foundations for a significant cabling infrastructure that would later carry information to the nation in digital forms.

These data infrastructural improvements were mirrored by a number of theoretical breakthroughs. Drawing on Ada Lovelace’s work and algorithms for the Difference Engine, British Hypertextual Society scholars were able to make rapid advances in the area of graph theory and analysis.

These major advances in the distribution of knowledge across the United Kingdom led to Otlet moving to Britain in the early 1930s. A major scandal at the time, this triggered the end of many of the projects underway in Belgium and beyond. Awarded a senior position in the BHS, Otlet transferred his work on the Mundaneum to London.

Close ties between the BHS members and key government officials meant that the London we know today is truly the “World City” envisioned by Otlet. It’s interesting to walk through London and consider how so much of the skyline and our familiar landmarks are influenced by the history of hypertext.

The development of the Memex in the 1940s laid the foundations for both home and personal hypertext devices. Combining the latest mechanical and theoretical achievements of the BHS with some American entrepreneurship led to devices rapidly spreading into people’s homes. However, the device was the source of some consternation within the BHS, as it was felt that British ideas hadn’t been properly credited in the development of that commercial product.

Of course we shouldn’t overlook the importance of the InterGraph in ensuring easy access to information around the globe. Designed to resist nuclear attack, the InterGraph used graph theory concepts developed by the BHS to create a world-wide mesh network between hypertext devices and sensors. All of our homes, cars and devices are part of this truly distributed network.

Tim Berners-Lee’s development of the Hypertext Resource Locator was initially seen as a minor breakthrough. But it actually laid the foundations for the replacement of Otlet’s classification scheme and accelerated the creation of the World Hypertext Engine (WHE) and the global information commons. Today the WHE is ubiquitous. It’s something we all use and contribute to on a daily basis.

But, while we all contribute to the WHE, it’s the tireless work of the “Controllers of The Graph” in London that ensures that the entire knowledge base remains coherent and reliable. How else would we distinguish between reliable, authoritative sources and information published by any random source? Their work to fact check information, manage link integrity and ensure maintenance of core assets is a key feature of the WHE as a system.

Some have wondered what an alternate hypertext system might look like. Scholars have pointed to ideas such as Ted Nelson’s “Xanadu” as one example of an alternative system. Indeed it is one of many that grew out of the counter-culture movement in the 1960s. Xanadu retained many of the features of the WHE as we know it today, e.g. transclusion and micro-transactions, but removed the notion of a centralised index and register of content. This not only removed the ability to have reliable, bi-directional links, but would also have allowed anyone to contribute anything, regardless of its veracity.

For many it’s hard to imagine how such a chaotic system would actually work. Xanadu has been dismissed as “a foam of ever-popping bubbles”. And a heavily commercialised and unreliable system of information is a vision to which few would subscribe.

Who would want to give up the thrill of seeing their first contributions accepted into the global graph? It’s a rite of passage that many reflect on fondly. What would the British economy look like if it were not based on providing access to the world’s information? Would we want to use a system that was not fundamentally based on the “Inform, Educate and Entertain” ideal?

This brings us to the present day. The launch of a final batch of satellites will allow the British Hypertextual Society to deliver on a long-standing goal whilst also enabling its next step into the future.

Launched from the British space centre at Goonhilly, each of the standardised CardSat satellites carries both a high-resolution camera and an InterGraph mesh network node. The camera will be used to image the globe in unprecedented detail. This will be used to ensure that every key geographical feature, including every tree and many large animals can be assigned a unique identifier, bringing them into the global graph. And, by extending the mesh network into space the BHS will ensure that the InterGraph has complete global coverage, whilst also improving connectivity between the fleet of British space drones.

It’s an exciting time for the future of information sharing. Let’s keep sharing what we know!

A river of research, not news

I already hate the phrase “fake news”. We have better words to describe lies, disinformation, propaganda and slander, so let’s just use those.

While the phrase “fake news” might originally have been used to refer to hoaxes and disinformation, it’s rapidly becoming a meaningless term used to refer to anything you don’t agree with. Trump’s recent remarks being a case in point: unverified news is something very different.

Of course this is all on a sliding scale. Many news outlets breathlessly report on scientific research. This can make for fun, if eye-rolling, reading. Advances in AI and the discovery of alien mega-structures are two examples that spring to mind.

And then there’s the way in which statistics and research are given a spin by newspapers or politicians. This often glosses over key details in favour of getting across a political message or point scoring. Today I was getting cross about Theresa May’s blaming of GPs for the NHS crisis. Her remarks are based on a report recently published by the National Audit Office. I haven’t seen a single piece of coverage link to the NAO press release or the high-level summary (PDF), so you’ll either have to accept their remarks or search for it yourself.

Organisations like Full Fact do an excellent job of digging into these claims. They link the commentary to the underlying research or statistics alongside a clear explanation. In the same vein is NHS Choices’ Behind the Headlines, which fills a similar role but focuses on the reporting of medical and health issues.

There’s also a lot of attention focused on helping to surface this type of fact checking and explanation via search results. Fact checking, properly digging into statistics and presenting them clearly, is, I suspect, a time-consuming exercise. Especially if you’re hoping to present a neutral point of view.

What I think I’d like though is a service that brings all those different services together. To literally give me the missing links between research, news and commentary.

But rather than aggregating news articles or fact checking reports to give me a feed, or what we used to call a “river of news”, why not present a river of research instead? Let me see the statistics or reports that are being debated and then let me jump off to see the variety of commentary and fact checking associated with them.

That way I could choose to read the research or a summary of it, and then decide to look at the commentary. Or, more realistically, I could at least see the variety of ways in which a specific report is being presented, described and debated. That would be a useful perspective I think. It would shift the focus away from individual outlets and help us find alternative viewpoints.

I doubt that this would become anyone’s primary way to consume the news. But it could be interesting to those of us who like to dig behind the headlines. It would also be useful as a research tool in its own right. In the face of a consistent lack of interest from news outlets in linking to primary sources, this might be something that could be crowd-sourced.

Does this type of service already exist? I suspect there are similar efforts around academic research, but I don’t recall seeing anything that covers a wider set of outputs including national and government statistics.


Checking Fact Checkers

As of last month Google News attempts to highlight fact check articles. Content from fact checking organisations will be tagged so that their contribution to on-line debate can be more clearly identified. I think this is a great move and a first small step towards addressing wider concerns around use of the web for disinformation and a “post truth” society.

So how does it work?

Firstly, news sites can now advertise fact checking articles using a pending schema.org extension called ClaimReview. The mark-up allows a fact checker to indicate which article they are critiquing along with a brief summary of what aspects are being reviewed.
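Roughly speaking, the markup looks something like the sketch below. I’ve built it here as a Python dict that serialises to the JSON-LD a fact checker would embed in their article; the URLs, claim and rating are invented for illustration:

```python
# Illustrative ClaimReview metadata: which claim is being checked, where it
# appeared, who reviewed it and what verdict they reached.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example.org/checks/gp-funding",
    "datePublished": "2017-01-12",
    "claimReviewed": "GP funding has doubled over the last parliament",
    "itemReviewed": {
        "@type": "CreativeWork",
        "url": "https://news.example.com/politics/gp-funding-claim",
    },
    "author": {
        "@type": "Organization",
        "name": "Example Fact Checkers",
        "url": "https://factchecker.example.org/",
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly false",
    },
}

print(json.dumps(claim_review, indent=2))
```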

Metadata alone is obviously ripe for abuse. Anyone could claim any article is a fact check. So there’s an additional level of editorial control that Google layers on top of that metadata. They’ve outlined their criteria in their help pages. These seem perfectly reasonable: it should be clear what facts are being checked, sources must be cited, organisations must be non-partisan and transparent, etc.

It’s the latter aspect that I think is worth digging into a little more. The Google News announcement references the International Fact Checking Network and a study on fact checking sites. The study, by the Duke Reporter’s Lab, outlines how they identify fact checking organisations. Again, they mention both transparency of sources and organisational transparency as being important criteria.

I think I’d go a step further and require that:

  • Google’s (and others’) lists of approved fact checking organisations are published as open data (a hypothetical register entry is sketched after this list)
  • The lists are cross-referenced with identifiers from sources like OpenCorporates that will allow independent verification of ownership, etc.
  • Fact checking organisations publish open data about their sources of funding and affiliations
  • Fact checking organisations publish open data, perhaps using Schema.org annotations, about the dataset(s) they use to check individual claims in their articles
  • Fact checking organisations licence their ClaimReview metadata for reuse by anyone
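To show what I mean, here’s a hypothetical entry from such an open register. Every identifier and URL is invented; the point is that each entry links out to independently verifiable data about the organisation, its funding and its fact checks:

```python
# Hypothetical entry in an open register of approved fact checking
# organisations, linking to company records, funding data and ClaimReview feeds.
import json

register_entry = {
    "name": "Example Fact Checkers",
    "homepage": "https://factchecker.example.org/",
    "approved_by": ["Google News"],
    "opencorporates_id": "https://opencorporates.com/companies/gb/00000000",
    "funding_disclosure": "https://factchecker.example.org/about/funding",
    "claimreview_feed": "https://factchecker.example.org/claimreview.json",
    "metadata_licence": "https://creativecommons.org/licenses/by/4.0/",
}

print(json.dumps(register_entry, indent=2))
```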

Fact checking is an area that benefits from the greatest possible transparency. Open data can deliver that transparency.

Another angle to consider is that fact checking may be carried out by more than just media organisations. Jon Udell has written a couple of interesting pieces on annotating the wild-west of information flow and bird-dogging the web that highlight the potential role of annotation services in helping to fact check and create constructive debate and discussion on-line.