I miss being able to look people in the eye

What even is time, anymore?

I’ve seen and made many variations of this joke across Slack, twitter and meetings this week. Remote working and social isolation have disrupted all of our routines and left us feeling adrift. But, for those of us lucky enough to have good connectivity, we’re certainly not talking to or seeing each other any less. I’ve ended several days this week hoarse from talking.

The number of people playing with avatars, virtual backgrounds and buying green screens speaks to the level of engagement with video meetings and chat. Of course, there’s also the memes.

By the way, Disney are sharing a nice line in backgrounds. But I have my own favourites.

In team catch-ups this week, a few people have remarked how, despite all the meetings and check-ins, they just didn’t feel as engaged. Key decisions or outcomes were not sinking in. People struggled to remember who was on a particular call. This isn’t surprising. Neither the general situation nor the medium we’re using is really great for focus and connection.

The comments have made me more conscious of the limitations of the software we’re using.

For example, one of the nice features of Zoom is the “gallery view”, which lets you see everyone on the call. Or at least it does until your call is so large that you end up with several pages of attendees. It makes it really easy to read the room when chairing. Contrast that with Hangouts, which doesn’t have the same feature. This makes it so much harder to gauge reactions in a discussion, identify people who want to raise questions, or even just catch when someone has had a connectivity problem.

General presence notifications are also a problem. In a drop-in meeting this week, it was only a little way into the call that I realised that we had 17 people in the discussion. That level of participation was so much easier to gauge when we were all sat around tables in the office kitchen.

We tried out Remo recently too. It has a cute office layout that facilitates break-out discussions and you can easily move between chats. I think it’s great for some types of meetings. But it didn’t create quite the same atmosphere for having drinks with the team as a raucous, messy hangout.

I think the thing that I’ve personally been struggling with is that you can’t look anyone in the eye on a video call.

Now, I’m usually terrible at looking people in the eye. In a conversation with me, you’ll find I’m typically looking around as I’m talking. It helps me think. Although when I’m listening, I’m much more attentive to others. But being able to look someone in the eye to read their reactions, look for agreement, or just to enjoy a joke is something that we can’t easily do at the minute. And I miss it.

Some people struggle with direct eye contact. Some people like the freedom to look away, fidget or play with a stress toy when listening. We’re all wired differently. Eye contact isn’t always necessary or desirable. But there’s lots of research exploring the effects of eye contact, which notes some potential impacts on memory and prosocial behaviour.

While tools like Zoom need to fix their security flaws before adding features, I’m hoping this period will lead to more user research and product development. So that we have much better and more secure tools. There’s plenty of room for innovation. Although like others I don’t think that attention correction is what we need. But I’d love to read more about interesting experiments with online presence and remote working tools.

It’s important to remember – as ever when we choose to make something digital – that many of these challenges are a fact of life for people with disabilities, who may be relying on remote participation in events and meetings.

In the meantime there are a few things we can all do to improve our meetings. Choose the right tool. Find ways to stay in contact with everyone on the call. Take notes. Share key decisions afterwards (duh!)

And, if you’re using multiple monitors, maybe put the video call on the same screen as your webcam. Or think about putting your webcam near your screen. Then we can at least glance in each other’s direction.

Long live RSS! How I manage my reading

“LONG LIVE RSS!”

I shout these words from my bedroom window every morning. Reaffirming my love for this century’s most criminally neglected data standard.

If you’ve either forgotten, or never enjoyed, the ease of managing your information consumption via the magic of RSS and a feed reader, then you’re missing out, mate.

Struggling with the noise, gloom and general bombast of social media? Get yourself a feed reader and fill it full of interesting subscriptions for a most measured and sedate way to consume words.

Once upon a time everyone(*) used them. We engaged in educated discourse, shared blog rolls, sent trackbacks and wrote comments on each other’s websites. Elegant weapons for a more civilized age (**).

I like to read things when I have time, to reduce distractions and give me a chance to absorb several viewpoints rather than simply the latest, hottest takes.

I’ve fine-tuned my approach to managing my reading and research. A few of the tools and services have changed, but the essentials stay the same. If you’re interested, here’s how I’ve made things work for me:

  • Feedbin
    • Manages all my subscriptions for blogs, newsletters and more in one easily accessible location
    • Lots of sites still support RSS; it’s not dead, merely resting
    • Feedbin is great at discovering feeds if you just paste in a site URL. One of the magic parts of RSS (there’s a sketch of how that works after this list)
    • You can also subscribe to newsletters with a special Feedbin email address and they’ll get delivered to your reader. Brilliant. You’re not making me go back into my inbox, it’s scary in there.
  • Feedme. Feedbin allows me to read posts anywhere, but I use this Android app (there are others) as a client instead
    • Regularly syncs with Feedbin, so I can have all the latest unread posts on my phone for the commute or an idle few minutes
    • It provides a really quick interface to skim through posts and either immediately read them or add them to my “to read” list, in Pocket…
  • Pocket. Mobile and web app that I basically use as a way to manage a backlog of things “to read”.
    • Gives me a clutter free (no ads!) way to read content either in the browser (which I rarely do) or on my phone
    • It has its issues with some content, but you can easily switch to a full web view
    • Not everything I want to read comes in via my feed reader so I take links from Slack, Twitter or elsewhere and use the Pocket browser extension or its share button integration to stash things away for later reading. Basically, if it’s not a 1-2 minute read it goes into Pocket until I’m ready for it. Keeps the number of browser tabs under control too.
    • The offline content syncing makes it great for using on my commute, especially on the tube
  • IFTTT. I use this service to do two things:
    • Once I archive something in Pocket, it automatically adds it to Pinboard for me, using the right tags (there’s a sketch of this after the list).
    • If I favourite something, it tweets out the link without me having to go and actually look at twitter
  • Pinboard. Basically a complete archive of articles I’ve read.
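
As an aside, for anyone curious about the plumbing: here’s a minimal sketch of the feed “autodiscovery” trick that lets a reader like Feedbin turn a plain site URL into a subscription. It leans on the standard RSS/Atom link tags most blogs include in their HTML. The requests and feedparser packages and the example URL are my own assumptions for the sketch, not part of any particular reader’s implementation.

```python
# A minimal sketch of RSS/Atom feed autodiscovery plus reading the feed.
# Assumes the third-party "requests" and "feedparser" packages are installed.
import re

import feedparser
import requests

# Match the autodiscovery <link> tags most blogs put in their <head>.
FEED_LINK = re.compile(r'<link[^>]+type="application/(?:rss|atom)\+xml"[^>]*>', re.I)
HREF = re.compile(r'href="([^"]+)"')


def discover_feed(site_url):
    """Return the first RSS/Atom feed URL advertised by the page, or None.

    Relative hrefs would need resolving against site_url (urljoin); omitted
    here to keep the sketch short.
    """
    html = requests.get(site_url, timeout=10).text
    for tag in FEED_LINK.findall(html):
        match = HREF.search(tag)
        if match:
            return match.group(1)
    return None


def latest_entries(feed_url, limit=5):
    """Yield (title, link) pairs for the most recent posts in the feed."""
    for entry in feedparser.parse(feed_url).entries[:limit]:
        yield entry.get("title", "(untitled)"), entry.get("link", "")


if __name__ == "__main__":
    feed = discover_feed("https://example.com")  # placeholder site URL
    if feed:
        for title, link in latest_entries(feed):
            print(title, "->", link)
```

And here’s a rough sketch of what that IFTTT recipe is doing behind the scenes: when something is archived in Pocket, save it to Pinboard via Pinboard’s v1 posts/add endpoint. The Pocket side is left out because fetching your archive needs your own consumer key and access token; the token below is a placeholder.

```python
# Rough sketch of the "archived in Pocket -> saved to Pinboard" automation.
import requests

PINBOARD_TOKEN = "username:API_TOKEN"  # placeholder; from your Pinboard settings


def add_to_pinboard(url, title, tags="from-pocket to-read"):
    """Save a link to Pinboard with some tags so it's easy to find again."""
    response = requests.get(
        "https://api.pinboard.in/v1/posts/add",
        params={
            "url": url,
            "description": title,   # Pinboard calls the title "description"
            "tags": tags,
            "auth_token": PINBOARD_TOKEN,
            "format": "json",
        },
        timeout=10,
    )
    response.raise_for_status()


# e.g. for each (url, title) pair pulled from Pocket's archive:
# add_to_pinboard("https://example.com/post", "An interesting post", "data rss")
```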

The end result is a fully self-curated feed of interesting stuff. I’m no longer fighting someone else’s algorithm, so I can easily find things again.

I can minimise the number of organisations I’m following on twitter, and just subscribe to their blogs. It also helps to buck the trend towards more email newsletters, which are just blogs but you’re all in denial.

Also helps to reduce the number of distractions, and fight the pressure to keep checking on twitter in case I’ve missed something interesting. It’ll be in the feed reader when I’m ready.

Long live RSS!

It’s about time we stopped rebooting social networks and rediscovered more flexible ways to create, share and read content online. Go read

Say it with me. Go on.

LONG LIVE RSS!

(*) not actually everyone, but all the cool kids anyway. Alright, just us nerds, but we loved it.

(**) not actually more civilised, but it was more decentralised

 

Under construction

It’s been a while since I posted a more personal update here. But, as I announced this morning, I’ve got a new job! I thought I’d write a quick overview of what I’ll be doing and what I hope to achieve.

I’ve been considering giving up freelancing for a while now. I’ve been doing it on and off since 2012 when I left Talis. Freelancing has given me a huge amount of flexibility to take on a mixture of different projects. Looking back, there are a lot of projects I’m really proud of. I’ve worked with the Ordnance Survey, the British Library and the Barbican. I helped launch a startup which is now celebrating its fifth birthday. And I’ve had far too much fun working with the ONS Digital team.

I’ve also been able to devote time to helping lead a plucky band of civic hackers in Bath. We’ve run free training courses, built an energy-saving application for schools and mapped the city. Amongst many other things.

I’ve spent a significant amount of time over the last few years working with the Open Data Institute. The ODI is five and I think I’ve been involved with the organisation for around 4.5 years. Mostly as a part-time associate, but also for a year or so as a consultant. It turned out that wasn’t quite the right role for me, hence the recent dive back into freelancing.

But over that time, I’ve had the opportunity to work on a similarly wide-ranging set of projects. I’ve researched how election data is collected and used and learnt about weather data. I’ve helped to create guidance around open identifiers, licensing, and open data policies.  And explored ways to direct organisations on their open data journey. I’ve also provided advice and support to startups, government and multi-national organisations. That’s pretty cool.

I’ve also worked with an amazing set of people. Some of those people are still at the ODI and others have now moved on. I’ve learnt loads from all of them.

I was pretty clear about what type of work I wanted to do in a more permanent role. Firstly, I wanted to take on bigger projects. There’s only so much you can do as an independent freelancer. Secondly, I wanted to work on “data infrastructure”. While collectively we’ve only just begun thinking through the idea of data as infrastructure, looking back over my career it’s a useful label for the types of work I’ve been doing, the majority of which has involved looking at applications of data, technology, standards and processes.

I realised that the best place for me to do all of that was at the ODI. So I’ve seized the opportunity to jump back into the organisation.

My new job title is “Data Infrastructure Programme Lead”. In practice this means that I’m going to be:

  • helping to develop the ODI’s programme of work around data infrastructure, including the creation of research, standards, guidance and tools that will support the creation of good data infrastructure
  • taking on product ownership for certificates and pathway, so we’ve got a way to measure good data infrastructure
  • working with the ODI’s partners and network to support them in building stronger data infrastructure
  • building relationships with others who are working on building data infrastructure in the public and private sectors, so we can learn from one another

And no doubt, a whole lot of other things besides!

I’ll be working closely with Peter and Olivier, as my role should complement theirs. And I’m looking forward to spending more time with the rest of the ODI team, so I can find ways to support and learn more from them all.

My immediate priorities will be working on standards and tools to help build data infrastructure in the physical activity sector, through the OpenActive project, and leading on projects looking at how to build better standards and how to develop collaborative registers.

I’m genuinely excited about the opportunities we have for improving the publication and use of data on the web. It’s a topic that continues to occupy a lot of my attention. For example, I’m keen to see whether we can build a design manual for data infrastructure. Or improve governance around data through analysing existing sources. Or whether mapping data ecosystems and diagramming data flows can help us understand what makes a good data infrastructure. And a million other things. It’s also probably time we started to recognise and invest in the building blocks for data infrastructure that we’ve already built.

If you’re interested in talking about data infrastructure, then I’d love to hear from you. You can reach me on twitter or email.

Experiences with the Freestyle Libre

We’ve been using the Freestyle Libre for just over a year now to help my daughter manage her Type-1 diabetes. I wanted to share a few thoughts about how well it’s been working for us. I had lots of questions at the start, so I wanted to capture what we’ve learned in case it’s useful for anyone else.

I’m writing this as a parent, rather than as a person with experience of wearing a sensor or the emotional cost of dealing with diabetes. Do not take anything I write here as medical advice; this is just a summary of our experience with the sensors.

My daughter is now 13. It was her decision to trial the sensor and hers to continue its use.

Cost & Shipping

Firstly, the Libre is not currently available on the NHS. But I believe it’s under review. This means we’re paying for the sensor ourselves. We’re lucky enough to be able to afford that, but not everyone is able to do so.

To use the sensors you need a reader (£57.95) and then the sensors themselves. The sensors are priced at £57.95 each, but this includes VAT. When completing an order, if you’re buying the sensor for yourself to help manage your diabetes, or for a family member, then you can fill in a disclaimer and the VAT is waived. For our last sensor order we paid £48.29 per sensor. Sensors last a maximum of 14 days (see below), so on average you will be paying around £24 a week.

Shipping is quick and you’ll pay around £5-6 for postage. We buy ours in packs of 5 as that covers around 10 weeks’ worth of usage and reduces postage costs.

When we first bought the sensors we bought a pack of 10. I wouldn’t advise this as the sensors do have a use-by date, so you can’t just stock up.

Lifetime

Once fitted, a sensor lasts 14 days maximum. You can’t choose to wear it for longer: the reader will no longer collect data from a sensor 14 days after it’s been activated.

While we’ve had a full 14 days from many of the sensors, in some cases they can come off early. They’re pretty secure once fitted, and in the optimum location on the back of the upper arm they are generally out of the way. But we’ve also had a number that haven’t lasted that long. They can be knocked off. We’ve also had to put tape over some sensors that have started to come off the skin.

When travelling we generally take a spare as well as manual blood testing equipment. See below.

Fitting the sensor

Fitting the sensor is straightforward. The arm is swabbed with an alcohol wipe to clean the skin, then the sensor is pushed into the arm using a single-use applicator that comes with each sensor. They can be fitted in a couple of minutes. After activation using the reader it takes an hour before the first readings are available.

The applicator makes a clunking sound as the sensor is injected into the skin. It’s a bit like using a hole punch. Martha occasionally has some pain and soreness but that passes quickly.

On one occasion I’ve had a sensor fail to attach properly. This was because I tried to apply the sensor too soon after using the alcohol wipe. I’d recommend letting the skin fully dry before application to ensure the sticky pad adheres properly.

You can shower and swim when wearing the sensor.

Travelling with the sensor

Travelling with diabetes isn’t easy. Airports aren’t generally welcoming to people carrying bags of needles and vials of liquid.

Our understanding is that the sensors won’t go through metal detectors, but might be OK for X-Rays. As a minimum you’ll need to inform security if you or a family member is wearing a sensor. As our daughter now also wears an insulin pump, on our last few flights we’ve ended up having to opt out of all scans. This involves some headaches as you might imagine, but staff in the UK and elsewhere have so far been very helpful.

There is probably better advice online. Our experience is relatively limited here.

Removing the sensor

The sensors are fairly easy to peel off and remove, although the glue takes some scrubbing to get off. There’s no needle in the device, just a hair-thin sensor. The applicators can go in the bin, but we put the sensors in our sharps bin.

Sometimes the skin under the sensor can be a bit inflamed, but we’ve not had any serious side effects or issues.

Taking readings

To collect readings from the sensor you just scan it with the Reader. It works through clothing, so it’s very easy to do.

The sensor collects readings every 15 minutes automatically and stores up to 8 hours of readings. All of the stored readings are automatically downloaded to the Reader whenever you scan it.

If you have an NFC enabled phone then you can collect readings using the LibreLink app. There’s also a LinkUp app to share readings with family members.

How the sensor helps to manage diabetes

The sensor removes the need to do routine finger prick tests. Martha no longer has to take blood glucose readings before meals, we can just scan her sensor and then work out the necessary dose and any correction.

Now that she is also using an insulin pump it’s really just a matter of scanning and then entering the data into the pump. It will work out any necessary corrections. The combination of the sensor and the pump has made an incredible difference to the routines of managing diabetes. For the better.

However, using the Libre doesn’t mean that you can give up finger pricks completely. The sensor has limited accuracy with blood glucose levels below 4 or above 14 mmol/L. Outside of those ranges you must still do a finger prick test to ensure that you have an accurate reading for treating hypo- or hyperglycemia.

Accuracy of the sensor

Our biggest challenge when starting to use the Libre was understanding its differences from routine finger prick tests. This made us very wary about its accuracy initially. I’ll try to explain why.

If you want a detailed review of the Libre’s accuracy, then you can read this scientific paper which summarises a controlled test of the Libre. It helps to demonstrate the accuracy and reliability of the sensors, but may be too detailed for some people.

When you perform a finger prick test you are directly measuring the amount of glucose in your blood. But the Libre isn’t testing your blood glucose. The Libre sensor is testing the fluid between the cells in your skin. That fluid is known as interstitial fluid.

Interstitial fluid, its nutrients and oxygen are replenished from your bloodstream. This means that you’re only indirectly testing your blood glucose. It takes time for glucose to pass from your blood into the fluid. Roughly speaking, a measurement from the sensor is around 5-10 minutes behind your actual blood glucose level. If you’re running low on the sensor, your blood glucose might be even lower. And vice versa.

This explains why you need to finger prick when you’re low or high: you need to be treating your actual levels. On a routine basis, this delay isn’t an issue. It’s only when you’re particularly low or high that you may need to be more vigilant. This also explains why you need to travel with a full set of equipment and not just replacement sensors.

While there are delays, the fact that the Libre is constantly recording means that whenever you scan you’re getting an updated graph of your glucose levels, not just a single reading. The Reader will show you the graph and also give you an idea of whether your levels are steady, rising (or falling) slowly, or rising (or falling) rapidly. That makes a massive amount of difference.

When we started testing the Libre we were doing routine finger pricks as well. The end result was a bit like wearing several watches, each of which is showing a slightly different time. We felt like we wouldn’t be able to trust the sensor because it was so often at odds with the blood glucose readings. The fact that this was also happening at a time when Martha’s levels were particularly erratic didn’t help: with highly variable blood glucose levels, you can feel one step behind.

Once we committed to using the Libre as our means of routine testing, everything was fine. You just need to be aware of the differences. Martha’s HbA1c levels demonstrate that we’re able to manage her glucose levels effectively.

One additional issue to be aware of is that it takes time for the sensors to bed in. A sensor won’t start reporting readings until after it’s been on the skin for an hour. But we, and others, have found that it can take some time after that before readings seem reliable. Some sensors seem to work fine straight away, others seem a bit variable.

We’ve not had an issue with a sensor never settling down; they’re normally fine after a few hours. But it’s often hard to tell: is it the sensor, or just a particularly variable set of glucose levels?

We’ve heard that some people using the Libre install a new sensor 24 hours before the previous one runs out, to allow time for it to settle in. We’ve not found it necessary to do that.

Conclusion

Type 1 diabetes is an incredibly difficult condition to live with. I have nothing but admiration for how well Martha is dealing with it. She is my hero.

The Libre has made a significant difference to her (and our) quality of life. Removing the need for routine use of needles greatly reduces the number of medical interventions we have to make every day. The ability to easily scan to get a reading of glucose levels makes it easier for Martha in all aspects of her daily life. It’s much less obtrusive than finger pricking.

As parents it’s easy for us to check on her levels when she’s sleeping. A quick scan is all it takes. An integrated sensor and pump might be even better, but the smaller size of the Libre sensors makes it perfectly adequate for now.

I hope the Libre becomes more widely available on the NHS so that more people can benefit from it. I also hope this article has been useful. We’re very happy to answer any other questions. Leave a comment or drop me an email.

From services to products

Over the course of my career I’ve done a variety of consulting projects as both an employee and a freelancer. I’ve helped found and run a small consulting team. And, through my experience leading engineering teams, I’ve gained some experience of designing products and platforms. I’ve been involved in a few discussions, particularly over the last 12 months or so, around how to generate repeatable products off the back of consulting engagements.

I wanted to jot down a few thoughts here based on my own experience and a bit of background reading. I don’t claim to have any special insight or expertise, but the topic is one that I’ve encountered time and again. And as I’m trying to write things down more frequently, I thought I’d share my perspective in the hope that it may be useful to someone wrestling with the same issues.

Please comment if you disagree with anything. I’m learning too.

What are Products and Services?

Let’s start with some definitions.

A service is a bespoke offering that typically involves a high level of expertise. In a consulting business you’re usually selling people or a team who have a particular set of skills that are useful to another organisation. While the expertise and skills being offered are common across projects, the delivery is usually highly bespoke and tailored to the needs of the specific client.

The outcomes of an engagement are also likely to be highly bespoke as you’re delivering to a custom specification. Custom software development, specially designed training packages, and research projects are all examples of services.

A product is a packaged solution to a known problem. A product will be designed to meet a particular need and will usually be designed for a specific audience. Products are often, but not always, software. I’m ignoring manufacturing here.

Products can typically be rapidly delivered as they can be installed or delivered via a well-defined process. While a product may be tailored for a specific client, it is usually very well defined. Product customisation is usually a service in its own right. As is product support.

The Service-Product Spectrum

I think it’s useful to think of services and products as being at opposite ends of a spectrum.

At the service end of the spectrum, your offerings:

  • are highly manual, because you’re reliant on expert delivery
  • are difficult to scale, because you need to find the people with the skills and expertise which are otherwise in short supply
  • have low repeatability, because you’re inevitably dealing with bespoke engagements

At the product end of the spectrum your offerings are:

  • highly automated, because you’re delivering a software product or following a well defined delivery process
  • scalable, because you need fewer (or at least different) skills to deliver the product
  • highly repeatable, because each engagement is well defined, has a clear life-cycle, etc.

Products are a distillation of expertise and skills.

Actually, there’s arguably a stage before service. Let’s call those “capabilities”, to borrow a phrase. These are skills and expertise that you have within your team but which you’ve not yet sold. I think it’s a common mistake to draw up lists of capabilities, rather than services or products.

The best way to test whether your internal capabilities are useful to others is to speak to as many potential customers as possible. And one of the best ways to develop new products is to undertake a number of bespoke engagements with those customers to understand where the opportunities lie for creating a repeatable solution. Many start-ups use consulting engagements as discovery tools.

Why Productise?

There are many obvious reasons why you’d start to productise a service:

  • to allow your business to scale. Consulting businesses can only scale with people, product businesses can scale to the web.
  • to make your engagements more repeatable, so that you can deliver a consistent quality of output
  • to distil learning and expertise in such a way as to support the training and development of junior staff, and grow the team
  • to ensure business continuity, so you’re less reliant on individual consultants
  • to reduce costs, by allowing more junior staff to contribute to some or all of an engagement, with check-lists, standard processes and internal review stages providing the appropriate quality controls
  • to focus on a specific market. Tailoring your service to a specific sector can help target your sales and marketing effort
  • to more easily measure impacts. Products solve problems and, when manifested as software, can be instrumented to collect metrics on usage and hopefully impacts.

Because they have a bounded scope, products are easier to optimise to maximise revenue or impacts. Or both.

A Product Check-list

By my definition above, a product will:

  1. solve a specific well-defined problem
  2. be targeted at a specific customer or audience
  3. be deliverable via a well-documented process, which may be partially or completely automated
  4. be deliverable within a well-defined time scale
  5. be priced according to a tried and tested pricing model

If you can’t meet at least the first three of these criteria then I’d argue that what you have is still a bespoke service. And if you’ve not sold it at all then all you have is a capability or at best an idea.

Products evolve from client engagements.

Approaches to Productisation

Some organisations will be using consulting engagements as a means to identify user needs and/or as a means to fund development of a software product or platform.

But developing a product doesn’t necessarily involve building software, although I think some form of automation is likely to be a component of a more repeatable, productised service.

You might start productising a service simply by documenting your last engagement. The next time you do a similar engagement you can base it on your previous most successful project. As you continue you’re likely to iterate on that process to start to distil it into a check-list or methodology. Ideally the process should start from pre-sales and run through to final delivery.

There’s already been lots written about lean product development, the importance of adding metrics (which can include measuring product progress), and the care you need to take when extrapolating from the needs of early adopters to later customers. I already feel like I’m stating the obvious here when there’s a wealth of existing product development literature, so we’ll skip over that.

But I’ll also note that there’s (of course!) a lot of overlap between what I’m outlining here and the discovery phase of service design. The difference is really just in how you’re being funded.

I’d argue that taking an iterative approach is important even for freelancers or small consulting firms. Even if your end goal isn’t a software product. It’s how you get better at what you do. Retrospectives, ideally involving the client, are another useful technique to adopt from agile practices.

But productisation also takes effort. You can iterate in small steps to improve, but you need to build in the time to do that. Even a small amount of reflection and improvement will pay dividends later.

Open data and diabetes

In December my daughter was diagnosed with Type 1 diabetes. It was a pretty rough time. Symptoms can start and escalate very quickly. Hyperglycaemia and ketoacidosis are no joke.

But luckily we have one of the best health services in the world. We’ve had amazing care, help and support. And, while we’re only 4 months into dealing with a life-long condition, we’re all doing well.

Diabetes sucks though.

I’m writing this post to reflect a little on the journey we’ve been on over the last few months, from a professional rather than a personal perspective. Basically, the first weeks of becoming a diabetic, or the parent of a diabetic, are a crash course in physiology, nutrition, and medical monitoring. You have to adapt to new routines for blood glucose monitoring, learn to give injections (and teach your child to do them), become good at book-keeping, plan for exercise, and remember to keep needles, lancets, monitors, emergency glucose and insulin with you at all times, whilst ensuring prescriptions are regularly filled.

Oh, and there’s a stupid amount of maths, because you’ll need to start calculating how much carbohydrate is in all of your meals and inject accordingly. No meal unless you do your sums.
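
To give a sense of the sums, here’s a toy illustration of the basic mealtime arithmetic. The carbohydrate values and the insulin-to-carb ratio below are invented for the example; real ratios are individual and prescribed by a diabetes team, so treat this as arithmetic, not guidance.

```python
# Toy illustration of carb counting; all numbers are made up for the example.
GRAMS_PER_UNIT = 10  # hypothetical ratio: 1 unit of insulin per 10g of carbohydrate

meal = {
    "pasta (75g dry)": 53.0,  # grams of carbohydrate, read from the packet
    "tomato sauce": 8.0,
    "garlic bread": 18.0,
}

total_carbs = sum(meal.values())     # 79g of carbohydrate in the meal
dose = total_carbs / GRAMS_PER_UNIT  # 7.9 units, before any correction dose

print(f"{total_carbs:.0f}g of carbs at 1u/{GRAMS_PER_UNIT}g -> {dose:.1f} units")
```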

Good job we had that really great health service to support us (there’s data to prove it). And an amazing daughter who has taken it all in her stride.

Diabetics live a quantified life. Tightly regulating blood glucose levels means knowing exactly what you’re eating, and learning how your body reacts to different foods and levels of exercise. For example, we’ve learnt the different ways that a regular school day versus school holidays affects my daughter’s metabolism. That we need to treat ahead for the hypoglycaemia that follows a few hours after some fun on the trampoline. And that certain foods (cereals, risotto) seem to affect insulin uptake.

So to manage the condition we need to know how many carbohydrates are in:

  • any pre-packaged food my daughter eats
  • any ingredients we use when cooking, so we can calculate a total portion size
  • any snack or meal that we eat out

Food labelling is pretty good these days so the basic information is generally available. But it’s not always available on menus or in an easy to use format.

The book and app that diabetic teams recommend is called Carbs and Cals. I was a little horrified by it initially as it’s just a big picture book of different portion sizes of food. You’re encouraged to judge everything by eye or weight. It seemed imprecise to me, but with hindsight it’s perfectly suited to those early stages of learning to live with diabetes. No hunting over packets to get the data you need: just look at a picture, a useful visualisation. Simple is best when you’re overwhelmed with so many other things.

Having tried calorie counting I wanted to try an app to more easily track foods and calculate recipes. My Fitness Pal, for example, is pretty easy to use and does bar-code scanning of many foods. There are others that are more directly targeted at diabetics.

The problem is that, as I’ve learnt from my calorie counting experiments, the data isn’t always accurate. Many apps fill their databases through crowd-sourcing. But recipes and portion sizes change continually. And people make mistakes when they enter data, or enter just the bits they’re interested in. Look up any food on My Fitness Pal and you’ll find many duplicate entries. It makes me distrust the data because I’m concerned it’s not reliable. So for now we’re still reading packets.

Eating out is another adventure. There have been recent legislative changes to require restaurants to make more nutritional information available. If you search you may find information on a company website and can plan ahead. Sometimes it’s only available if you contact customer support. If you ask in a (chain) restaurant they may have it available in a ring-binder you can consult with the menu. This doesn’t make a great experience for anyone. Recently we’ve been told in a restaurant to just check online for the data (when we know it doesn’t exist), because they didn’t want to risk any liability by providing information directly. On another occasion we found that certain dishes – items from the children’s menu – weren’t included on the nutritional charts.

Basically, the information we want is:

  • often not available at all
  • available, but only if you know where to look or who to ask
  • potentially out of date, as it comes from non-authoritative sources
  • incomplete or inaccurate, even from the authoritative sources
  • not regularly updated
  • not in easy to use formats
  • available electronically, e.g. in an app, but without any clear provenance

The reality is that this type of nutritional and ingredient data is basically in the same state as government data was 6-7 years ago. It’s something that really needs to change.

Legislation can help encourage supermarkets and restaurants to make data available, but really it’s time for them to recognise that this is essential information for many people. All supermarkets, manufacturers and major chains will have this data already, so there should be little effort required in making it public.

I’ve wondered whether this type of data ought to be considered as part of the UK National Information Infrastructure. It could be collected as part of the remit of the Food Standards Agency. Having a national source would help remove ambiguity around how data has been aggregated.

Whether you’re calorie or carb counting, open data can make an important difference. It’s about giving people the information they need to live healthy lives.

Getting that learning fix

I’ve been doing some domain-modelling with an arts organisation recently. The domain model that we’re working on will help underpin a new version of their website.

We gathered some domain experts from across the business and ran some workshops to start capturing how they think about their domain, what they do, what the outputs are, etc.

Data modelling isn’t something that most people do. The process of trying to build a model of the world is, to varying degrees, new to them. Just as understanding the nuances of different types of art events and curating exhibitions is new to me.
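
For a flavour of the kind of output, here’s a hypothetical first-pass sketch of an arts-events model, written as Python dataclasses purely for illustration. The entities and fields are invented; they aren’t the organisation’s actual model.

```python
# Hypothetical first-pass domain model for an arts organisation (invented names).
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Artist:
    name: str


@dataclass
class Work:
    title: str
    artist: Artist
    medium: str = "unspecified"


@dataclass
class Exhibition:
    title: str
    opens: date
    closes: date
    curator: str = ""
    works: list = field(default_factory=list)  # list of Work


# A quick check that the model can describe a plausible example:
portrait = Work("Self-portrait", Artist("A. Painter"), medium="oil on canvas")
show = Exhibition("Faces", date(2020, 3, 1), date(2020, 6, 30), works=[portrait])
```

Even a sketch this small tends to surface the useful questions: is an exhibition the same thing as an event, can a work appear in more than one exhibition, and who counts as a curator?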

So there’s been an element of me teaching them about what I do — the geeky stuff — alongside them teaching me about what they do. By coming to more of a shared understanding we can collectively get more from the exercise.

I love that process.

I love learning stuff, particularly in domains that are very different to the one in which I often operate. You don’t often get the chance to sit with a bunch of domain experts and hear them discuss what they do at great length.

I also love that light-bulb moment when you can see people suddenly get what you’re teaching them. It’s like you can actually see them levelling up in front of your eyes.

(I’ve been trying to rationalise what, on the surface, seem to be two very divergent interests: a love of teaching & a love of coding; whatever the reason, it probably explains why I keep doing different roles).

I then got to thinking about how so many of the events that run these days are largely domain based: domain experts talking to each other. Not people teaching each other new things, maybe in wildly different domains. I guess Ignite events might be closest, but I’ve never been to one. They also seem like they’re highly structured and involve calls to action rather than teaching and knowledge sharing, but I might be wrong.

So what kind of events am I missing? Where can I get my learning fix, or is there scope for a new type of “geek” event?