People who read a book or take a seminar from Edward Tufte on data visualization come away with two things.

1. A really neat poster (by Charles Minard, from 1869, and a forerunner of the Sankey diagram).

2. Awe for the brilliance of Edward Tufte.

[Figure: Minard’s flow map of Napoleon’s 1812 Russian campaign]

What they do not always come away with is a sense of how to do data visualization with the data they work with day to day.

After all, that poster’s pretty impressive. That’s a chart showing the journey of Napoleon’s Grand Army to Moscow in 1812, and it manages to display a number of factors: distance, geographic landmarks, weather, army size (and therefore casualties and attrition), and time. You can only look at that and sigh. Your own data doesn’t tie together quite so neatly.

So, should you simply give up on data visualization, pin the poster on your cube wall, and forget all about it? Not at all.

It can be counterproductive to connect the practice of data visualization to a masterpiece like Minard’s. It’s like looking at the Sistine Chapel.  Most of us can’t even imagine where to start if we wanted to replicate it. What’s more, we don’t need to replicate it: we’re trying to communicate a business insight, not (usually) become known to generations of art lovers or statisticians.

So how do you do data visualization?

You do it step by step, just the way Minard did. (If you’re interested in how Minard got to his end product, here’s an interesting link that discusses it.)

Here’s a recent experience I had with this:

I was trying to chart critical incidents worked by an IT department. Thankfully, there weren’t many (11 in nine months), but on the negative side, it’s hard to get much insight out of so little data. Check this out:

[Figure: Line chart of critical incidents per month]

Yes, it’s a line chart (it could also have been done as a bar chart) where the values are integers between 0 and 2. You do get the general idea that we used to have one or two incidents a month and now we seem to be down to zero or one, and that’s a good thing, but you’ve probably also got a nagging feeling that we’re a little short of a good sample size, so you wouldn’t bet your career on that trend.
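For what it’s worth, a chart like that takes only a few lines to produce. Here’s a minimal sketch in Python with pandas and matplotlib; the incident dates are made up, since the real data isn’t reproduced here.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up incident dates standing in for the real ones (11 over nine months)
incident_dates = pd.to_datetime([
    "2015-01-08", "2015-01-27", "2015-02-05", "2015-02-20",
    "2015-03-16", "2015-04-02", "2015-04-25", "2015-05-14",
    "2015-07-09", "2015-08-21", "2015-09-10",
])

# Count incidents per calendar month; months with no incidents count as zero
monthly = pd.Series(1, index=incident_dates).resample("MS").sum()

fig, ax = plt.subplots()
ax.plot(monthly.index, monthly.values, marker="o")
ax.set_xlabel("Month")
ax.set_ylabel("Critical incidents")
plt.show()
```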

In standard terms, this is a measure, not a metric – this is simply a count of events, and – absent any context – it’s pretty light on meaning.  How can we turn this into a metric?

Here we bring in another factor – time. Time is shown above, since incidents are assigned to standard units of time (months), but we can do something else with time.  We can calculate the average time between these incidents and chart that. So, calculating that mean time between incidents (as a rolling six-month average, once there are six months of data to use in the calculation), and adding a target line, we get this chart:

[Figure: Rolling six-month mean time between incidents, with target line]

This is better. We’ve got a real metric now, and we’re showing some progress in it. The target line is also helpful: we couldn’t really have set a performance target on the line chart, but we can do that here.
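If you want to reproduce that calculation, here’s a rough sketch using the same made-up incident dates as the earlier snippet. The exact windowing behind the real chart isn’t spelled out here, so treat this as one plausible reading: for each month, once six months of history exist, average the gaps between incidents that fall in the trailing six-month window. The 30-day target is also just a placeholder.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Same made-up incident dates as in the earlier sketch
incident_dates = pd.to_datetime([
    "2015-01-08", "2015-01-27", "2015-02-05", "2015-02-20",
    "2015-03-16", "2015-04-02", "2015-04-25", "2015-05-14",
    "2015-07-09", "2015-08-21", "2015-09-10",
])

# Gaps (in days) between consecutive incidents, indexed by the later incident
gaps_days = incident_dates.to_series().diff().dt.days.dropna()

# For each month from the sixth onward, average the gaps that fall
# within the trailing six-month window
months = pd.date_range("2015-01-01", "2015-09-01", freq="MS")
mtbi = []
for month_start in months[5:]:
    window_start = month_start - pd.DateOffset(months=5)
    window_end = month_start + pd.offsets.MonthEnd(1)
    in_window = gaps_days[(gaps_days.index >= window_start)
                          & (gaps_days.index <= window_end)]
    mtbi.append(in_window.mean())

fig, ax = plt.subplots()
ax.plot(months[5:], mtbi, marker="o", label="Rolling 6-month MTBI (days)")
ax.axhline(30, linestyle="--", color="red", label="Target (hypothetical)")
ax.set_xlabel("Month")
ax.set_ylabel("Mean time between incidents (days)")
ax.legend()
plt.show()
```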

The main thing we get out of this is a quantifiable statement that we’re getting better at something; presumably, the increased scarcity of critical incidents says we’re doing a good job at preventing them. Admittedly, we could just be on a lucky streak, so again, we shouldn’t be celebrating too loudly yet.

What comes next? What new factor can we introduce here that will give us some more insight into whether we’re getting better or just lucky?  The other factor we can introduce now is the effort needed to resolve the critical incidents.

But how can we show that? We’ve got time on one axis and some sort of magnitude measurement of the number of issues on the other.  Where can the effort go?

Inspired by another graphic I’d seen recently, I came up with this:

[Figure: Incidents over time, with bubble size proportional to resolution effort]

Each incident is separately mapped to when it took place and given a bubble proportional in size to the amount of effort it took to resolve. This gives us a visual on the reduced occurrence of the incidents (without having to explain mean time between incidents or that little bit about a rolling six-month data sample). We get to see that not only are incidents less frequent, but they’re also taking less effort to resolve.  That’s got to be a good thing.
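Here’s a rough sketch of that kind of bubble chart. The dates and effort figures are made up, and parking every bubble on a single unlabeled row is just one way to read the description above.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up incidents: when they happened and how much effort (person-hours)
# it took to resolve them
incidents = pd.DataFrame({
    "date": pd.to_datetime(["2015-01-08", "2015-02-05", "2015-03-16",
                            "2015-04-25", "2015-05-14", "2015-07-09",
                            "2015-09-10"]),
    "effort_hours": [40, 36, 30, 28, 22, 12, 8],
})

fig, ax = plt.subplots()
ax.scatter(incidents["date"],
           [1] * len(incidents),               # one row: the date is the story
           s=incidents["effort_hours"] * 20,   # marker area scales with effort
           alpha=0.5)
ax.set_yticks([])                              # the y-axis carries no meaning here
ax.set_xlabel("When the incident occurred")
ax.set_title("Critical incidents, sized by effort to resolve")
plt.show()
```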

This isn’t perfect. We still don’t have much data, but the factors together show a good trend. Also, I’m not claiming that this is a particularly artistic chart: it’s just an example of how we can get more information into a simple graphic.

So I tried to go another step, adding the factor of the time it took to close the tickets (elapsed time, as opposed to the effort expended, which we’ve already shown). Using the left-hand axis for elapsed time, we get:

[Figure: Bubbles re-plotted with elapsed time to close on the left-hand axis]

Um… that additional factor actually makes the chart worse at telling the story. Elapsed time is correlated closely enough with effort that it doesn’t add much new information, and the additional axis takes away the visual impact of the extended horizontal spacing of the bubbles. If you squint, maybe you see a sort of trend line going down and to the right, but it’s really an illusion. The additional factor made things worse.
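For completeness, here’s roughly how that two-axis variant could be drawn, again with invented numbers; the elapsed-day values are deliberately made to track effort closely, which is exactly why the extra axis doesn’t earn its keep.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Same made-up incidents, now with an invented elapsed time to close
incidents = pd.DataFrame({
    "date": pd.to_datetime(["2015-01-08", "2015-02-05", "2015-03-16",
                            "2015-04-25", "2015-05-14", "2015-07-09",
                            "2015-09-10"]),
    "effort_hours": [40, 36, 30, 28, 22, 12, 8],
    "elapsed_days": [9, 8, 7, 6, 5, 3, 2],     # tracks effort closely
})

fig, ax = plt.subplots()
ax.scatter(incidents["date"], incidents["elapsed_days"],
           s=incidents["effort_hours"] * 20, alpha=0.5)
ax.set_xlabel("When the incident occurred")
ax.set_ylabel("Elapsed days to close")
plt.show()
```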

Sometimes you have to know when to stop.

I’ve had further ideas of color-coding the bubbles by incident type, or labeling them, and that could add some valuable data to the chart. We’ll work on that next. After all, data visualization is an incremental process, not something where a spark of genius hits and we see an entire solution in the blink of an eye.

Originally published by John Kackley on LinkedIn