A lot has been written about information quality recently. But how do we measure it?
There is a perception out there that information quality is generally very poor. But how do we know? There's all that digital content out there, but most of it is never read, and only a fraction of that is ever translated (and translators are often the only readers of that content). Just because a user guide isn't read doesn't mean the information within is poor quality. Perhaps the accompanying software's usability is so spectacularly good that the guide isn't needed? Who knows?
In the GILT (globalization, internationalization, localization, and translation) industry, information quality is all too often assessed only in terms of the cost, time, and effort to produce and then translate content (and usually one function is measured in isolation from the other). We have all kinds of metrics about "spend" published, time-to-market statistics analyzed, the opinions of professional linguists and terminologists debated, complicated mathematical formulae promulgated (trust me, if you reach that level you've clearly no real work to do), QA checklists written, certifications from standards bodies waved under our noses, and all the rest, in an attempt to define information quality. All good, though how efficient or applicable to the real world some of these things are is debatable.
However, often there’s a decider of information quality that is missing from these methodologies: the user of the information.
We need to move the key determinant of information quality to the user community: engaging users, and researching, analyzing, and understanding how they search for, retrieve, and use information to solve problems. For example, what search keywords and phrases do they use? Which pages do they read the most? Which parts of those pages are read, and how? And so on.
The tools and opportunities for this are everywhere. Ever studied web server logs? Done any eye tracking studies (see image below) before and after an information quality project? Conducted comprehension studies on the outputs? Observed how real users consume information? Found out what terminology they actually use when helping each other on support forums, and when they customize and extend software? Reviewed what keywords they use for searching, or analyzed user comments?
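To make the server-log idea concrete, here is a minimal sketch of tallying the search phrases users actually type. The log lines, the `/search` path, and the `q` parameter are all hypothetical assumptions for illustration; a real site's log format and search endpoint will differ.

```python
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Hypothetical access-log lines in Common Log Format (paths are illustrative).
LOG_LINES = [
    '10.0.0.1 - - [01/Jan/2024:10:00:00 +0000] "GET /search?q=install+printer HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2024:10:01:00 +0000] "GET /search?q=reset+password HTTP/1.1" 200 734',
    '10.0.0.3 - - [01/Jan/2024:10:02:00 +0000] "GET /docs/getting-started HTTP/1.1" 200 1024',
    '10.0.0.4 - - [01/Jan/2024:10:03:00 +0000] "GET /search?q=reset+password HTTP/1.1" 200 734',
]

# Pull the requested URL out of the quoted request field.
REQUEST_RE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

def tally_usage(lines, search_path="/search", param="q"):
    """Count search queries and page hits from raw access-log lines."""
    queries, pages = Counter(), Counter()
    for line in lines:
        match = REQUEST_RE.search(line)
        if not match:
            continue
        url = urlparse(match.group(1))
        if url.path == search_path:
            # parse_qs decodes "install+printer" to "install printer".
            for term in parse_qs(url.query).get(param, []):
                queries[term] += 1
        else:
            pages[url.path] += 1
    return queries, pages

queries, pages = tally_usage(LOG_LINES)
print(queries.most_common(3))  # the phrases users actually search with
print(pages.most_common(3))   # the pages they actually read
```

Even this crude count surfaces the vocabulary users really use, which is exactly the terminology the content (and its translations) should be tested against.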
So, let’s look at costs and translatability issues, post-editing metrics, number of flagged errors in QA, and so on, sure. But let’s connect it to the user experience too, regardless of language, and give the user the final say. Make users the arbiters of information quality.
Otherwise, we’re really just talking to ourselves.