
Measuring Information Quality: What’s Missing?

Language in the News, Personalization and Design

A lot has been written about information quality recently. But how do we measure it?

There is a perception out there that information quality is generally very poor. But how do we know? There's all that digital content out there, yet most of it is never read, and only a fraction of that is ever translated (and translators are often the only readers of that content). Just because a user guide isn't read doesn't mean the information within it is poor quality. Perhaps the accompanying software's usability is spectacularly good, so the guide simply isn't needed? Who knows?

In the GILT industry, information quality is all too often assessed only in terms of the cost, time, and effort to produce and then translate content (and usually one function is measured in isolation from the other). We have all kinds of metrics about "spend" published, time-to-market statistics analyzed, the opinions of professional linguists and terminologists debated, complicated mathematical formulae promulgated (trust me, if you reach that level you've clearly no real work to do), QA checklists written, certifications from standards bodies waved under our noses, and all the rest, in an attempt to define information quality. All good, though how efficient or applicable to the real world some of these things are is debatable.

However, often there’s a decider of information quality that is missing from these methodologies: the user of the information.

We need to move the key determinant of information quality to the user community: engaging users, and researching, analyzing, and understanding how they search for, retrieve, and use information to solve problems. For example, what search keywords and phrases do they use? Which pages do they read the most? Which parts of those pages are read, and how? And so on.

The tools and opportunities for this are everywhere. Ever studied web server logs? Done any eye tracking studies (see image below) before and after an information quality project? Conducted comprehension studies on the outputs? Observed how real users consume information? Found out what terminology they actually use when helping each other, on support forums, and when they customize and extend software? Reviewed what keywords they use for searching or analyzed user comments?
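Web server logs are the cheapest of these sources to start with. As a minimal sketch of the idea, the snippet below counts the terms users type into site search by parsing access-log lines. It assumes a combined-log-format log and a hypothetical site-search endpoint at `/search?q=...`; the endpoint name, the `q` parameter, and the sample lines are all illustrative assumptions, not anything prescribed in this article.

```python
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

SEARCH_PATH = "/search"  # hypothetical site-search endpoint; adapt to your site

def extract_search_terms(log_lines):
    """Count the query terms users typed into site search."""
    terms = Counter()
    # Matches the request portion of a combined-log-format line:
    # "GET /path?query HTTP/1.1"
    request_re = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')
    for line in log_lines:
        m = request_re.search(line)
        if not m:
            continue
        url = urlparse(m.group(1))
        if url.path == SEARCH_PATH:
            # parse_qs decodes '+' to spaces, so multi-word queries split cleanly
            for query in parse_qs(url.query).get("q", []):
                for word in query.lower().split():
                    terms[word] += 1
    return terms

# Illustrative log lines (invented for this sketch)
sample = [
    '1.2.3.4 - - [10/Oct/2010:13:55:36] "GET /search?q=install+printer HTTP/1.1" 200 512',
    '1.2.3.5 - - [10/Oct/2010:13:56:01] "GET /docs/guide.html HTTP/1.1" 200 2048',
    '1.2.3.6 - - [10/Oct/2010:13:57:12] "GET /search?q=printer+driver HTTP/1.1" 200 480',
]
print(extract_search_terms(sample).most_common(2))
# → [('printer', 2), ('install', 1)]
```

Even a crude count like this reveals the vocabulary real users reach for, which you can then compare against the terminology your documentation actually uses.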


So, let’s look at costs and translatability issues, post-editing metrics, number of flagged errors in QA, and so on, sure. But let’s connect it to the user experience too, regardless of language, and give the user the final say. Make users the arbiters of information quality.

Otherwise, we’re really just talking to ourselves.


Ultan Ó Broin (@localization), is an independent UX consultant. With three decades of UX and L10n experience and outreach, he specializes in helping people ensure their global digital transformation makes sense culturally and also reflects how users behave locally.

Any views expressed are his own. Especially the ones you agree with.


Information Quality, MT and UX

Translation Technology

I’ve been working on an Acrolinx IQ deployment for my employer, Oracle (yes, I do have a real job). For many who go down this route, the claim that such initiatives are only being undertaken because they will mean the instant arrival of machine translation (MT) will seem familiar.

The ‘translation process’ imperative is the wrong way to drive such initiatives. Instead, what is critical here is to understand the notion of managing information quality for the user, regardless of the translation process. Because the language used in content, or information if you like, is a user experience (UX) issue.

It’s clear that a quality input (or source) gives better returns for every kind of translation. In the case of any language technology, clean source data delivers the best returns, and this is becoming even more important as more and more organizations turn to statistical machine translation.

Furthermore, the points made by Mike Dillinger at the recent Aquatic/Bay Area Machine Translation User Group meeting need re-emphasizing: MT does not require special writing; people require special writing. The rules for writing English source text for human translators, machine translators, and users in general, are the same.

Developing quality information in the first place, and then managing it, is the way to go.

[Image: Acrolinx IQ flagging errors]

So, forget about “controlled authoring” (manual, automated, or whatever other method of implementing it), and indeed “writing for translation” classes, as the “mandatory prerequisite” for improved translatability or machine translation. Think of, and practise, information quality as an end-user deliverable in itself that has significant translation automation (and other) externalities.

I’d love to hear other perspectives on this, too.

If you’re interested in this notion of the primacy of information quality per se in the translation space, then read Kirti Vashee’s The Importance of Information Quality & Standards.


