There is an interesting post on the IOTA Localisation Services blog called 100% Matches – Should You Review? (Translation: Should You Pay?).
There are certainly strong cases for reviewing 100% matches in translation memories. There are also counter-arguments against doing so (beyond the obvious one of pure cost). For example, large volumes of source content may not be all that consistent in style or term usage to begin with, with little impact on usability. We even know from eye-tracking studies that readers absorb misspellings without any loss of comprehension.
I’m not convinced a review of 100% matches on consistency grounds is worth undertaking without first conducting an analysis of how the existing content is used, and when. Furthermore, the arrival of reasonably good machine translation must shape the debate somewhat. Take user assistance components (that’s doc to you). These components can be updated frequently at source for different reasons. But if the content isn’t used much, or is accessed only on a long-tail model (a small number of topics used by a wide range of users in different languages), then it might be more effective to offer on-demand machine translation than to review and update every 100% match from a translation memory in the optimistic hope that the whole lot might ever be read anyway.
The debate about 100% matches needs to be broadened. It also strikes me that many of the techniques used in usability research have a role to play in translation decisions, a subject I’ll be returning to.