Many companies are moving from the traditional waterfall development model to an agile approach. Localization has to follow suit and adapt its processes and workflows to the new reality. It might have been a common practice in the past to have a department or team in the company that handled everything about localization. The department received the English source files, and weeks or months later delivered the localized version. The communication with other departments was usually sparse, and no one else in the company fully understood the details of their work.
Not anymore. With agile development and shorter release cycles, localization can no longer be contained in its own silo. Could it ever?
Here we present a case study from Teradata’s transformation from waterfall to agile localization. Under the old translation model, the user interface (UI) was translated after code freeze and released with, or slightly after, the English release. Internationalization testing was lumped together with localization testing and performed by personnel in our local offices after the UI was translated. These people knew the product and the language, but didn’t always give the validation work high priority. Schedules were sometimes compromised because the validation work had to be deferred when other, more urgent priorities arose.
Our local reviewers also introduced preferential corrections. In their eyes, certain words or phrases could be improved, and they demanded that we change them even if nothing was grammatically wrong with the original translation. As non-native speakers, the staff in the localization department couldn’t tell whether the changes reported back were preferential or real linguistic issues. That started a discussion between the language service provider (LSP), the localization department and the reviewer about what should or should not be updated.
Internationalization testing was also done in-house by the localization department, which ran through a whole test suite in at least two languages (a European and an Asian language) and sometimes all the languages supported by the product.
The internationalization testing usually started close to code freeze, when only serious issues could be addressed and fixed in the code. For any serious localization issues found after code freeze, a patch release would have to be released afterward.
The final language packages were built and tested before being delivered and installed separately at each customer site by the local office.
The localization process also relied on a plethora of tools that did conversions and comparisons, kept track of new and modified strings in the resource files and so on. Maintaining them was a full-time job. If a file format changed, the corresponding tools had to be upgraded before any localization work could resume.
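The string-tracking part of such tooling is essentially a diff of resource files between drops. As a minimal illustration (a hypothetical sketch, not Teradata’s actual tooling), here is how new and modified strings in Java-style .properties files might be detected:

```python
# Hypothetical sketch: find strings needing (re)translation by comparing
# two versions of a Java-style .properties resource file.
def parse_properties(lines):
    """Parse key=value lines, skipping blanks and # comments."""
    entries = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        entries[key.strip()] = value.strip()
    return entries

def strings_to_translate(old_lines, new_lines):
    """Return keys whose English values are new or changed since the last drop."""
    old, new = parse_properties(old_lines), parse_properties(new_lines)
    return {k: v for k, v in new.items() if old.get(k) != v}

previous = ["save.button=Save", "cancel.button=Cancel"]
current = ["save.button=Save changes", "cancel.button=Cancel", "help.link=Help"]
print(strings_to_translate(previous, current))
# {'save.button': 'Save changes', 'help.link': 'Help'}
```

Only the changed and added keys go into the next translation drop; unchanged strings are left alone.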
On the documentation side, the English version was worked on until literally days before the release. Because the documentation team didn’t get the final product to check their documentation drafts against until late in the process, they could not hand it over any earlier.
The localization department needed to finish the translation of the product UI before they started the work on the documentation and online help, in case terminology changed during the translation/validation process.
So, in our old processes, localization was always at the end of the line.
New localization model
In 2012, a decision was made to move from the traditional development approach to an agile development model, with scrum teams, sprint demos and bi-weekly sprints.
It was a hybrid approach in the sense that the development was agile, but the product release was not. Even so, we wanted to prepare for a future where every sprint would be both translated and released.
In an agile translation model, the timeframe becomes compressed. Things must happen in days instead of weeks. There is no longer time to manually run through large test suites after each translation. But the demands for quality are still as high as before.
What were the requirements for moving to an agile localization model? First, we understood the reality — linguists cannot translate any faster just because we give them an agile project. We might wish they could, but it will not happen. The surrounding plans and processes have to handle this increase in speed, not the translators. The budget was limited, so we could not invest in a translation management system. We would still need both internationalization and linguistic testing so that the final quality didn’t suffer. Ultimately, in moving to agile, management wanted the localized version released together with English in preparation for a possible future cloud version.
In agile, the localization kit should be handed over to the vendor within 24 hours. To meet that deadline, we need:
As few transitions as possible. Every transition introduces a risk into the process.
A stable, automated process so that human factors such as vacation and illness don’t disrupt us. We do not want to be the ones missing a deadline.
An automated process with as few tools as possible, due to the same risks as with transitions.
Our solution for agile localization was to push as much as possible upstream in the process. This included translation, testing, defect fixing, everything.
With every completed sprint, there is an opportunity to send out new strings for translation. Each drop was fairly small. We did a total of 14 drops before the product was finished. The initial drop was by far the largest, since we were coming from a waterfall model and developers had been working on new features for a while before the first drop. The other drops averaged around 200 words each.
We had to pay more in file processing fees from the translation vendor than in earlier versions, as we had multiple small translation drops instead of one large translation drop.
But when the release date approached, we had 98% of the product’s new strings already translated. The final drop was only 124 words and could be completed in a few days. In the end we were able to deliver the final translated UI the same day as the English product was released.
However, fast translations are no good without proper validation. There was no time to do internationalization and localization testing after the final translated version was ready. It had to be done in parallel with the development of the English version.
The quality assurance (QA) team was already validating all the functionality for the English product. Why not have them work with the internationalized product as well? So we asked QA to start to take more responsibility for the internationalization testing.
A benefit of using QA to test for internationalization was that they found the issues early in the development process. In agile development, once something is completed in a sprint, it’s very hard to get developers to change code from that sprint: a change request has to be submitted, reviewed and approved before anything can be corrected. With the QA team embedded in the sprint teams, they tested, reported and had the scrum teams fix internationalization defects before a sprint was over.
The new process first involved going to management to get their buy-in to the concept. Also, in order to drive home the importance of this, we added a section about globalization in the “doneness” criteria for the product, making it impossible to ignore for QA or developers.
We then had several meetings with the QA department. First, we wanted to make them understand the value of our international customers and the importance of testing specifically for those customers. Second, we wanted to teach them how to test for most internationalization issues using a technique called pseudo-localization. All they had to do was use this made-up language instead of English when testing. We even prepared a one-page document that listed all the additional things QA should check for. Figure 1 shows an example of pseudo-localization.
One of the most common comments from QA team members during these presentations was “We can’t help localization with testing; we only know English.” We assured them that their part only involved internationalization testing, where no additional language knowledge is needed.
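To make the technique concrete, here is a minimal pseudo-localization sketch (a hypothetical illustration, not Teradata’s actual tool). Accented characters expose encoding and font problems, bracket markers reveal truncation and concatenated strings, and padding simulates the roughly 30% length growth typical of many translations:

```python
# Minimal pseudo-localization sketch (hypothetical example).
# Map ASCII vowels to accented equivalents to surface encoding/font issues.
ACCENTED = str.maketrans("AEIOUaeiou", "ÀÉÎÕÜàéîõü")

def pseudo_localize(text: str, expansion: float = 0.3) -> str:
    """Return a pseudo-localized version of an English UI string.

    - Accented characters catch encoding and font problems.
    - [brackets] reveal truncated or concatenated strings.
    - Tilde padding simulates text expansion in translation.
    """
    accented = text.translate(ACCENTED)
    pad = "~" * max(1, int(len(text) * expansion))
    return f"[{accented}{pad}]"

print(pseudo_localize("Save changes"))
# [Sàvé chàngés~~~]
```

A tester who sees plain English, clipped brackets or garbled accents in the running UI has found a hard-coded string, a truncation or an encoding bug, all without knowing any language other than English.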
However, for linguistic testing, we needed native speakers. With a more compressed timeframe, we could no longer rely on the in-house reviewers to complete the tasks on time. We needed to hire professionals to do the work.
Linguistic validation testers (LVTs) and subject matter experts (SMEs) are the only people who can properly use the target-language user interfaces, approve them for linguistic and cultural correctness, and understand the subtleties of each language. We hired them to look for linguistic issues in our product.
With agile, we had to abandon the goal of 100% validation of the interface after the final translation was completed. Instead, we piggybacked the linguistic validation and SME tasks on top of our normal translation schedule (Figure 2). The validations should start late enough in the process to cover most of the new functionality in the product, but early enough that testing can be completed and translations updated before the final release. What we did was pick a specific drop, and after translation, LVT went through the translated product focused on new functionality. Their findings were reported back to us, and the corrections were applied to the next available drop of the UI. If the validation took time, one or two additional drops might go out before the LVT report was returned; the report was then applied to the next available drop. The different LVTs communicated internally about in-context issues that were not language-specific. After all the suggested changes were implemented or rejected, the updated translation went on to SME validation. So we managed to keep the translation drops going at the same time as we performed the LVT and SME validation.
Our agile localization model has these four pillars:
Pseudo-language testing by QA engineers during sprints in an agile mode (in-house).
Linguistics testing (validation) in a waterfall mode (vended).
SME testing (validation) in a waterfall mode (vended).
Continuous internationalization testing — ongoing background to all other types of testing (in-house).
For the documentation and online help, the time crunch was not as bad. The documentation team kept running records of what pages were updated and final, and which ones still required work. We did a total of three online help drops for this product, only translating the sections marked final by the documentation team. Similar to the UI, splitting up the online help text made each translation drop smaller and faster to translate.
We had planned a fourth drop, but the number of new words for that drop was so small that it didn’t make sense. Quick decisions were a constant part of agile.
So what results did we experience with this new localization model? First, at least 50 internationalization issues were found by the QA team using pseudo-localization and were fixed by developers in sprints. The real number was probably much higher, since QA is embedded in the scrum teams: if QA finds an issue, it’s often reported directly to the developer responsible for that piece of code rather than logged. Multiply 50 by the number of languages we support, and many hundreds of issues never reached our customers or the support team.
Our LVT and SME testing was focused on new functionality. Old functionality and translations had already been validated in earlier translation projects. The SME build contained more new functionality than the LVT build, as it occurred later in the development process. We had one new language for this release, which meant more issues reported for that language. The LVT/SME reviewers saw the running product and could find in-context issues.
See Figure 3 for testing results. A total of 474 bugs were reported, including redundancies, user experience (UX) issues and extended attributes. Of the 99 internationalization bugs reported by LVTs, 16 were unique and reported into our defect tracking system. All of them were fixed before release. Of the internationalization issues reported by SMEs, most were already reported by LVT and were in the process of being fixed, but were not yet in the build that the SMEs used for testing.
All internationalization issues were fixed by development before release. The average cost to find a bug was about $34.16, with about 1.15 bugs found per hour.
One lesson we learned was to run the initial validation on just one language, report and fix any internationalization bugs found, then validate all other languages for language specific defects.
When the new localization model is set up and running to your satisfaction, can you just leave it to itself? The answer is no, it will still require maintenance. Here are some tips:
Give the QA team friendly reminders regularly to make sure they continue with the pseudo-localization testing and don’t fall into their old ways. Encourage them, and tell them they are doing a good job.
Subscribe to the defect tracking system for internationalization bugs to see what the QA team finds.
Make sure the automated pseudo language process doesn’t break.
Always be available to answer internationalization related questions from engineers and QA.
Be involved early in the design stage of new features so internationalization issues can be detected and remedied early in the process.
If the release cycle becomes even shorter, there are some additional techniques that can be used. A translation management system can automate routine tasks such as uploads and downloads of files and quotations. Machine translation is also a tempting route, but we have to make sure we can still live up to our customers’ quality expectations, in this case by including post-editing.
Code checking tools can be used by developers to decrease the number of initial internationalization bugs in code. Existing automation frameworks can be used for testing internationalization issues as well.
You could deploy the English software, but hide or disable new features until they have been translated and validated.
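One way to implement this is a per-locale feature flag that hides a feature until its strings have been translated and validated. The sketch below is a hypothetical illustration of the idea; the feature names and locales are invented:

```python
# Hypothetical sketch: gate untranslated features behind per-locale flags.
# A feature is visible only once its UI strings are translated and validated
# for that locale; English always sees everything.
TRANSLATED_FEATURES = {
    "en": {"dashboard", "reports", "export"},
    "ja": {"dashboard", "reports"},  # "export" not yet translated/validated
}

def is_feature_visible(feature: str, locale: str) -> bool:
    """Show a feature only if its strings are ready for the given locale."""
    return feature in TRANSLATED_FEATURES.get(locale, TRANSLATED_FEATURES["en"])

print(is_feature_visible("export", "en"))  # True
print(is_feature_visible("export", "ja"))  # False
```

When the Japanese translation of the export feature is delivered and validated, adding it to the set makes the feature appear without a separate release.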
Continue to work closely with scrum masters and their teams to get ideas, buy-ins and agreements to further integrate internationalization best practices. And show them the appreciation they so well deserve.