Localization testing is a must for any software company before delivering multilingual products to global customers. Most software companies currently choose to collaborate with outsourcing companies to perform localization testing, and Adobe is no exception. As is commonly known, the main advantage of outsourcing is lower cost; however, what a global software company typically cares more about is the quality of its multilingual products. Therefore, cooperating with outsourcing teams to approach and perform localization testing more effectively and efficiently is becoming key to improving product quality and lowering operational costs.
Before the localization testing project kicks off, it’s necessary to think through how to approach localization testing, as this is the foundation for the testing that follows. Ideally, all the approaches for localization testing should be listed in your localization testing project plan, which commonly includes setting the dates on which testing starts and ends, dividing the testing into phases, estimating testing resources for each phase, choosing the right outsourcing testing team and so on. There are several key points that we would like to highlight here.
First of all, it’s important to choose the right outsourcing testing team as soon as the testing resource estimate is made and the testing timeline is settled. As is often the case, outsourcing testing teams with which we previously cooperated are given priority, especially those with experience testing similar products. It’s also essential to get in touch with the outsourcing program manager beforehand to make sure that the team has enough resources available for the planned localization testing phase. If the testing workload is so heavy that one outsourcing company may not have enough resources in that phase, it is better to choose more than one, in which case it’s important to balance the localization testing workload between the two companies. Working with two outsourcing companies can also be a good choice for a large testing project, because the best resources are more likely to be available. Besides this, practice has proven that different testing teams observe and discover product quality issues and bugs from different perspectives, which benefits localization testing quality.
Interviews of outsourcing teams are also indispensable. Interviewing is an effective way to ensure that the outsourcing team is the one you expect, and it also provides a good opportunity to get to know each other before the project starts, especially when it’s the first time working with a particular outsourcing team. What’s more, even if you’re working with the same team, it’s unlikely that every member of that testing team can come back to work with you again if you released them after the last project was done. Testers may move to other product testing teams, or even to other companies’ teams, since outsourcing companies often collaborate. In that case, new testers will join your testing team, so it’s necessary to interview them to make sure the candidates have the localization testing skills you expect. It is especially important to interview the team lead, since the lead needs to communicate with you frequently about the daily work. You could also ask the outsourcing program manager to help recruit new leads or testers if the initial candidates fall far short of your expectations.
How to communicate with the outsourcing team also needs to be defined in this phase, because the outsourcing team usually works remotely. A clear and smooth communication channel between both sides is a guarantee of timely completion of the localization project. One example of a communication channel at Adobe is shown in Figure 1. The frequency of regular meetings also needs to be decided in this phase, to summarize what has been done, what needs to be improved and so on. In the daily work, for any urgent issue, the team lead could be contacted by phone. At the end of each week, the team lead needs to send a weekly report as a summary.
Another important step that needs to be done before a localization testing project starts is setting key performance indicators (KPIs) for the testing team. Based on data from past projects, KPIs should be set to specify expected bug quality and quantity. This helps the testers clearly understand what we need them to do and what their testing goals are.
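As a concrete illustration, bug-quality and bug-quantity KPIs can be expressed as simple, computable checks that both sides agree on up front. The following sketch is hypothetical: the metric names, thresholds and numbers are assumptions for illustration, not Adobe’s actual KPIs.

```python
# Hypothetical sketch of KPI checks for an outsourced localization
# testing team. Metric names and thresholds are illustrative
# assumptions, not Adobe's actual KPIs.

def evaluate_tester(bugs_logged, bugs_rejected, weeks_worked,
                    min_bugs_per_week=5.0, max_rejection_ratio=0.15):
    """Check a tester's output against two simple KPIs:
    quantity (valid bugs logged per week) and quality
    (share of logged bugs rejected as invalid or duplicate)."""
    valid_bugs = bugs_logged - bugs_rejected
    bugs_per_week = valid_bugs / weeks_worked
    rejection_ratio = bugs_rejected / bugs_logged if bugs_logged else 0.0
    return {
        "bugs_per_week": bugs_per_week,
        "rejection_ratio": rejection_ratio,
        "meets_quantity_kpi": bugs_per_week >= min_bugs_per_week,
        "meets_quality_kpi": rejection_ratio <= max_rejection_ratio,
    }

# Example: 48 bugs logged over 8 weeks, 6 rejected as invalid
print(evaluate_tester(bugs_logged=48, bugs_rejected=6, weeks_worked=8))
```

Publishing the thresholds at kick-off, rather than judging performance after the fact, is what makes the testing goals unambiguous to the team.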
At Adobe, the internationalization quality engineer, as a key connector, works closely with the internationalization program manager, product team and the outsourcing team.
A kick-off meeting will be hosted to officially launch the localization testing project one or two weeks before the testing starts. All project-related staff, including the outsourcing testing team and the manager, need to attend this meeting. It’s a good opportunity to clarify each key point and milestone for the localization testing to the whole team, and it’s also very important to highlight the KPIs to them, since this will help them to know the goal for this project as well as their testing goal. After one or two weeks’ training, the outsourcing testing team will start to do the localization testing.
Localization testing is divided into two parts at Adobe: localization functional testing and linguistic testing.
Localization functional testing
The testing team needs to check whether any product functionality is broken in localized builds. Testers also need to check for localization issues other than translation, such as locale-specific convention issues, hard-coded strings and truncations in user interface (UI) dialogs. Before this testing starts, it’s essential to discuss with the outsourcing team lead how to distribute testing tasks, languages and platforms among team members; in other words, the working model for localization functional testing needs to be decided. Two working models are used in software localization functional testing: the language-oriented working model (testers are assigned based on languages) and the feature-oriented working model (testers are assigned based on features). Though the language-oriented model is more popular, in practice it does not deliver the expected testing efficiency and effectiveness. The authors proposed a new working model, the Enhanced Feature-Oriented Working Model, at the 37th Internationalization & Unicode Conference, and applied it to the localization functional testing of one Adobe desktop product. The results showed significant improvement in the effectiveness and efficiency of the whole team.
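The difference between the two working models can be shown in a small sketch. The tester names, languages and features below are invented for illustration; only the assignment logic matters: the language-oriented model fixes each tester’s languages across all features, while the feature-oriented model fixes each tester’s features across all languages.

```python
# Illustrative contrast of the two tester-assignment models.
# Testers, languages and features are invented examples.

testers = ["A", "B", "C"]
languages = ["fr", "de", "ja", "ko", "zh-CN", "es"]
features = ["install", "print", "export", "editing", "sharing", "help"]

def language_oriented(testers, languages):
    """Each tester owns a fixed set of languages and covers
    every feature in those languages."""
    return {t: languages[i::len(testers)] for i, t in enumerate(testers)}

def feature_oriented(testers, features):
    """Each tester owns a fixed set of features and covers
    those features in every language."""
    return {t: features[i::len(testers)] for i, t in enumerate(testers)}

print(language_oriented(testers, languages))
# e.g. tester "A" covers fr and ko across every feature
print(feature_oriented(testers, features))
# e.g. tester "A" covers install and editing in every language
```

In the feature-oriented model a tester becomes an expert in a feature area and can spot functional regressions faster, at the cost of switching languages constantly; the language-oriented model has the opposite trade-off.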
Linguistic testing

Linguistic testing focuses on discovering both grammatical and cultural issues in translation, so it should be performed by testers with good linguistic awareness, such as native speakers of the target language. The first step for this testing is to choose the right linguistic testing model. There are currently two models: the test-case-based linguistic testing model and the screenshot-based linguistic testing model. In the past, the test-case-based model was more popular, but it requires linguistic testers to find the corresponding translation strings in the product by following the test cases, which is time consuming. Setting up a product environment and walking through test cases are skills that most linguistic testers lack. Linguistic testers also command a higher pay rate, so this model is more costly as well. Inevitably, then, the screenshot-based model has become more popular: localization functional testers take screenshots with localized builds, and linguistic testers only need to check translation strings by comparing the localized screenshots with the English ones. Thus, the cost of linguistic testing decreases. However, with this model, linguistic testers only focus on the translations in screenshots, and other strings will not be covered.
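The hand-off at the heart of the screenshot-based model can be sketched as a simple pairing step: functional testers capture each dialog under the same name in the English and localized builds, and the linguistic tester reviews the matched pairs. The function and file-naming scheme below are illustrative assumptions, not an actual Adobe tool.

```python
# Minimal sketch of the screenshot-based model's hand-off.
# Functional testers capture dialogs in the English and localized
# builds under the same dialog names; this pairs them for review
# and flags coverage gaps. Naming and paths are assumptions.

def pair_screenshots(english, localized):
    """Match localized screenshots to English ones by dialog name.
    Returns (pairs for side-by-side review, dialogs with no
    localized capture) so coverage gaps are visible."""
    pairs, missing = [], []
    for dialog, en_path in english.items():
        if dialog in localized:
            pairs.append((dialog, en_path, localized[dialog]))
        else:
            missing.append(dialog)
    return pairs, missing

english = {"print": "en/print.png", "export": "en/export.png"}
french = {"print": "fr/print.png"}
pairs, missing = pair_screenshots(english, french)
print(pairs)    # dialogs ready for linguistic review
print(missing)  # dialogs the functional testers still need to capture
```

Tracking the `missing` list is one way to mitigate the model’s known weakness: strings that never appear in a screenshot are never reviewed.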
In most cases, one model is used for this testing. But if your product supports many languages, like Adobe Acrobat, which supports more than 20, the linguistic testing workload may be too heavy for one outsourcing company within the given time, so you may choose to collaborate with more than one. You could also use both models during one project: screenshot-based linguistic testing for some languages and test-case-based testing for others, especially if each outsourcing team has its own specialty. In that case, a unified workflow process and unified rules for logging bugs (Figure 2) are very important, since each outsourcing team may have its own workflows and bug-logging rules. It is also better to run linguistic testing in two batches: testers can familiarize themselves with the processes and rules during the first batch, and you can discover issues in their bugs and have those issues corrected during that batch. This helps the team work effectively and efficiently in the second batch.
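One way to enforce unified bug-logging rules across teams is to make them mechanical: every bug, whichever outsourcing team logs it, must carry the same required fields before it enters the shared tracker. The field names below are illustrative assumptions, not an actual Adobe bug schema.

```python
# Hedged sketch of "unified rules for logging bugs" as an automated
# check applied to every team's submissions. Field names are
# illustrative, not an actual Adobe bug schema.

REQUIRED_FIELDS = {"title", "language", "build", "steps",
                   "expected", "actual"}

def validate_bug(bug):
    """Return the set of required fields missing from a bug report;
    an empty set means the report follows the unified rules."""
    return REQUIRED_FIELDS - set(bug)

bug = {
    "title": "Truncated label in Print dialog",
    "language": "de",
    "build": "daily-0412",
    "steps": "Open the Print dialog in the German build",
    "expected": "Full label visible",
    "actual": "Label is cut off after 20 characters",
}
assert not validate_bug(bug)  # a complete report passes the check
```

Running such a check during the first batch of linguistic testing surfaces each team’s logging habits early, so corrections land before the second batch.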
Controlling the testing process
It usually takes several months or even longer to perform localization testing, and the testing tasks are done by an outsourcing team that is more than likely working remotely. Therefore, it’s quite necessary to control the whole testing process. This doesn’t mean checking daily work or verifying the testing data; rather, it means guiding the team’s direction and promptly correcting problems to ensure the whole team stays on the right track during localization testing.
Most of the localization testing work is performed by an outsourcing testing team, so human factors need to be considered during the testing project, especially if the testing project is a big one, and the testers need to work on the same product for a long term. In that case, it’s very likely that they will become bored and gradually lose motivation. Also, considering that the outsourcing team usually works remotely, some incentive mechanisms are necessary to inspire enthusiasm and keep the outsourcing team motivated throughout the testing. Below are some suggested incentive mechanisms:
Selection of bug-hunting stars. For example, we may select one or two bug-hunting stars in an outsourcing team according to each tester’s performance on bug quality and quantity every month, and then one email will be sent out to the whole team and copied to their manager to praise them.
Recognition of outstanding testers in a timely fashion. For example, we may praise the testers during the regular weekly meetings if a tester logs a good bug or someone recently did a good job, which is also a great encouragement for testers.
Giving best wishes to the whole team on the eve of a festival or national holiday. For example, a best wishes email is sent out to the outsourcing testing team to wish them an enjoyable vacation for the coming days.
One more important thing is the emphasis on teamwork, not just an individual’s ability, since it’s the teamwork that guarantees the completion of the project on schedule.
We also help the outsourcing team grow. The outsourcing companies we collaborate with in localization testing projects are our partners, so it’s mutually beneficial to pursue long-term shared objectives. Therefore, aside from the localization testing quality and cost, we also hope to help the outsourcing team and their testers grow in product knowledge and personal professional skills through our projects.
We encourage testers to learn Adobe product knowledge. We may arrange training classes and materials for testers before the project starts so that they understand the product more deeply before doing the localization testing. They could also take Adobe Certified Expert exams to demonstrate their professional level with one or more Adobe products, which is really helpful for their career development.
We also encourage the outsourcing team to have weekly study meetings, during which they can share product knowledge and discuss testing issues to help them improve their personal testing skills.
Additionally, it’s very useful to have localization automation testing in-house. Especially at the end stage of the testing, most of the testing work focuses on critical workflow testing for all the languages and platforms, so a lot of repetitive work is necessary. In order to reduce costs, only a small number of testers are kept on the outsourcing team. Localization automation testing is brought in at this stage to provide broader platform coverage and better product quality, striking a good balance between product quality and operational costs.
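The end-stage pass described above is exactly the shape of work automation absorbs well: the same critical workflow repeated over every language and platform combination. The sketch below is a minimal illustration; the workflow driver is a stand-in for real UI automation, and the language and platform lists are examples.

```python
# Sketch of in-house automation for the end-stage critical-workflow
# pass: one workflow check repeated over every language/platform
# pair. The workflow function is a placeholder for real UI
# automation driving a localized build.

import itertools

LANGUAGES = ["fr", "de", "ja", "zh-CN"]
PLATFORMS = ["win", "mac"]

def run_critical_workflow(language, platform):
    """Placeholder: a real implementation would launch the localized
    build on the given platform and drive the critical workflow
    (open, edit, save, print), returning pass/fail."""
    return True

# Run the matrix and collect failures for the remaining human testers
results = {(lang, plat): run_critical_workflow(lang, plat)
           for lang, plat in itertools.product(LANGUAGES, PLATFORMS)}
failures = [combo for combo, ok in results.items() if not ok]
print(f"{len(results)} runs, {len(failures)} failures")
```

The matrix grows multiplicatively with languages and platforms, which is why keeping only a small outsourced team plus automation is cheaper than hand-testing every combination.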
After the whole testing project is done, the outsourcing testing team is required to submit final testing reports. A postmortem meeting is also required to summarize the whole testing process and its results, covering what was done well and what was not. This is quite beneficial for localization testing of the product’s next version, and for the growth and improvement of the outsourcing testing team.
Choosing the right outsourcing team, setting up clear and smooth communication channels, applying flexible testing models, and involving appropriate incentive mechanisms and automation testing will make your localization testing more effective and efficient, and contribute to the success of a software product targeting global markets. Therefore, continuing to improve the processes and methods for approaching and performing localization testing will have a huge business impact for global software companies.