Today’s IT world is experiencing rapidly intensifying competition, making it crucial for every software development company to perform at its peak. At this point, product quality can make an indisputable difference. On the other hand, due to the increasing drive toward process optimization, the budget allocated for software testing is often reduced. Therefore, many companies are now struggling to choose the right testing approach and maintain high quality standards within the given scope, cost and time constraints.
When it comes to product adaptation for local markets, the quality of localized product versions is another important point to consider. Even if the source version has been tested carefully across its length and breadth, the localized equivalent will most likely behave incorrectly at first launch. This may occur due to a change in the installation environment or as a result of modifications made to the product in the course of the localization process. Hence, there is no way to treat quality assurance as a “tick in the box” within the software localization lifecycle without risking a negative impact on the final localized product and, therefore, on the company’s reputation.
Let’s have a closer look at the evolution of localization testing methods, following recent industry changes and emerging trends. Localization quality assurance (QA) is mainly focused on product user interface (UI) checks, and there are two basic stages to be covered within this process. The first is localization testing itself, aimed at validating the product’s integrity, performance and visual consistency compared to the original version. The second is linguistic testing, performed to ensure the product is free of language-related issues. Thus, to check the quality of one localized version, at least two specialists need to be involved: a QA engineer and a linguist. Consequently, the more target languages you are localizing into, the more resources you need to allocate for the testing process.
Moreover, there is a growing trend among software development companies toward simship (simultaneous shipment), meaning multiple localized product versions are released at the same time.
As simship becomes the new normal, it brings a myriad of new factors to consider. The testing and QA processes may need to be thoroughly reconsidered and restructured. On top of that, time to market is shrinking significantly. So how can localization QA engineers meet all these challenges?
There are different approaches to localization testing that can significantly reduce the time and costs spent on product localization, on the condition that you pick the correct one for your project.
The choice fully depends on the peculiarities of the product itself, the number of target languages, the depth of localization and the type of issues you aim to find and fix at every stage of the process.
Let’s start with manual localization testing, since it is considered the most conventional method. Manual testing requires a QA engineer to execute test cases written in accordance with the product specification. While executing the test cases, the QA engineer searches for product issues by comparing the actual result with the expected one, as defined in the specification.
Test cases used for both functional and localization testing contain not only the repro steps — how to reproduce the bug — but also validation criteria, which can be of additional help when you test localization. After every step, the QA engineer checks the newly displayed window for possible localization defects. Consequently, the role of a test case is to lead the engineer through the product, helping him or her avoid skipping any functionality.
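As a sketch, such a test case can be represented as plain data: repro steps paired with validation criteria to check after each step. The structure, identifiers and strings below are hypothetical, not taken from any specific tool.

```python
# Hypothetical shape of a localization test case: each repro step is paired
# with a validation criterion to check after the step is performed.
test_case = {
    "id": "TC-042",
    "steps": [
        {"action": "open_save_dialog",
         "expected": "Dialog title reads 'Speichern unter' (de-DE)"},
        {"action": "click_cancel",
         "expected": "Dialog closes with no residual English strings"},
    ],
}

def run(case, perform):
    """Execute each step via `perform` and pair the observed result
    with the expected one for later comparison by the engineer."""
    return [(step["expected"], perform(step["action"]))
            for step in case["steps"]]
```

In a real harness, `perform` would drive the product’s UI; here it is just a callback, which keeps the walk-through logic separate from the way actions are executed.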
This approach produces good results when all the product features are covered by test cases. It doesn’t matter what technology was used for product development, as the QA engineer can always perform all the end-user actions. Another benefit is that testing results can be analyzed on the go: even if the actual result matches the expected one, other issues might be spotted that were not supposed to be covered by the current test case.
However, this approach still has some drawbacks. The main disadvantage is its resource intensiveness — manual testing will most likely require a significant amount of time and human resources, especially for large-scale projects being localized into numerous languages.
This is where automation comes in. Automation is intended to minimize human involvement and reduce the resources spent while maintaining a high level of quality.
With the automated testing method, programming tools are used to run test cases and verify results. There are two basic approaches to quality assurance automation: testing at the code level and graphical user interface (GUI) testing. A typical example of the first approach is unit testing. In GUI-level testing, the actions performed by a QA engineer are in most cases imitated with the help of scripts. The scripts are usually integrated into a framework that can manage the testing process, respond to the issues found and generate intermediate and final statistical reports.
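As a minimal sketch of a code-level check, suppose (hypothetically) that each locale’s UI strings live in a key-value resource table; a unit test can then verify that no source key is missing or left untranslated in a target locale:

```python
# Hypothetical resource tables; a real project would load these from
# .resx, .po or similar resource files rather than hard-coding them.
EN_RESOURCES = {"app.title": "My App", "btn.save": "Save",
                "btn.cancel": "Cancel"}
DE_RESOURCES = {"app.title": "Meine App", "btn.save": "Speichern",
                "btn.cancel": "Abbrechen"}

def missing_translations(source, target):
    """Return source keys that are absent or empty in the target locale."""
    return sorted(key for key in source if not target.get(key, "").strip())
```

A check like this catches missing strings before the build ever reaches a tester, though it says nothing about how the strings look on screen; that is what the GUI-level checks are for.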
The approach to building an automation framework is usually dictated by the technology used to develop the product itself. There are currently many solutions for automated testing, and all of them can be divided into three categories:
Tools for desktop application testing
Tools for web testing
Tools for mobile application testing
In reality, however, none of these solutions is universal. The main advantage of the automation approach over manual checks is that it enables companies to cut testing costs, provided the amount of work is large enough and a continuous integration methodology is followed.
The main expenses of automated testing are framework development and support. The cost of the testing process itself is reduced, since the framework performs the work of several QA engineers at once without any human input. Still, in some cases it may be rather difficult, or even impossible, to cover all the functionality with automated test cases due to product-specific restrictions.
As the script simulates the actions of the end user, automation shows good results when checking product functionality and behavior on a localized operating system. It is also important to understand that automated localization testing is an extended version of functional testing: not only the correctness of the application’s behavior is checked, but also the way the UI elements are displayed. This is the most challenging part, as currently there is no solution that can guarantee all localized UI issues are spotted.
In the left column of Figure 1, you can see a list of the dialogues verified by the framework. In this example, the framework has detected a truncated string in the German localized version. As you can see, this defect is not easy to spot in a manual check because of the way the string has been cut.
Unfortunately, such cases are quite frequent. The script’s output may still require additional analysis from the engineer, both to sift through false positives and to surface issues that were not pinpointed automatically.
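A crude sketch of how a framework might flag likely truncations: estimate the rendered width of a string and compare it with the width of its control. The pixel figures and the fixed average character width below are made up for illustration; real tools measure the actual rendered text with font metrics, which is exactly why false positives occur.

```python
def is_likely_truncated(text, control_width_px, avg_char_px=7):
    """Flag strings whose estimated rendered width exceeds the control width.
    A fixed average character width is a rough stand-in for real font
    metrics, so this heuristic produces false positives and misses."""
    return len(text) * avg_char_px > control_width_px

# German strings often run longer than their English counterparts:
print(is_likely_truncated("Restore", 80))           # fits in an 80px button
print(is_likely_truncated("Wiederherstellen", 80))  # likely truncated
```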
In Figure 2, the semiautomated approach has been applied. The framework doesn’t perform any checks; it only captures screenshots and generates a report. As you can see, the report contains both dialogue versions: the English and the localized one. This kind of report can be further analyzed by linguists to verify translation consistency or by QA engineers to find typical localization defects.
Even though the semiautomated testing approach doesn’t eliminate the manual factor, it still simplifies the process. Moreover, such reports will help to improve interaction with linguists, since they don’t need to learn the product’s functionality, be able to reproduce a specific action or get the relevant dialogue box to appear.
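A semiautomated report of this kind can be assembled with very little code. The sketch below (hypothetical file names and layout) builds a side-by-side HTML table from captured screenshot pairs, so a linguist can review the dialogues without ever running the product:

```python
def build_report(pairs):
    """Build a side-by-side HTML report from (dialog, source_png, target_png)
    tuples, pairing each source screenshot with its localized counterpart."""
    rows = "".join(
        "<tr><td>{0}</td><td><img src='{1}'></td><td><img src='{2}'></td></tr>"
        .format(name, src, tgt)
        for name, src, tgt in pairs
    )
    return ("<table><tr><th>Dialog</th><th>Source</th><th>Localized</th></tr>"
            + rows + "</table>")

html = build_report([("Save As", "save_as_en.png", "save_as_de.png")])
```

In a real pipeline, the framework would capture the screenshots itself and write the report to disk; the value of the format is that translation review is decoupled from product know-how.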
It’s all about finding the smart combination of automation and human involvement that can bring you more reliable results while maintaining a high level of cost effectiveness. Nowadays, more and more effort is being put toward this goal, and as a result an innovative approach has been brought to the table: parallelism.
Parallelism is a relatively new trend in QA, but it has already proven itself as a reliable and cost-effective solution for testing products localized into multiple languages.
This approach incorporates a number of beneficial features from both manual and automated testing, some of which we’ve alluded to previously: the number of QA engineers involved in the process is reduced, while the testing results are available for analysis on the go. This approach can also significantly improve the project’s test case coverage.
The main idea behind this method is to enable one person to cover the work of several. When a single QA engineer performs actions on the product under test (often the English version), all of his or her activity is immediately replicated on the localized versions of the product installed on separate machines.
Moreover, the engineer is able to monitor all the versions at work while simultaneously capturing screenshots and recording video. These screenshots and videos can then be analyzed by linguists or other QA engineers.
The way an action is replicated depends on the software used. There are a number of replication methods that imitate user actions with a high level of accuracy, subject to UI characteristics and localization peculiarities. Clearly, the more replication methods the selected software supports, the more options a QA engineer has (Figure 3). The classic approach is to use coordinate-based mouse clicks, which alone will most likely not be enough to cover all the product functionality with test cases.
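The replication idea itself can be sketched in a few lines: a controller records each UI event once and replays it on every localized machine in order. The `Machine` class below is a local stand-in for illustration; real parallel testing tools talk to remote agents over the network and translate coordinates or control identifiers per locale.

```python
class Machine:
    """Stand-in for a controlled test machine running one localized build."""
    def __init__(self, locale):
        self.locale = locale
        self.log = []          # events replayed on this machine, in order

    def replay(self, event):
        self.log.append(event)

def broadcast(machines, events):
    """Replicate each recorded UI event to every localized machine,
    preserving the order in which the engineer performed the actions."""
    for event in events:
        for machine in machines:
            machine.replay(event)

fleet = [Machine(loc) for loc in ("de", "fr", "ja")]
broadcast(fleet, ["click:btn.save", "type:report.docx"])
```

The design point is that the engineer’s single pass through the product fans out to every locale, which is where the near-linear resource saving comes from.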
It is worth mentioning that incorporating parallel testing into a company’s QA processes won’t cause any major changes to existing internal workflows. Since no scripting skills are required, a QA engineer who performs manual testing can work with parallel testing as well. Automated testing, by contrast, needs a separate team with the corresponding scripting competence to develop and support a test framework, which means more time and resources spent.
Figure 4 is an example of the parallel testing process displayed on the QA engineer’s PC monitor.
In our case the English operating system is used as the source. We also have six controlled PCs, each replicating the actions the QA engineer performs on the English version. While the test cases are executed, the system can automatically capture screenshots and record video. Running test cases on six different versions of the product in parallel reduces resource spending by almost six times, and the main expense is the license for the parallel testing tool.
Choosing an optimal project-based approach
Now let’s talk about the strengths and weaknesses of the abovementioned testing approaches and try to define key criteria for choosing the one that suits your project best.
The first selection criterion is the number of languages the product will be localized into.
Figure 5 shows the relation between the number of languages for localization and the time spent on testing. This data was collected in the course of a real project where we implemented all three approaches: manual testing, automation and the parallel method. Evidently, manual testing shows a linear tendency: the more languages you localize into, the more time is spent on testing. Automated testing is more effective than manual testing as long as you have more than five target languages; otherwise, the cost of framework development won’t be recouped. Automation also beats parallelism resource-wise once you localize your product into more than ten languages. Parallel testing, in its turn, can be the optimal solution when the number of locales is between two and ten.
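These break-even points can be illustrated with a toy cost model. The coefficients below are hypothetical, chosen only so that the crossovers mirror the ones described above; they are not the project’s actual figures from Figure 5.

```python
# Illustrative cost model (arbitrary units; real figures vary by project).
def manual_cost(n):    return 10 * n       # purely linear in language count
def automated_cost(n): return 40 + 2 * n   # framework up front, cheap per run
def parallel_cost(n):  return 10 + 5 * n   # tool license, shared test passes

def best_approach(n):
    """Return the cheapest approach for n target languages under this model."""
    costs = {"manual": manual_cost(n),
             "automated": automated_cost(n),
             "parallel": parallel_cost(n)}
    return min(costs, key=costs.get)
```

With these numbers, manual testing wins for a single locale, parallelism wins in the middle range, and automation overtakes parallelism once the language count grows past ten, matching the qualitative picture above.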
The second criterion is the resource pool. As already mentioned, to start working with the automated approach you will need a separate team of engineers to build and maintain the framework. Such specialists are more expensive than manual QA engineers, and the budget of a small project will most likely fail to cover this cost. At the same time, manual and parallel testing do not require any special skills, so the QA staff will be able to handle the work with minimal additional training.
Last but not least is the nature of the product itself. First of all, we need to take the release model into consideration. Under a linear release model, localized product versions are signed off one by one, with some time between languages, which makes parallel testing inapplicable; in this case, it’s better to use automation. However, if all the languages are released almost at once, automation and parallelism will suit the project best.
Another factor to consider is the nature of the software you need to test. In some cases automated or parallel testing is simply inapplicable to the product, while other applications are perfectly suitable for automation. Some consumer products tend to change a lot between versions, especially where the UI is concerned, whereas enterprise products are usually updated thoroughly but seldom. In the latter case, automation will show the best results.
As you can see, none of the approaches is universally advantageous, and the outcome differs from project to project. However, if you want to pick the best testing approach for your localization project, we suggest following three simple rules:
Keep pace with innovation. It is important to keep a finger on the pulse of the latest technologies and emerging trends.
Combine testing methods to achieve the best results. With a smart combination of approaches, the project is more likely to succeed. Remember, the more experience a company has with different testing approaches, the better the solution it can offer to the customer.
Optimize your processes where possible. In recent years, optimization has become a fast-growing trend that can be put into practice at almost every stage of the software localization process, resulting in significant time and cost savings.
Following these rules will help keep your company’s competence permanently up to date and ensure your QA strategy is always smartly attuned to the needs of a specific project. Evidently, choosing the winning testing approach requires a lot of analysis, with a significant number of factors to take into consideration. However, this is the way to maintain high quality with the most efficient resource spending.