Business intelligence applied to localization

Software localization can be more complex than it seems. It is not just about translation: it also includes engineering, testing, bug fixing and so on. By 2009, Autodesk had deployed several systems to deal with the cost management, test automation analysis or defect tracking facets of software localization. From an operational perspective, this made sense. But because the information was fragmented across separate silos, we struggled to get an overview of our projects. We also pondered how to increase the value of the data while keeping the flexibility and efficiency offered by the systems we had in place. Our answer was to develop a business intelligence program tailored specifically to the needs of our localization department.

Business intelligence is a set of methodologies, processes and technologies that transform raw data into meaningful and useful information. It is used for effective strategic, tactical and operational decision-making. But, concretely, what does this mean and how does it apply to localization?  

Let’s start by looking at one example in detail: the monitoring of vendor performance. Because we outsource translation, localization engineering and testing, it is crucial for us to keep an eye on the performance of our vendors. This enables us to achieve the objectives set by our upper management. To monitor vendor performance, we first had to define what it means to us and how to measure it. We started with a bottom-up approach and designed a set of more than 70 detailed, function-specific indicators. As someone remarked at the time, we had developed the equivalent of a diagnostic machine that a mechanic would plug into your car to identify any potential issue. This first set of metrics was probably targeted more at the operational leads than at our management team, but the exercise did help us grow in maturity and clarify the objectives that our vendor model needed to achieve.

Aligning metrics with our mission, vision, strategy and goals was critical to ensure we were designing the right set of indicators for the right audience and the right usage. Building on this lesson, our next step was to raise the level of abstraction and identify what the management team would need to monitor vendor performance. We did so based on the objectives identified during the bottom-up phase, and we also incorporated feedback received from our vendors. The result was a new, standardized and targeted set of indicators.

From our perspective, performance measurement needs to be based on two characteristics, which we describe as the “what” and the “how” (Figure 1). If a vendor does not properly perform a process, we may see an impact in terms of cost, quality and time. Ensuring that the deliverable — the “what” — meets our expectations is important, but it is not enough. An output may be delivered on time and aligned with our quality criteria, but it may have required excessive support from our internal team to get it done. Because scalability is an important goal for our localization department, we think measuring vendor problem-solving skills or proactivity — the “how” — is equally important.

Today, we measure the performance of our vendors with four key performance indicators. These are metrics with specific properties: each one needs to be actionable, comparable and aligned with our objectives.

For example, the Deliverables indicator measures the translation quality evaluated during the execution of a project. It is based on scores compiled from the number and severity of the linguistic errors found per thousand words reviewed. The Efficiency indicator measures the time required to fix and verify defects during the development of our localized products. The Support indicator measures the support required from the internal Autodesk localization team on topics the vendor should be fully in charge of. The Service indicator reflects how the vendor’s performance is perceived by our internal team and is measured on a monthly basis. By definition, a key performance indicator should be objective and fact-based. Despite its relative subjectivity, we believe the Service indicator provides valuable insight into vendor performance, and we use it as a “360-degree feedback” tool. While the Deliverables and Efficiency indicators measure the “what,” the Support and Service indicators target the “how.”
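
As an illustration, the short Python sketch below shows how such a weighted error score per thousand words could be computed; the severity weights and sample figures are assumptions made for the example rather than the values we actually use.

    # Minimal sketch of a Deliverables-style score: weighted linguistic error
    # points per thousand words reviewed. The severity weights and the sample
    # numbers are illustrative assumptions, not Autodesk's actual values.

    SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}  # assumed weights

    def deliverables_score(errors, words_reviewed):
        """Return weighted error points per 1,000 words reviewed.

        errors: list of (severity, count) pairs from a linguistic review.
        words_reviewed: total number of words covered by the review.
        """
        if words_reviewed <= 0:
            raise ValueError("words_reviewed must be positive")
        points = sum(SEVERITY_WEIGHTS[severity] * count for severity, count in errors)
        return points * 1000 / words_reviewed

    # Example: 12 minor and 2 major errors found in a 25,000-word review.
    score = deliverables_score([("minor", 12), ("major", 2)], 25_000)
    print(f"{score:.2f} error points per 1,000 words")  # 0.88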

Other indicators are obviously monitored during the execution of localization projects. For example, we measure the ability of our vendors to meet their milestones. However, while it is important to follow such a metric from an operational point of view, comparing its values between two development cycles, two products or two vendors is not relevant. We do not consider it comparable enough to qualify as a key performance indicator.

Once the indicators were clarified, the next step was to define how to aggregate the data (Figure 2) and share the information with the targeted audience. We did that in two phases.

The first phase was to consolidate all the required data in one central database, our data mart. This phase included an audit of the data and required the development of specific procedures to extract it from every source, clean and reconcile it (including from a semantic perspective) and finally load it into the central database. The data mart guarantees a consistent usage of the information across all reports and dashboards. It helps build a single point of truth, which everyone can use to answer the same question at any moment. It also keeps a history of the data and enables multidimensional and time-based analysis. This first phase can take quite some time, especially if you start from scratch and have several data sources to integrate. It also requires solid technical knowledge and a good understanding of the business. Because we had built our data mart over time, we mainly had to add the data that was missing to support our vendor key performance indicators and adapt our data model.
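
To give a sense of what these extract, clean and load procedures involve, here is a minimal Python sketch with SQLite standing in for the central database; the source records, field names and vendor-name mapping are hypothetical.

    # Minimal sketch of the extract, clean/reconcile and load steps, with SQLite
    # standing in for the central database. The source records, field names and
    # vendor-name mapping are hypothetical; a real pipeline would query the cost,
    # defect-tracking and quality systems mentioned in the text.
    import sqlite3

    # Extract: in practice, pulled from each source system's database or API.
    cost_rows = [{"vendor": "Acme Loc.", "product": "ProdA", "cost": 12000}]
    quality_rows = [{"supplier": "ACME Localization", "product": "ProdA", "score": 0.88}]

    # Clean and reconcile: map the different vendor spellings used by each source
    # onto one canonical name (the semantic reconciliation step).
    CANONICAL_VENDOR = {"Acme Loc.": "Acme", "ACME Localization": "Acme"}

    def vendor(name):
        return CANONICAL_VENDOR.get(name, name)

    # Load: write the conformed records into the central database (data mart).
    conn = sqlite3.connect("datamart.db")
    conn.execute("CREATE TABLE IF NOT EXISTS vendor_cost (vendor TEXT, product TEXT, cost REAL)")
    conn.execute("CREATE TABLE IF NOT EXISTS vendor_quality (vendor TEXT, product TEXT, score REAL)")
    conn.executemany("INSERT INTO vendor_cost VALUES (?, ?, ?)",
                     [(vendor(r["vendor"]), r["product"], r["cost"]) for r in cost_rows])
    conn.executemany("INSERT INTO vendor_quality VALUES (?, ?, ?)",
                     [(vendor(r["supplier"]), r["product"], r["score"]) for r in quality_rows])
    conn.commit()
    conn.close()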

The second phase was to develop a dashboard to publish the information. Having everything displayed in one place helps reduce the time people spend finding the information they need, especially in a complex and heterogeneous environment. The time required to implement a dashboard depends on multiple factors: technology, expertise of the team, complexity of the data, demands from the business and so on. It took us less than a man-month to implement and publish our vendor indicator dashboard from the moment the data was available in our data mart. 

The dashboard enables multidimensional analysis and drilling down to various levels of detail (product, vendor, language, time and more). For instance, the Deliverables indicator allows us to compare the evolution of linguistic quality between cycles and ensure it remains in line with our expectations. While each individual score is important in itself, we can also find useful and actionable information by looking at the evolution of the results over time. By monitoring and comparing the trends per language, per product, per component and per vendor over time, we have been able to take preventive actions and ensure the quality of the final deliverables.
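
The following Python sketch illustrates this kind of drill-down and trend comparison on made-up review scores; the dimensions and values are purely illustrative.

    # Minimal sketch of the drill-down and trend comparison, using pandas on
    # made-up review scores. Dimension names and values are illustrative only.
    import pandas as pd

    reviews = pd.DataFrame([
        {"cycle": "2012", "vendor": "Acme", "language": "de", "score": 0.9},
        {"cycle": "2012", "vendor": "Acme", "language": "ja", "score": 1.4},
        {"cycle": "2013", "vendor": "Acme", "language": "de", "score": 0.7},
        {"cycle": "2013", "vendor": "Acme", "language": "ja", "score": 2.1},
    ])

    # Pivot: one row per vendor/language, one column per cycle, so the trend in
    # error points per 1,000 words can be read at a glance.
    trend = reviews.pivot_table(index=["vendor", "language"], columns="cycle", values="score")
    print(trend)

    # Flag combinations whose score worsened between cycles, as a cue for
    # preventive action before the final deliverables are produced.
    print(trend[trend["2013"] > trend["2012"]])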

The same information could be used to adapt the list of products and languages that require a linguistic review. For instance, achieving a high score for multiple releases in a row could trigger a vendor to be “certified” for a specific product and language combination, helping to reduce review costs while maintaining overall quality. This is not something we do today, but it could be something to explore.

The data collection for the Service indicator is done through a monthly survey. In addition to the rating itself, our evaluation includes a series of optional questions that give context to the scores. From the dashboard, Autodesk localization staff members can select one specific question (such as flexibility or communication), look at its current status and quickly assess whether it has improved over time. This helps us understand the rationale behind the ratings and work with our vendors on areas for improvement. This is how we use business intelligence tools and techniques to monitor vendor performance and take appropriate actions to ensure it stays aligned with our expectations. We also leverage business intelligence to help us respond to other challenges our localization department faces every day, including sharing information and building awareness of what we do across our company.
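
As a simple illustration of this drill-down, the Python sketch below computes the monthly trend of a single survey question; the question names, rating scale and responses are assumptions made for the example.

    # Minimal sketch of the Service-indicator drill-down: the monthly trend of
    # one survey question. Question names, the rating scale and the responses
    # are assumptions made for the example.
    import pandas as pd

    responses = pd.DataFrame([
        {"month": "2013-01", "vendor": "Acme", "question": "communication", "rating": 3},
        {"month": "2013-01", "vendor": "Acme", "question": "flexibility", "rating": 4},
        {"month": "2013-02", "vendor": "Acme", "question": "communication", "rating": 4},
        {"month": "2013-03", "vendor": "Acme", "question": "communication", "rating": 5},
    ])

    question = "communication"
    trend = (responses[responses["question"] == question]
             .groupby(["vendor", "month"])["rating"].mean())
    print(trend)  # shows whether perceived communication improves over time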

For example, we have implemented a dashboard to publish data about the internationalization quality of our products. Being able to rate and compare products through a single interface is a powerful way to help a localization team increase awareness of internationalization. This dashboard is currently playing a key role in the discussions we have with our development teams. 

We have also started to include questions about localization in surveys sent to our external customers. In this case, data mining techniques help us isolate the impact of localization on customer satisfaction scores. We also use tailored reporting capabilities to adapt the granularity of the information to the needs of the targeted audiences. While the data is the same, the level of detail in our cost reports differs depending on whether it is consumed by our staff members or by our finance department.
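
The mining techniques themselves are beyond the scope of this article, but the following Python sketch shows one simple way such an analysis could start, by correlating a localization rating with the overall satisfaction score across hypothetical survey responses.

    # Minimal sketch of one way to relate localization ratings to overall
    # satisfaction: a simple correlation plus a group comparison over
    # hypothetical survey responses (1-5 scale assumed). Requires Python 3.10+.
    from statistics import correlation, mean

    surveys = [  # (localization rating, overall satisfaction)
        (2, 3), (4, 4), (5, 5), (3, 3), (4, 5), (1, 2),
    ]
    loc_ratings = [loc for loc, _ in surveys]
    overall = [sat for _, sat in surveys]

    print("correlation:", round(correlation(loc_ratings, overall), 2))
    print("avg overall, localization rated >= 4:",
          mean(sat for loc, sat in surveys if loc >= 4))
    print("avg overall, localization rated < 4:",
          mean(sat for loc, sat in surveys if loc < 4))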

We are still using separate systems to support the various facets of software localization. However, unlike a few years ago, we no longer struggle to get an overview of our projects. Business intelligence tools and techniques have helped us centralize all our localization project-related data in one unified dashboard. Our staff can now analyze budgets, defects, milestones, linguistic quality, internationalization scores, vendor performance and more at different levels of detail, from different perspectives, in one single location.

Business intelligence is really about breaking down silos, building knowledge and sharing information. In a world increasingly driven by data, we are convinced it will continue to help us meet our objectives, define new models and identify new opportunities.