sponsored content

– Supported by Phrase –

How Hyperautomation and AI Are Redefining the Future of Localization and Personalization

Already a leading translation management software company, Phrase is taking automation to the next level. MultiLingual spoke with Chief Product Officer Simone Bohnenberger-Rich, PhD, and Vice President of AI Research Dr. Alon Lavie about why hyperautomation is a fundamental stepping stone towards engaging users in a uniquely personalized way.

Can you tell us about what hyperautomation is and why it’s such a focus for you?

Simone Bohnenberger-Rich: At Phrase, we are essentially doubling down on hyperautomation. We are moving into an era of straight-through processing, where content is seamlessly processed through various AI and machine learning techniques and workflows. We are entering a realm where human intervention is focused where it can drive the most value. We are excited about this because it offers tremendous opportunities for our customers.

Through multiple workflow steps, we achieve results where assets that previously required human intervention no longer do. We provide our end users with ultimate control over each process step, which is essential for hands-off hyperautomation. This allows our enterprise customers to manage cost, speed, quality, and changes, while understanding any associated risks.

The reason we’re emphasizing this approach is that our vision centers on the future of personalization. We believe that, in the coming years, AI-generated content will proliferate, enabling our customers to engage their end users in a dynamic and highly personalized manner. This demands a scalable infrastructure capable of managing the volume and speed of content production, which is too overwhelming for human management alone. We’re preparing for this future with what we call hyperautomation.

Moreover, we aim to achieve this cost-effectively. Considering the high costs of using large language models (LLMs), we are very focused on how we deploy the most suitable and effective portfolio of AI solutions.

In essence, hyperautomation for us is about operational efficiency, enhancing customer experience, and facilitating market expansion in the next few years.

Alon Lavie: I believe, here at Phrase, we have established a solid foundation for hyperautomation and automated quality technology. This technology is key to unlocking the process that Simone described. Automated translation has improved significantly, but relying solely on machine translation (MT) without quality checks — or conversely, manual reviews for all content — is not effective. We need a method that allows for nuanced decision-making and risk management, empowering our enterprise customers to balance risk and cost effectively.

Automated quality technologies aim to provide visibility into the quality of the entire translation process across the Phrase Localization Platform. As a translation and localization automation platform, Phrase handles a variety of content types. Each of these requires different workflows and components, including MT systems, pre- and post-processing decisions, and the determination of what can be automated or needs human correction and editing.

A crucial aspect is ensuring translation quality visibility and control, which is achieved with the Phrase Quality Performance Score (Phrase QPS). Every step of the localization process on our platform is assessed and assigned a quality score at the outset. We have aligned our quality technologies with the Multidimensional Quality Metrics (MQM) framework, a well-understood standard in the industry for classifying translation errors. Our QPS model has been optimized for scalability and cost. QPS predicts MQM scores, ensuring that every content piece is evaluated for quality.

With this system in place, we can implement intelligent workflows that target different quality levels for various content types, allowing enterprises to manage their hyperautomation journey effectively. For example, highly visible content requiring high quality can undergo stringent quality assessments, while other content types — like e-commerce product listings or travel site listings — may have a lower quality threshold.
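The per-content-type thresholding Alon describes can be sketched as a simple lookup. This is an illustrative sketch only: the threshold values and the `needs_human_review` helper are hypothetical, not part of the Phrase platform.

```python
# Hypothetical sketch of per-content-type quality thresholds on a
# 0-100 MQM-style scale. The values and names are illustrative.

CONTENT_THRESHOLDS = {
    "legal": 95,            # highly visible, high-stakes content
    "marketing": 90,
    "product_listing": 75,  # e-commerce listings can tolerate a lower bar
}

def needs_human_review(content_type: str, quality_score: float) -> bool:
    """Return True when a translation falls below its content-type threshold."""
    threshold = CONTENT_THRESHOLDS.get(content_type, 90)  # conservative default
    return quality_score < threshold
```

Under this scheme, the same score can pass for one content type and fail for another, which is what lets an enterprise tune risk per channel.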

The goal is to put automation at the center of localization processes, only involving human review for rapid troubleshooting and where it matters most. This approach provides a scalable solution for managing translation quality, embodying the principle of hands-off hyperautomation and enabling our clients to control the process effectively.

What does this mean for businesses that might be considered unusual candidates for international expansion — for instance, businesses previously thought too small, or in too niche a field, to build an international footprint?

Simone: Take the example of a fashion retailer based in Germany, renowned for its sneakers. In recent years, its growth had plateaued due to the impacts of Covid-19, prompting a temporary halt to its plans for international expansion. However, with global trade now recovering, the retailer has set ambitious goals for market growth. Expanding into new markets inherently involves substantial risks. Annual reports from publicly listed companies often disclose the significant investments required and the potential risks and uncertainties involved.

To make this a success, hyperautomation and localization are essential for managing the complexity of bringing multilingual content and assets online simultaneously, personalized to your key demographics.

Previously, you would go to your localization or marketing team, which might have local teams and linguists or service providers in target locales. However, where it’s crucial to succeed quickly, traditional human translation processes aren’t fast enough. Ideally, all the assets you need to go to market get localized on day one to support the absolute best user experience possible.

With our hyper-automated approach, around 80% of content can be processed through AI, significantly more than before. However, this is only feasible if you can trust the quality of the output, which our QPS scores verify. From initial content creation through translation, we assess quality at every step to ensure suitability for MT.

This system allows us to identify and correct errors like typos and grammar mistakes before AI translation, ensuring the content is optimized for processing. After AI translation, we reassess the QPS score, maintaining a consistent quality overview throughout the entire process, which is entirely unique to our platform.
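The flow Simone outlines — clean the source, confirm it is MT-suitable, translate, then rescore — might look like the following. This is a minimal sketch under stated assumptions: `clean_fn`, `translate_fn`, and `score_fn` are hypothetical stand-ins, not Phrase APIs.

```python
# Illustrative pipeline only: the helper functions passed in are
# hypothetical stand-ins for source cleanup, MT, and QPS-style scoring.

def localize(source_text, clean_fn, translate_fn, score_fn):
    """Clean the source, machine-translate it, and score both ends."""
    cleaned = clean_fn(source_text)    # fix typos/grammar before MT
    source_score = score_fn(cleaned)   # confirm the source is MT-suitable
    target = translate_fn(cleaned)
    target_score = score_fn(target)    # reassess quality after translation
    return {
        "text": target,
        "source_score": source_score,
        "target_score": target_score,
    }
```

Scoring both before and after translation is what gives the "consistent quality overview" the interview refers to: a low source score flags content worth fixing before it ever reaches the MT engine.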

By integrating these services, we enable businesses to quickly distribute content to international markets, without significant delays. This ensures cost-effective market expansion and increases the likelihood of success.

Simone Bohnenberger-Rich, PhD

These technologies have been in development for some time. Can you tell us more about what that process was like?

Alon: I can provide some insights, particularly in the context of the LLM revolution. LLMs are increasingly accurate and detailed, nearly reaching the level of translation quality analysis that humans perform manually through language quality assurance (LQA) processes like MQM. In fact, we recently released a capability called Auto LQA, which utilizes LLMs.

We’re using GPT-4 as the foundation for this to ensure that the capability is enterprise-ready. Automated LQA provides a much deeper level of translation quality information, detecting errors, classifying them into categories, and determining overall severity.

However, LLMs are still slow and costly for use on a large scale. Our QPS technology doesn’t rely on full-scale LLMs, but instead uses the same underlying neural technology that’s been in development for the last five years. We train and run these smaller models ourselves to predict MQM scores.

We can then run QPS on everything being translated on the Phrase Platform and set quality thresholds, allowing content that meets these thresholds to pass through. Content that doesn’t meet the threshold can be routed for processing by LLMs for detailed error analysis and automated correction. Then, if necessary, it can be routed to humans for correction and review. This approach combines two levels of technology: a non-LLM-based system for scale and an LLM-based system for detailed quality analysis and correction. We anticipate a convergence of these technologies down the road, although we are currently operating in this dual-system world.
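The two-tier routing Alon describes — a cheap scorer on everything, LLM analysis and correction for below-threshold content, and human escalation as a last resort — can be sketched as follows. The function names, threshold, and control flow here are illustrative assumptions, not the Phrase implementation.

```python
# Hypothetical sketch of the tiered quality flow described above.
# score_fn stands in for a lightweight non-LLM scorer (tier 1) and
# llm_fix_fn for LLM-based error analysis and correction (tier 2).

PASS_THRESHOLD = 90  # illustrative 0-100 MQM-style cutoff

def route_segment(segment, score_fn, llm_fix_fn):
    """Route one translated segment through the tiered quality pipeline."""
    if score_fn(segment) >= PASS_THRESHOLD:
        return segment, "auto_published"     # tier 1: passes at scale
    fixed = llm_fix_fn(segment)              # tier 2: LLM analysis + repair
    if score_fn(fixed) >= PASS_THRESHOLD:
        return fixed, "auto_corrected"
    return fixed, "human_review"             # tier 3: escalate to an editor
```

The design point is that the expensive model only ever sees the fraction of content the cheap scorer rejects, which is what makes the dual-system approach affordable at scale.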

When thinking about the development approach, we were essentially balancing AI capabilities — which are significantly higher with LLMs — against the cost, speed, and scalability of the solutions. The underlying technology in this case, LLM versus basic neural networks, provides different benefits.

Dr. Alon Lavie

So it’s a matter of efficiency?

Alon: Yes, it’s a balance between cost-effectiveness and scalability versus the more profound capabilities.

Simone: For our vision of a hyper-automated workflow, several components must be in place. A core element is a robust workflow orchestration tool that allows decisions on content quality and directs the process accordingly. Advanced analytics are also essential to gauge the overall quality of assets.

In the hyper-automated future, assets like translation memories (TMs) will be a key competitive advantage to help engage customers in a highly personalized way. This is why we also provide our AI-powered curation tools for our customers to quickly build and enhance their TMs. For hyperautomation to function, it requires continuous updates and refinements of these assets, which our technology facilitates.

Finally, when AI has processed the majority of the content, a small percentage may still require human review. For this, we provide a workbench, our localization platform, integrating all necessary elements for implementing AI-driven processes. This comprehensive approach underlines the complexity of developing a robust platform that hyper-automates the entire translation process.

Could you tell us more about how hyperautomation could develop in the future? Where do we go from here?

Alon: The Phrase Localization Platform serves as the central hub for flexible, dynamically executed workflows that are constantly evolving. With the emerging capabilities of generative AI (GenAI), we see a near future where GenAI will drive the entire workflow and its execution. It will be able to analyze the customer’s use case. Then it will be able to not only match that use case with a pre-configured existing workflow on our platform, but also generate the right workflow on the fly — tailored to the customer’s specific needs for personalization, as discussed by Simone — and then run that workflow on the Phrase Platform.

Within this workflow, many steps may be executed by GenAI itself, while some fundamental steps will remain valuable and in place. The enterprise localization manager will essentially monitor this automation, observing the content processing and intervening or optimizing as needed.

Furthermore, we envision AI generating substantial content volumes, eliminating the bottleneck of human-led content creation. In the near term, content generation will still happen in a source language and will then be followed by translation into other languages, which is the traditional two-step process in the translation industry. However, we also foresee a future where AI generates certain content types directly in multiple languages, completely bypassing this two-step process.

The Phrase Localization Platform is being developed to align with this future vision, supporting and controlling the quality of this type of multilingual content generation. Ensuring that content is generated in multiple languages simultaneously, while maintaining consistent quality and intended meaning, will remain crucial. We are designing a platform with the necessary tools and capabilities to maintain this level of quality and control for our customers.
