PERSPECTIVES

Human Translation Skills

Needed Today and After the Singularity

By Adam Wooten

Translators have long questioned what they might do — if anything — when singularity is achieved in translation. Some of us might be surprised to learn that translation providers should already be offering the same skills and services that will be needed then.

What can human translators offer after singularity is achieved in translation? This question can inspire hope or curiosity in some and evoke fear or despair in others, particularly those who already detest the stereotypical offering of limited post-editing. The answer to how a human can offer value depends significantly on how we define singularity for translation.

What does it mean to achieve singularity in translation?

Sometimes, when we think of singularity in translation, we envision extreme achievements like the universal translator depicted in Star Trek and other works of science fiction. Generally, such a universal translator that instantaneously converts thoughts into speech is so advanced that no additional human services are needed, except in the rarest circumstances where a cultural expert must interpret local metaphors and allegories. If we achieved such an extreme level of advancement in artificial intelligence, we might even need to worry about more than translation, as other parallel advancements could lead to the creation of the human-exterminating Skynet depicted in the Terminator film series. 

If this article were to focus on this universal-translator definition of singularity, or other definitions that include artificial general intelligence, I might suggest that translators transform into cultural experts… at least temporarily until Skynet destroys us all. Various other definitions of singularity in translation and the appropriate corresponding human responses could each merit their own articles. 

An arguably more realistic and immediately helpful definition of singularity in translation, with clear expectations for speed, cost, and quality, was shared at the 2022 Annual Conference of the Association for Machine Translation in the Americas (AMTA) by Marco Trombetti, co-founder and CEO of Translated. Like most definitions of a threshold for reaching singularity in translation, Trombetti’s full definition assumes that this future machine translation (MT) will be fast and inexpensive — specifically, that translations will be generated in 500 milliseconds or less and cost less than one thousandth of the cost of human translation. However, his third requirement, concerning quality expectations, is what brings particular clarity and practicality to his definition.

Trombetti recently told Florian Faes on the SlatorPod podcast that the simple version of his definition focuses on quality levels, defining singularity as being achieved “when a professional human translator will take less time in editing a machine translation than editing another translation done by another professional translator.” He explained this will be a natural threshold when “it is more convenient to use the output of a machine rather than to use the output of a colleague.”

Trombetti has explained more specifically that Translated has observed post-editing time in limited language pairs and domains dropping over the past decade from about 3.5 seconds per machine-translated word to as fast as two seconds per word. For MT to be better than human translation in these use cases, post-editing speed would need to drop to one second per word, the point at which editing MT output would take no more time than editing another professional translator’s work.
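
To make this threshold concrete, here is a minimal sketch, in Python, of how a provider might track its own average time to edit per machine-translated word and compare it with a one-second-per-word baseline. The job data, field names, and numbers are invented for illustration; this is not data or tooling from Translated.

```python
# A minimal, hypothetical sketch of tracking average "time to edit" per
# machine-translated word against a one-second-per-word threshold.

from dataclasses import dataclass


@dataclass
class PostEditingJob:
    words: int               # number of machine-translated words edited
    editing_seconds: float   # total time the post-editor spent


def seconds_per_word(jobs: list[PostEditingJob]) -> float:
    """Average post-editing time per machine-translated word."""
    total_words = sum(job.words for job in jobs)
    total_seconds = sum(job.editing_seconds for job in jobs)
    return total_seconds / total_words


# Hypothetical measurements for one language pair and domain.
jobs = [PostEditingJob(words=1200, editing_seconds=2300),
        PostEditingJob(words=800, editing_seconds=1700)]

time_to_edit = seconds_per_word(jobs)   # 2.0 seconds per word in this example
human_baseline = 1.0                    # the one-second-per-word threshold
print(f"Time to edit: {time_to_edit:.2f} s/word; "
      f"threshold reached: {time_to_edit <= human_baseline}")
```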

In Trombetti’s definition, singularity would be achieved not by replacing every potential human task in a lengthy translation process, but by replacing the translator in the initial translation step. This definition of singularity in translation is interesting not only because it is clear, but also because it is more readily relevant. We already see many use cases for using machine translation instead of a human, but human expertise is still frequently combined with MT in the form of post-editing. With that combination, we often see friction that might lead many to still dread such a singularity.

Why do many translation providers hate post-editing?

Why do so many translators and other language service providers hate classic post-editing of MT? Translators, managers, and others could provide a long list of reasons why they object to post-editing, but many of those reasons come down to cleaning up messes. Many of us in the translation industry find satisfaction in planning, building, and creating; that’s one of the reasons we chose this profession. However, we often detest cleaning up the messes of others, perhaps because we lack control over their creation, and that engenders feelings of helplessness.

Where do translation service providers add value to MT projects? 

In many human translation projects, we find fulfillment in providing great value to our clients throughout a process that we control from start to finish. In contrast, we might see limited opportunities to provide value in a stereotypical MT post-editing project, where our contribution may appear limited or even hidden at the end of the process. And if we ourselves fear that our value is diminished by its relegation to post-editing, how likely are customers to similarly overlook or underrate the value we offer?

That is not to say simple post-editing adds no value, but such value is more easily overlooked or underrated when it is tacked on like an afterthought. Consider that a basic, stereotypical MT post-editing process may restrict linguist involvement to post-editing alone.

Humans can offer much more value to MT services both now and in the future

How do we make a shift from this basic, stereotypical process to a process with more obvious value added? The answer usually requires more human involvement from expert translation service providers, not less.

This is not a proposal to make MT projects more manual by having humans perform unnecessary or redundant tasks. No, computers will continue to do what they do best, and humans can add more value doing what they do best — both now and after the singularity — by using their skills to ensure the following: 

  1. Source is appropriate 
  2. MT is customized 
  3. PE (post-editing) is optimized 
  4. Target is aligned to goals.

These four areas can be broken down further into at least 20 different skills — or more, depending on how they are categorized — that human linguists should learn and use both now and in the future. The following is a short description of many of these skills.

Appropriate source preparation

Some source text yields better results than other source text in both human and machine translation.

  • Define and Evaluate Source Quality: Define and evaluate the appropriateness of source text – including subject, style, terminology, and more – for MT.
  • Create Style Guides and Glossaries: Create and maintain language style guides and glossaries for the source to ensure it is an appropriate fit for MT.
  • Evaluate and Train Authoring Talent: Evaluate and train authors to use the appropriate guidelines and tools.
  • Create Auto QA Checks and Metrics: Where possible, convert many style guidelines and term requirements into automated quality checks and metrics in authoring tools.
  • Create Source Pre-Processing Rules: Develop additional rules or macros to convert final source text to an intermediate state that is even more ideal for MT (see the sketch following this list).
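
As a concrete illustration of the last item, here is a minimal, hypothetical Python sketch of source pre-processing rules. The abbreviation, quote normalization, and product name are invented for the example and are not a recommendation of specific rules.

```python
# A minimal, hypothetical sketch of source pre-processing rules that rewrite
# final source text into an intermediate form that is friendlier to MT.

import re

PREPROCESSING_RULES = [
    # Expand an in-house abbreviation the MT engine tends to mistranslate.
    (re.compile(r"\bw/(?=\s)"), "with"),
    # Normalize curly quotes so the text matches the engine's training data.
    (re.compile(r"[\u201c\u201d]"), '"'),
    # Protect a product name with do-not-translate markers.
    (re.compile(r"\b(WidgetPro X1)\b"), r"<dnt>\1</dnt>"),
]


def preprocess_source(text: str) -> str:
    """Apply each rule in order to produce MT-ready intermediate source text."""
    for pattern, replacement in PREPROCESSING_RULES:
        text = pattern.sub(replacement, text)
    return text


print(preprocess_source("Ship the WidgetPro X1 w/ the \u201cquick start\u201d guide."))
# Ship the <dnt>WidgetPro X1</dnt> with the "quick start" guide.
```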

MT engine customization

MT engines can yield better results when they are customized to specific domains and styles.

  • Identify and Prioritize Data: Define, evaluate, select, and prioritize existing bilingual translation memory (TM) data and monolingual data that would likely be most appropriate for training the neural machine translation (NMT) engine.
  • Clean and Prepare Existing TM Data: Clean and prepare the appropriate TM data and monolingual data, removing any noise that might negatively affect the quality of the customized MT engine (see the sketch following this list).
  • Create New TM Data: Create additional translations for example data where existing examples are insufficient.
  • Train Data Preppers: Develop guidelines, tools, and training for others to prepare data for engine training.
  • Evaluate, Select, and Prioritize or Create Term Data: Evaluate, select, and prioritize terms that can be added for specific enforcement in the NMT engine. Create additional terminology where existing data is insufficient.
  • Measure Raw and Post-Edited MT Quality: Measure the quality of the raw MT output if it is to be used raw or to compare competing MT engines. Perform post-editing and measure post-editing effort against specific goals for the post-edited output.
  • Create Hybrid Rules: Create language rules where appropriate for MT engines that use a hybrid of neural- and rules-based approaches. 
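
To illustrate the data-cleaning step above, the following is a minimal Python sketch of the kind of noise filtering that might be applied to TM data before engine training. The thresholds and checks are illustrative assumptions rather than an industry standard.

```python
# A minimal, hypothetical sketch of TM noise filtering before MT engine training.

def is_clean_segment(source: str, target: str) -> bool:
    """Keep a TM segment pair only if it passes basic sanity checks."""
    if not source.strip() or not target.strip():
        return False                      # drop empty segments
    if source.strip() == target.strip():
        return False                      # likely untranslated text
    ratio = len(source) / max(len(target), 1)
    if ratio < 0.3 or ratio > 3.0:
        return False                      # suspicious length mismatch
    return True


def clean_tm(pairs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Deduplicate and filter raw TM pairs for engine training."""
    seen = set()
    cleaned = []
    for source, target in pairs:
        key = (source.strip(), target.strip())
        if key not in seen and is_clean_segment(source, target):
            seen.add(key)
            cleaned.append(key)
    return cleaned


raw_tm = [("Press the green button.", "Appuyez sur le bouton vert."),
          ("Press the green button.", "Appuyez sur le bouton vert."),  # duplicate
          ("Error 42", "Error 42"),                                    # untranslated
          ("", "Texte orphelin")]                                      # empty source
print(clean_tm(raw_tm))  # only the first pair survives
```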

PE optimization

Post-editing generates better results with the right processes, teams, and tools.

  • Evaluate and Select the PE Team: Evaluate and select appropriate members of the post-editing team, often through test creation, administration, and evaluation.
  • Instruct and Train the PE team: Provide appropriate written, recorded, or live instruction and training to members of the post-editing team.
  • Create Post-Processing Rules: Develop additional rules or macros wherever appropriate to automatically process the target text after it is output from the machine translation engine and before it is post-edited. 
  • Create Termbase: Create a termbase for post-editors to ensure required terms are used when post-editing.
  • Create PE Guide: Create guidelines that let post-editors know which edits are required or discouraged in order to meet all desired goals.
  • Create Auto QA Checks: Where possible, convert many of those PE guidelines into automated quality checks (see the sketch following this list).
  • Monitor Quality Estimation: Monitor the effectiveness of automatic machine translation quality estimation (QE) and its use by post-editors.
  • Evaluate PE Effort: Evaluate post-editing effort to determine if post-editors are editing too lightly, editing too heavily, or meeting specifications.
  • Use Feedback for Continuous Improvement: Collect and evaluate relevant post-editing data for continuous feedback and improvement of the post-editing processes, tools, team, and more. 
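
As one illustration of converting post-editing guidelines into automated checks, the following minimal Python sketch flags target segments that are missing required termbase translations. The termbase entries and example sentences are invented for the example.

```python
# A minimal, hypothetical sketch of an automated QA check that verifies
# post-edited target text uses required termbase translations.

TERMBASE = {
    # hypothetical English -> German required terms
    "user guide": "Benutzerhandbuch",
    "power supply": "Netzteil",
}


def check_terminology(source: str, target: str) -> list[str]:
    """Return a QA warning for each required term missing from the target."""
    warnings = []
    for source_term, required_target in TERMBASE.items():
        if source_term.lower() in source.lower() and required_target not in target:
            warnings.append(
                f"Required term missing: '{source_term}' should be "
                f"translated as '{required_target}'.")
    return warnings


source = "See the user guide before replacing the power supply."
post_edited = "Siehe das Handbuch, bevor Sie das Netzteil austauschen."
for warning in check_terminology(source, post_edited):
    print(warning)
# Required term missing: 'user guide' should be translated as 'Benutzerhandbuch'.
```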

Target aligned to goals

Target output must be monitored to continuously improve translation processes.

  • Create Quality Metric: Identify or develop an appropriate quality metric to ensure target goals are reached (see the sketch following this list). This same metric should be used during subsequent engine customization and development. 
  • Evaluate Quality: Evaluate the quality of the final translated output material. 
  • Interpret User Quality Feedback: Interpret both quantitative and qualitative user feedback on final translation quality to determine how other metrics, processes, tools, and teams should be improved. 
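
As one illustration of a simple quality metric, the following minimal Python sketch computes a word-level edit-distance ratio between raw MT output and the final post-edited text as a rough proxy for post-editing effort. Real programs often rely on established metrics such as TER or MQM-based scoring instead; this is only a sketch.

```python
# A minimal sketch of a word-level edit-distance ratio between raw MT output
# and the post-edited target, used as a rough proxy for post-editing effort.

def edit_distance(a: list[str], b: list[str]) -> int:
    """Levenshtein distance between two token lists."""
    previous = list(range(len(b) + 1))
    for i, token_a in enumerate(a, start=1):
        current = [i]
        for j, token_b in enumerate(b, start=1):
            cost = 0 if token_a == token_b else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]


def post_edit_ratio(raw_mt: str, post_edited: str) -> float:
    """Share of the final translation's words that had to be changed."""
    mt_tokens = raw_mt.split()
    pe_tokens = post_edited.split()
    return edit_distance(mt_tokens, pe_tokens) / max(len(pe_tokens), 1)


print(post_edit_ratio("The engine produce raw output fastly.",
                      "The engine produces raw output quickly."))  # prints 0.333...
```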

There are too many assumptions and exceptions for this to apply to every use case

Examples abound to illustrate that not all of these steps are currently needed in every use case. In some cases, raw MT may be enough and post-editing is unnecessary, or a generic NMT engine is enough and customization is unnecessary. 

In the future, we can’t assume that humans will remain the best choice for all of these tasks. Just as singularity in translation may replace some human translation steps, simultaneous progress toward singularity in other areas may lead to automatic displacement of other human involvement. For example, additional data cleaning steps are very likely to be automated.

Now and in the future, additional value may be offered by humans for more specific use cases. Professionals will also need to provide additional services not listed here to add value through uniquely human logic, creativity, imagination, empathy, sensitivity, and discernment of truth.

The translation industry will have many jobs after singularity is achieved, but those jobs are likely to look different. To prepare for those jobs, we should be strengthening not only those skills that make us uniquely human but also our ability to work synergistically with technology. Such preparation will already benefit many current MT projects where humans may add more value beyond simple post-editing.

Adam Wooten, co-founder of AccuLing, has worked as a court interpreter, in-house translator, translation project manager, interpreter coordinator, translation technology instructor, VP of sales and marketing, country general manager, and director of automated solutions.
