Perspectives
Data: Of Course!
MT: Useful or Risky?
Translators: Here to Stay!
Dr. Alan K. Melby
Dr. Alan Melby holds a PhD in computational linguistics and is an ATA-certified French-to-English translator. He is the co-author with Terry Warner of The Possibility of Language. He is currently vice-president of the International Federation of Translators (FIT) and president of LTAC Global.
Dr. Christopher Kurz
Dr. Christopher Kurz is Head of Translation Management at ENERCON and has worked as a translator, project manager, and translation manager at Ferrari, BMW, Deutsche Bahn, and SDL International (now RWS). Together with Jean-Marc Dalla-Zuanna he is the co-editor of Translation Quality in the Age of Digital Transformation. Since 2013, he has also been a part-time lecturer in translation and standardization at Anhalt University of Applied Sciences.
Introduction
In the July/August issue of MultiLingual magazine, Jaap van der Meer presented a vision of the future in which the current “mixed economic model” — a combination of raw machine translation delivered at nearly zero cost and paid translation delivered by human translation service providers — becomes unsustainable. It is replaced by a radically different model in which all translation is “zero cost” and the need for professional human translators is eliminated.
As promised in our Letter to the Editor that was published on multilingual.com, we present an alternative view of the translation industry. We believe that the current mixed economic model is not only sustainable but beneficial to society. Consequently, we believe that there is definitely a future for professional human translators. We encourage young people with language skills and cultural knowledge to choose a specialization and pursue a career in translation or other professions related to the language industry.
The “Translation Economics of the 2020s” article will hereinafter be referred to as the “Reconfiguration” article, after its phrase “industry reconfiguration.” The final sentence of the paragraph containing that phrase encourages “healthy debate.” That is what we hope to spark by presenting an alternative vision of the future. A healthy debate takes place within the marketplace of ideas, where there is no room for personal criticism and where proponents of opposing views can sit down together afterwards for a respectful chat.
The Reconfiguration article is full of claims. Space limitations will only permit us to address a few of them at this time with others to be addressed later. We are also prepared to eat our hats if future reality invalidates our position.
Let’s start with the claim in Section 1 of the Reconfiguration article that the Singularity will arrive very soon, approximately in 2030. There is no explicit reference, but we assume this is the Singularity predicted by Ray Kurzweil. Of course, if Kurzweil’s Singularity does arrive, there will no longer be a need for human translators. Neither will there be a need for doctors or physicists or any other professionals in our current society, because machines will surpass humans in every intellectual task. Our lives would be so thoroughly disrupted that, obviously, translators would also no longer be needed. Therefore, we will limit ourselves to a pre-Singularity world (the world we live in and the future, until the Singularity arrives and changes everything) as we discuss the following three topics regarding the Reconfiguration article:
- the nature of data in the translation industry;
- the relationship between artificial and real intelligence; and
- the nature of the translation industry, itself.
Note: Alan Melby is the primary author of Sections 1 and 2, and Christopher Kurz is the primary author of Section 3. However, they collaborated on the entire article.
Section 1: Data in the translation industry
A central claim of the Reconfiguration article is that data will be crucial to fundamental changes in the translation industry.
We, of course, agree that data is extremely important. As Jochen Hummel, founder of Trados and a well-known industry figure, pointed out at SlatorCon on September 8, 2021, the same bilingual data used in translation-memory systems can also be used to train data-driven machine-translation systems, which today primarily use a neural machine translation (NMT) approach.
As has been pointed out by multiple colleagues, NMT can be viewed as a natural evolution of translation memory (TM) lookup at the segment level, followed by a sub-segment-level lookup, and finally processing at the word or even character level. Granted, NMT is enormously more computationally intensive than a TM lookup. But the same translation memory data, represented in the translation memory exchange (TMX) format, can either be used by a human with a computer-aided translation (CAT) tool or by a software engineer in an NMT training session.
Regardless of how a TMX file is used, it consists of the same information: a logically unordered set of translation units, each consisting of a segment of source text and a corresponding segment of target text. This allows for an examination of what is called co-text in linguistics: the words immediately preceding and following a given word. Co-text is only one aspect of context. Other types of context that are relevant to translation are chron-text, rel-text, bi-text, and non-text.
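To make this structure concrete, here is a minimal sketch in Python (not part of either article) that extracts translation units from a TMX file; the file name and the language codes “en” and “de” are purely illustrative assumptions.

```python
# Minimal sketch: extract source/target segment pairs from a TMX file.
# "sample.tmx", "en", and "de" are illustrative assumptions.
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx(path, src_lang="en", tgt_lang="de"):
    """Return a list of (source_segment, target_segment) pairs."""
    pairs = []
    root = ET.parse(path).getroot()
    for tu in root.iter("tu"):  # each <tu> element is one translation unit
        segs = {}
        for tuv in tu.findall("tuv"):
            # TMX marks the language of each variant with xml:lang (or lang in older files)
            lang = (tuv.get(XML_LANG) or tuv.get("lang") or "").split("-")[0].lower()
            seg = tuv.find("seg")
            if seg is not None:
                segs[lang] = "".join(seg.itertext())
        if src_lang in segs and tgt_lang in segs:
            pairs.append((segs[src_lang], segs[tgt_lang]))
    return pairs

if __name__ == "__main__":
    for src, tgt in read_tmx("sample.tmx"):
        print(src, "->", tgt)
```

The same list of pairs could feed a CAT tool’s translation memory or serve as raw material for an NMT training run, which is precisely why the two uses of the data are so closely related.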
It is surprising how much can be done by applying machine-learning algorithms to the bilingual data in a TMX file, but there is more to human language than segment-level co-text and bi-text. For example, in all but a few texts, the order of the sentences is relevant. In linguistics and translation studies, this real-life fact about language is part of what is called cohesion. This introduces the need for a bi-text corpus, which preserves the integrity of the source and target texts. Compare that to a translation-memory database, which destroys cohesion because it deletes duplicate translation units and indexes the rest. The need for document-level co-text is better served by representing bilingual data in the XML Localization Interchange File Format (XLIFF) rather than in TMX.
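A small sketch, again in Python and with invented sentences, illustrates the point: building a translation-memory index from an ordered bi-text collapses duplicate translation units and discards sentence order, so document-level cohesion is lost.

```python
# Sketch: an ordered bi-text vs. a translation-memory index built from it.
# The sentences are invented for illustration.
bitext = [
    ("Press the start button.", "Drücken Sie die Starttaste."),
    ("The machine begins to run.", "Die Maschine läuft an."),
    ("Press the start button.", "Drücken Sie die Starttaste."),  # repeated segment
    ("It stops automatically.", "Sie stoppt automatisch."),
]

# A translation memory is essentially an index keyed by the source segment:
# duplicates collapse and the original document order is gone.
tm = {src: tgt for src, tgt in bitext}

print(len(bitext))  # 4 ordered translation units (document-level co-text preserved)
print(len(tm))      # 3 unique entries (cohesion between sentences is lost)
print(tm["Press the start button."])
```

The bi-text on the left can always be reduced to the index on the right, but the index can never be expanded back into the original document.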
The difference between segment-level bilingual data (TMX) and document-level bilingual data (XLIFF) is substantial. It is not clear which data type is the focus of the Reconfiguration article. One observation is that a bi-text corpus can be converted to a translation-memory database, but not the other way around. The Reconfiguration article emphasizes the relevance of the vast amount of data that has been compiled over the past several decades and is available for training MT systems.
The article also suggests that copyright issues do not apply because the segments needed for training are short (under eleven words). Suppose that longer stretches of co-text are eventually needed for MT training. What would be the implications for copyright infringement then?
A data-related claim in the Reconfiguration article is that Google and Microsoft are working on massively multilingual MT systems “that can tackle any language pair in the world.” We doubt that even these giant companies would validate this claim. How much data is currently available beyond the 100 or so most economically significant languages? Many of the remaining world languages (3,000 to 7,000 of them, depending on where the boundaries between dialects and languages are drawn) are not even digitized. Some speaker communities have no desire to develop a writing system. Many of those that currently have no written representation, but would like to develop one and make contact with the internet, speak what is considered an Indigenous language. We hope there will be much discussion of these languages during the upcoming Decade of Indigenous Languages (see unesco.org and translationcommons.org for details on this event).
Now, consider that advances in machine translation require more than a transition from segment-level to document-level co-text.
As pointed out by Arle Lommel in his recent keynote address at the 2021 MT Summit, one thing currently missing in machine translation is metadata. We hope that Lommel’s vision of metadata will be published and widely discussed. It is clearly relevant to the translation industry. Exactly what kind of metadata will be needed for the next advances in machine translation, prior to the arrival of the Singularity? In addition, how will metadata be inserted into a bi-text corpus?
The Reconfiguration article seems to acknowledge the need for more than co-text, where a distinction is made between two data streams in the “modern translation pipeline”: The bilingual data found, for example, in a TMX file, and metadata.
But it also confusingly labels metadata as “translation data.” We reject this label for metadata. However, we draw attention to the following types of metadata that are explicitly listed in the article (a minimal sketch of how such metadata might be recorded follows the list):
- the machine-translation engine that was used;
- the human editor who examined and corrected the raw machine translation;
- the throughput time (presumably the time required to post-edit);
- the editing distance between the raw output of the engine and the result of post-editing; and
- the quality score assigned by a human evaluator.
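As a purely illustrative sketch (Python, with invented field names and values; neither article prescribes such a record), this is one way the listed metadata could be attached to a single translation unit, including a plain Levenshtein computation for the editing distance:

```python
# Sketch: one possible record for post-editing metadata attached to a translation unit.
# Field names and example values are invented for illustration only.
from dataclasses import dataclass

def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

@dataclass
class PostEditRecord:
    mt_engine: str           # which MT engine produced the raw output
    editor: str              # who post-edited the raw machine translation
    throughput_seconds: int  # time spent post-editing
    edit_distance: int       # distance between raw MT output and post-edited text
    quality_score: float     # score assigned by a human evaluator

raw_mt = "The machine begins to running."
post_edited = "The machine begins to run."

record = PostEditRecord(
    mt_engine="generic-nmt-engine",
    editor="translator-042",
    throughput_seconds=35,
    edit_distance=levenshtein(raw_mt, post_edited),
    quality_score=0.9,
)
print(record)
```

Every field in such a record presupposes human involvement at some stage, which is the point made in the next paragraph.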
One obvious conclusion from the inclusion of these types of metadata is that translation cannot always be “zero cost.” Post-editing, for example, when needed for a given use case, is expensive, but worth it. This brings us to a crucial pair of competing claims:
- We claim that in a pre-Singularity world, there are, and will continue to be, many use cases where human translation or post-editing is used and needed, and translation cannot be zero cost.
- The Reconfiguration article claims that a mixed model is unsustainable, and will soon give way to an economic model in which all translation is zero cost.
These two claims are clearly incompatible. Which better describes the future? We will return to the key question of specific use cases later. For now, we ask the reader to consider the acronym FAUT, found in the Modern Translation Pipeline diagram that appears in Section 4. It is discussed in the first paragraph of Section 6. FAUT is expanded in the diagram as Fully Automatic Useful Translation. We invite the reader to consider the pivotal question regarding this acronym: Useful for what purpose? Could there be use cases where FAUT should instead be expanded as Fully Automatic Useless Translation?
Both humans and machines can produce useless translations, but does that necessarily imply that humans and machines use the same process when translating? Definitely not. No logician would tolerate such a conclusion. This brings us to the second topic listed at the beginning of this rebuttal article.
Section 2: Artificial vs. real intelligence
The Reconfiguration article claims that computers will become much better at “understanding” context in a document. It also suggests that this is “just around the corner,” presumably well before the arrival of the Singularity.
This claim is reminiscent of the series of claims that human translators will be completely replaced by computers within five years. Certainly, raw machine-translation output has improved with each major paradigm shift, from rule-based to phrase-based statistical MT (typically referred to as SMT), and from SMT to NMT. But are we getting closer to machines understanding language? The Reconfiguration article claims that we are very close.
According to the Reconfiguration article, “MT is a simple sum of algorithms and data.” We fully agree. However, there are some unintended consequences of this aspect of NMT. The MT algorithms have no built-in conceptual layer that would allow them to understand the data they are trained on in the way humans understand the language they learn from. One principle on which all modern language-learning theorists agree is that input must be comprehensible to the learner. They differ on the details of what constitutes comprehensible input, but it does not take a degree in language acquisition to observe that someone can listen to a new foreign language, distant from their native language (with few cognates), for hundreds of hours without getting any closer to understanding that language.
One consequence for NMT is that a system can be trained on massive amounts of data and produce impressive results without understanding language. The system simply manipulates words mechanically. We claim that meaning is not in words, but rather in the mind of someone who produces or interprets language. A detailed discussion of this claim would take us deep into the philosophy of language, and this article is focused on verifiable facts and the opinions of recognized experts. For example, Peter Szolovits, a distinguished professor at MIT, has observed that, even though the results of mechanical manipulation of words through machine-learning techniques are impressive, they have not brought us closer to an understanding of how humans process language.
Another aspect of real intelligence is the ability to discuss with other intelligent entities why a decision was made and to apply principles, not just more data. Hans-Christian Boos, a respected expert on artificial intelligence in Germany, has developed a pyramid that places data at the bottom and wisdom at the peak, thus adding to the contrast between real and artificial intelligence (figure 1).
The Reconfiguration article is focused on machines eliminating professional human translators. We believe that a better approach is one promoted by Mike Dillinger in his 2016 keynote address at the Association for Machine Translation in the Americas (AMTA) conference. In addition to identifying use cases where raw machine translation is appropriate, we should work harder to help humans be more productive in use cases where raw machine translation is not appropriate.
We have thus returned to use cases, as promised in Section 1. We now reveal a huge, unstated assumption in the Reconfiguration article. That assumption is that raw machine translation is equally appropriate in all use cases. There is nothing at all in the Reconfiguration article about use cases or translation specifications, and nothing about risk analysis. Who would want to take a medication according to instructions that consist of raw machine translation? Or be judged in court, according to the raw machine translation of a piece of legislation? In a recent article, Donald Barabé identifies a class of texts called prejudicial texts, where a professional human translator is needed. For prejudicial texts, errors in the translation can cause damage, injury, or harm.
One major use case with multiple stages of human involvement is all that is needed to counter the claim that the “mixed model” will disappear. The final section of this rebuttal is a detailed description of use cases in business, where the use of raw MT output can be problematic. We anticipate discussions of the strengths and limitations of raw machine translation in other contexts. Such discussions render the central claim of the Reconfiguration article, the disappearance of the mixed model, unconvincing. To the contrary, the mixed model is needed. One way of visualizing the mixed model is to use a pie chart. The pie, as visualized in figure 2, is all the content that people would like to see translated. One slice shows use cases where raw MT output (zero-cost translation) is suitable. Another slice is where human translation or post-edited machine translation is needed and used. The third slice is where raw MT output is not suitable, but there is either a lack of funding or a lack of human resources. As the pie grows, there is more and more work for humans, even if the width of the funded translation slice becomes narrower. In addition, human intelligence is needed to develop specifications and decide when to use fully automatic, zero-cost translation and when to involve humans.
Figure 1: The Hans-Christian Boos pyramid, a model of machine-learning processes, places data on the bottom and wisdom on the top.
Section 3: The nature of the translation industry, specifications, quality, and translation services
All of the above leads us to discussing the nature of the translation industry, the nature of professional translations, translation service providers, and the suitability of raw MT output in a business context. “Business,” in this article, is regarded as business-to-business (B2B) and business-to-consumer/client/customer (B2C).
Business source-text documents in a professional context have a communicative function. These documents are meant to “accomplish” something, whether they are marketing material, manuals, legal documents, or any other kind of communication between companies, or between companies and (potential) customers. Companies’ documents carry the distilled image of the corporate identity: the corporate language.
The thing that unites the aforementioned document types is their purpose; there is a reason why they were written and created. This means that the translations also have to serve a purpose in the target language and target culture. The translation process also follows a purpose, a skopos, to use the Greek word.
All business translation begins with determining the communicative purpose (skopos). The key, however, to defining and verbalizing any skopos in this kind of scenario is specifications. The topic of specifications leads directly to the ISO 17100 standard for translation services and its applied principle of quality that can be found in the ISO quality-management standard’s definition: “Quality: degree to which a set of inherent characteristics […] of an object […] fulfills requirements.” Therefore, professional translation relates to the fulfillment of specifications and translation quality, and, hence, to translation management.
Justa Holz-Mänttäri described the act of specification-based translation as a translational action. Now, more than 35 years later, this principle still applies: Translators in today’s business world act in a highly specialized, predefined, and prescribed way of doing things (the translation process) to produce high-quality translation output (in the sense of the ISO 9000 series of standards and cross-industry quality-management principles). Therefore, the translation output needs to tick all the boxes of a production specification sheet before it leaves the production phase and is delivered to the customer.
In the end, there is nothing that sets a translation product apart from a cell-phone product or a tablet product, in terms of fulfilling the production phase’s requirements. Creativity that ignores agreed-upon requirements is unwanted in the majority of today’s professional translation industry. The underlying concept of this theory is that humans can check their own behavior in the translation process and verify their translations against specifications. Can machine translation systems do the same? We doubt it.
There is no question that zero-cost translations can be used for translating some texts. Dozens of MT systems are freely available on the Internet. And yes, some are success stories. Some fields of application for zero-cost translations are source texts that are only used for triage or gisting, low-risk source texts with a very limited information lifecycle that become obsolete very quickly, or cases where neither the time nor the budget is available for the paid, human translations of these low-risk texts. This means that zero-risk source texts can be suitable for zero-cost translation.
However, would these texts and use cases match the criteria of the aforementioned business source texts, used in professional inter-company or company-to-customer communication scenarios? Hardly. What these kinds of translations also have in common is that there are no specifications to be met and no expectations (at least, not explicitly) with regard to the translation output. Analogous to zero-cost translations, we could call these (MT) translations zero-specification translations. Because they don’t have a translation purpose (skopos), they might qualify to be processed into raw MT output and might even be used in intra-company communications. Otherwise, they are likely to be found outside of the business sector.
Does this scenario really have a significant impact on the $24 billion professional translation industry, the translation agencies, and the freelance translators? Or is zero-specification, zero-cost translation impacting the translation industry to a far lesser extent than might be presumed? We think it is far less, for example, than the impact of machine translation with human post-editing.
“Machine learning systems can learn biases based on assumptions that are built into the algorithms used to train them, but the cause is mainly rooted in the data. Examples of biased AI are fairly common these days.”
Apart from that, a common opinion in the more critical part of the translation industry is that NMT output might read well, but it does not really produce translations with fewer semantic/pragmatic errors than statistical MT. The greater fluency of NMT output tends to dull the reader’s attention to semantic/pragmatic errors; it acts like a smoke screen that keeps even readers who are proficient in the source and target languages from paying critical attention to translation errors. This is a widespread opinion among translation experts worldwide who deal with NMT every day in real translation jobs, not under laboratory conditions in NMT research.
Despite the limited applicability, explained above, of zero-specification, zero-cost translation to possible intra-company or private scenarios, there are several scenarios that underline the importance of human involvement in translation. Among the many positive aspects of human involvement, qualified, well-trained, and watchful human translators recognize source-text errors (and yes, source-text errors do occur), because human translators set the source text and its co-text into the context of the experience and knowledge they have acquired over the years. A simplistic but appropriate way of expressing this phenomenon is that well-trained, experienced translators can “smell” source-text errors. MT, however, will very likely reproduce a source-text error in the target text, because a machine simply cannot check a source text for logical errors.
Human translators will also stick to the same expressions and turns of phrase when translating a text. They can carry the style and expression requirements of the client’s source texts (something that goes far beyond terminology) with reliable consistency from one sentence in the source text to the corresponding sentence in the target text, and on to the next page, the next book, or even the next fragment in a content management system (CMS). Well-trained and experienced translators can do this over the course of days, months, and years, exactly as the client wants their target text (the translation) to look. Specifications again come into play here. This aspect of human intelligence becomes especially important when translating the small content snippets stored in content management systems.
Section 6 of the Reconfiguration article refers to data as “the new oil.” We strongly oppose this analogy. It is misleading because oil is a finite resource, whereas data is an asset that does not dwindle; on the contrary, the amount of data increases exponentially every day.
In addition, if we talk about “data as the new oil,” we have to raise the question of whether it really makes sense to train an MT with third-party data. Imagine you are founding a new car company and want to use MT for your translation process. Then, imagine that you can use training data (e.g., translation memories) from other automobile companies to train your in-house MT. Wouldn’t it be more than likely that, in the end, “your” MT speaks the language (we’re not only referring to terminology) of the other car companies, rather than that of your own car company? If you use other companies’ corporate language to train your language-processing tool (for example, an in-house MT), will you really receive a machine-translated target text that matches all of your own corporate-language requirements in the target language? We doubt it.
In the translation context, we think it makes much more sense for companies to set up a company-intranet-based MT system, trained with their own checked and revised translation-memory data, if they want to offer raw machine-translation output to colleagues for zero-specification translation requests.
Another problem we see with zero-cost translation is the EU’s General Data Protection Regulation (GDPR), which sets strict rules on storing and processing personal data. We think that the unreflective use of zero-cost translation can violate the GDPR, and hefty fines may loom when this EU law is infringed. TikTok, for example, was fined $750,000 in July 2021 for not having its full privacy terms and conditions available in Dutch.
The other problem we see with the use of zero-cost translation is the protection of a company’s intellectual property. Companies in today’s business world are in fierce competition over market share, savings, and profits. It is more than questionable whether it is a good idea to upload your company’s R&D knowledge, financial contracts, or management data to a given internet platform that offers zero-cost MT services. Your whole business success, and millions of euros or dollars, may depend not only on a semantically and pragmatically correct target text, but also on the strict confidentiality of your intellectual property (IP). Uploading such material creates an enormous risk, because you may be giving your IP out of your hands. The outcome can be disastrous, as when company data is leaked onto the world wide web, which happened in Norway in 2017 (slator.com/translate-com-exposes-highly-sensitive-information-massive-privacy-breach).
The final straw for zero-cost translation in a business context is the question of liability. If you produce business communications, you must always consider that you are responsible not only for the products you offer to your clients, but also for the accompanying documentation, whatever format it may be in.
Figure 2: How much work can machine translations truly handle?
Let’s imagine a legal dispute about financial or reputational losses or, even worse, injuries or death, because of semantic/pragmatic errors or mistranslations. There are many real-life examples of business-communication mistranslations with severe consequences. In the course of the lawsuit, it turns out that the financial report, the press release, or the manual was translated with zero-cost machine translation, and that nobody performed a post-editing of the translation. Hence, the raw MT output was used, unchecked and unedited, for business communication.
In Germany, and presumably in many other countries too, there is a law called the Produkthaftungsgesetz (ProdHaftG, Product Liability Act) that makes every manufacturer responsible for the safety of their product. We doubt that raw MT output produced with zero-cost translation would stand up to the obligations of this law in the case of a lawsuit, simply because the duty-of-care principle was neglected.
The conclusion is that the use of zero-cost translation in a business context should be weighed carefully against liability and legal consequences, such as compensation for damages, and its suitability should at least be questioned.
Modern translation scenarios in the professional translation industry center around ISO 17100, ISO 9001, specifications, terminology, quality management, liability, and translation-service provision. It is beyond any doubt that machine translation will fundamentally reshape the translation industry in the current and upcoming decades. However, we believe that, in the business context, in-house or TSP-hosted machine translation, directly followed by human post-editing, is far more likely to occur than zero-cost machine translation on a random MT internet website. Professional documentation and professional source texts serve a purpose in the source language. The translation skopos defines the purpose that the translated target texts must serve in the target language and target culture. This purpose underlies the specifications.
Current systems for machine translation are based on the mechanical manipulation of data. They do not understand the purpose of a translation, and they do not respond to questions about why they translated the way they did. Unless there is “real” artificial intelligence, human translators will play a significant role in every professional translation process for the foreseeable future.
The bottom line: We believe that we have presented solid evidence that, although data is very important to the translation industry, the mixed model described in the Reconfiguration article is here to stay, along with professional human translators, until we reach the Singularity and everything changes in every industry and profession, not just ours.
For a list of references used to support this article, please visit ttt.org/translation-economics-and-rebuttal