Terminology Glosses: Neural/neuro-

On August 3, 2017, Facebook announced it had completed its transition to neural machine translation (NMT). Less than a year earlier, at the end of September 2016, Google had announced that Google Translate was being improved with NMT. Then, in November 2016, Microsoft published a blog post explaining that the company was “now powering all speech translation through state-of-the-art neural networks” and was also making this work available to all developers and users through an API. All three announcements underlined a significant new step toward full automation in translation and localization.

I remember coming across the adjective neural and the prefix neuro- in high school biology classes a long time ago. This is probably how its definition looked back then: “Relating to a nerve or the nervous system,” according to Collins Dictionary, and this is also how it would have fit into our ideal termbase. Had we needed to choose a category from our set of metadata, it could have been “nervous system,” “biology” or, more recently, “neurobiology,” “neurosciences,” or something along those lines.

At that time, information technology was taking its first automated baby steps and translation was entirely manual, unimaginably far from being one human action within a translation management system. But let us leap forward into the present. While information technology is delving into artificial intelligence, translation has dived into machine translation: rule-based at first, then example-based, then statistical (most notably phrase-based) and now neural. This is why neural machine translation and its acronym NMT are the terms we are adding to our ideal termbase today.

For clarification and completeness, we definitely need a definition for this new term, and this is how it is defined, rather vaguely, in the paper presenting Google’s NMT system: “An end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems.” But what is neural machine translation really about? Here is a more detailed proposed definition, compiled from a miscellaneous collection of internet documents: “A machine translation approach relying on artificial networks that link digital ‘neurons’ in various layers, each one feeding its output to the next layer. This structure is able to utilize each word and its surrounding context at a higher level of abstraction than the traditional methods, and then to look for the closest match in the target language.” In the article “Facebook finishes its move to neural machine translation,” John Mannes explains that with NMT, “The interpretation of a sentence becomes part of a multi-dimensional vector representation…”
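To make the “layers” of digital neurons and the “multi-dimensional vector representation” mentioned in these definitions a little more concrete, here is a minimal, purely illustrative Python sketch. It is not Google’s, Facebook’s or Microsoft’s actual system; the tiny vocabulary, the eight-dimensional vectors and the random weights are assumptions chosen only for brevity, whereas a real NMT model learns its weights from millions of sentence pairs.

```python
# A toy sketch of the ideas in the definitions above: words become vectors
# ("embeddings"), layers of artificial "neurons" each feed their output to
# the next layer, and the whole sentence ends up as one multi-dimensional
# vector. Every name and number here is an illustrative assumption, not a
# real NMT architecture.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8-dimensional embeddings for a tiny vocabulary.
vocab = ["the", "cat", "sat", "on", "mat"]
embeddings = {word: rng.normal(size=8) for word in vocab}

def layer(x, weights, bias):
    """One layer of 'neurons': a weighted sum followed by a non-linearity."""
    return np.tanh(weights @ x + bias)

# Two stacked layers with random (untrained) weights.
W1, b1 = rng.normal(size=(8, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

def encode(sentence):
    """Map a sentence to a single multi-dimensional vector representation."""
    word_vectors = [layer(layer(embeddings[w], W1, b1), W2, b2)
                    for w in sentence.split()]
    # Averaging is a crude stand-in for the context handling a real
    # NMT encoder performs; a trained system would do far better.
    return np.mean(word_vectors, axis=0)

print(encode("the cat sat on the mat"))  # an 8-dimensional sentence vector
```

Even in this crude form, the key shift is visible: sentences are no longer matched as strings or phrases but represented as vectors, which is what makes the “higher level of abstraction” in the proposed definition possible.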

Apart from the scarcity of sufficiently granular definitions, what catches the eye about this term is the semantic shift of the word neuron and of its adjective neural, which has now become an information technology modifier, with seemingly little or no connection to the high school biology classes we remembered above. But let us dig deeper and look for more information on neural networks or models of neural computation. In particular, Wikipedia describes the latter as “attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof.”

From a terminology management standpoint, neural is another remarkable example of how easily terms are borrowed across disciplinary borders. Having an entry for neural in our termbase would require a whole set of definitions, because the word has become polysemic. However, there is no polysemy if we enter the whole term, neural machine translation. On the learning side, for those who want to learn more about NMT and try it out, there is an open-source NMT system available: OpenNMT. It comes complete with a series of tutorials.

In “The Translation Industry in 2022,” Isabella Massardo and Jaap van der Meer state that Google Translate already translates a number of words per day that “not even all human translators combined could compete with.” They also predict that technical translations will be performed entirely by machines by 2029, relegating translators to the role of post-editors. There will, however, be sectors where machine translation will still fall short: literature, poetry, ironic or humorous texts, to name just a few. An April 2017 post from Johnson, The Economist’s language blog, claims that “neural translation systems aren’t ready to replace humans any time soon.” Machines are not yet ready to understand a literary author’s intentions and culture, whereas in technical, financial and legal documents even small mistakes are not acceptable.

I devoted a little time to checking the prefix neuro-, originally a synonym of neural and likewise defined as “relating to nerves or the nervous system,” and tried some combinations. By far the most interesting are neuroimpressions and neurotranslation. In both, the prefix maintains its original meaning. According to “Neurotranslations: Interpreting the Human Brain’s Attention System,” neurotranslations “aim to provide fresh insights into the brain’s attention system, interpreting the data from the human neuroscience of attention. Their primary purpose is to explore the general dynamics of how the brain pays attention to information and acts on it, based on its contents.” Both are valid candidates for our ideal termbase. Our choice of categories will remain linked to biology and the neurosciences, confirming that, for the moment, only one of the two synonyms, neural, is specializing in new domains.