Unpacking the Black Box
State of the Machine Translation Union
John Tinsley
John Tinsley is co-managing director of Iconic Translation Machines, which joined the RWS Group in 2020. He has worked with machine translation over the last 16 years in both academia and industry. In 2009, he received a PhD in Computing from Dublin City University, after which he co-founded Iconic. He now leads Iconic as a dedicated technology division of RWS Group.
It’s the start of 2021 — a time to look back and a time to look ahead. This time of year, we all love a good list. Whether it’s a retrospective on things that have happened in the previous 12 months, or predictions about what’s going to happen in the coming year, they are a staple of articles far and wide, and this one will be no exception!
But rather than doing one or the other, we are going to do a little bit of both. We will take a look back at three key developments in the field of machine translation (MT) in 2020, and look ahead to what we think will happen in 2021.
At a high level, MT research and development continued to advance at pace in 2020, if not quite at the pace of previous years after the initial flurry of activity around neural MT. In addition to the academic mainstays, research has continued to be prominent at big tech companies like Google, Facebook, and Microsoft, with Apple recently joining the fray. Language service providers have also been investing in machine translation technology to bring it in-house, with some notable M&A activity over the past 12 months.
What has quite noticeably changed, based on discussions with buyers, service providers, and technology providers, is the attitude toward MT and the rate of adoption. It’s more positive — and it’s growing fast.
The conversation around MT has evolved from questioning whether it’s good or not, to accepting that it is indeed good. In turn, that has led to more in-depth discussions on quantifying exactly how good it is, and where it still has room for improvement. This new line of questioning allows potential adopters to think more concretely about where MT can fit most effectively into their many and complex workflows.
This leads nicely into our list and the headline topic that was not only a big feature of the last year, but will continue to be prominent into the next: quality and evaluation.
1. Quality and evaluation
“Human parity” — two words that raise the ire of translators and developers! But claims of MT reaching so-called human parity led a consortium of researchers across Europe and the United States to assemble “A Set of Recommendations for Assessing Human-Machine Parity in Language Translation,” published in the Journal of Artificial Intelligence Research. This was a welcome development.
Despite the widely accepted fact that automatic metrics for MT evaluation are not fit for purpose, they remain in common use in the absence of an effective alternative. Unbabel is the latest party to attempt to address this challenge with the release of COMET, an open-source framework for MT evaluation. I’m curious to see if it catches on.
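For the curious, here is a minimal sketch of what scoring translations with COMET can look like, assuming the open-source unbabel-comet Python package. The model name and the exact shape of the output have varied between releases, so treat the details as illustrative rather than definitive.

```python
# Minimal sketch of scoring MT output with COMET. Assumes
# `pip install unbabel-comet`; the model name and exact API have
# changed across releases, so check the project docs.
from comet import download_model, load_from_checkpoint

model_path = download_model("wmt20-comet-da")  # pretrained evaluation model
model = load_from_checkpoint(model_path)

# COMET scores triples of source, MT output, and human reference.
data = [{
    "src": "Tá an aimsir go maith inniu.",
    "mt":  "The weather is good today.",
    "ref": "The weather is fine today.",
}]

scores = model.predict(data, batch_size=8, gpus=0)
print(scores)  # segment-level scores plus a system-level average
```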
A big trend to watch out for this year and beyond is not just quality assessment, but quality prediction. That is to say, methods and tools that estimate the quality of MT output in real time, without having to compare it to an existing translation. Predicting quality is easy… but predicting quality reliably is easier said than done. This topic is being heavily researched in academia at present, with lots of prototypes floating around. We’ve yet to see it fully productized, but that could happen sooner rather than later.
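To make the distinction concrete, here is a deliberately simplified toy sketch of quality prediction: scoring MT output from the source and translation alone, with no reference in sight. Real quality estimation systems use neural models trained on large sets of human judgments; the crude features and tiny training set below are purely illustrative assumptions.

```python
# A toy sketch of quality prediction (quality estimation): score MT
# output using only the source and the translation -- no reference.
# The features and data are illustrative assumptions; real QE systems
# learn from large collections of human quality judgments.
from sklearn.linear_model import Ridge

def features(src: str, mt: str) -> list:
    """Crude surface features standing in for learned representations."""
    length_ratio = len(mt.split()) / max(len(src.split()), 1)
    return [length_ratio, len(mt.split()), mt.count("UNK")]

# Hypothetical training data: (source, MT output, human score in [0, 1]).
train = [
    ("Tá an aimsir go maith.", "The weather is good.", 0.9),
    ("Tá an aimsir go maith.", "The the weather weather is.", 0.2),
]
X = [features(src, mt) for src, mt, _ in train]
y = [score for _, _, score in train]

model = Ridge().fit(X, y)

# Estimate the quality of new output with no reference translation.
print(model.predict([features("Dia duit, a chara.", "Hello, my friend.")]))
```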
2. Doing more with less
MT research and development follows supply and demand trends when it comes to selecting which languages developers focus their efforts on. A by-product of this is that the language combinations left until later tend to have less data available for training models. The diplomatic term for these is low-resource languages. Addressing this data shortage is a trend that has consistently gained popularity over the last few years.
Some popular approaches include data synthesis, often called back-translation, which essentially involves the creation of MT training data using… MT (it’s not as crazy as it sounds). Another approach is known as unsupervised MT, which involves training MT models with no translated data at all, relying instead on monolingual text in each language and iterating from there. Unsupervised learning is quite a common approach in general machine learning, so it will be interesting to see how it evolves in the context of machine translation.
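As a rough illustration of the data synthesis idea, the sketch below assumes a hypothetical translate() function backed by an existing Irish-to-English model, and uses it to manufacture synthetic English-Irish training pairs from monolingual Irish text.

```python
# A sketch of data synthesis via back-translation for a hypothetical
# English-to-Irish system. translate() stands in for any existing
# Irish-to-English model; it is an assumption, not a real API.
def back_translate(irish_sentences, translate):
    """Turn monolingual Irish text into synthetic English-Irish pairs."""
    pairs = []
    for ga in irish_sentences:
        # Machine-translate the Irish sentence into English with the
        # reverse-direction model.
        en = translate(ga, src="ga", tgt="en")
        # Synthetic source side, human-written target side: the noise
        # lands on the input, which training tolerates surprisingly well.
        pairs.append((en, ga))
    return pairs
```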
Last but not least, multilingual MT refers to a single model that can translate between multiple combinations of languages. Think of it like pivot translation — we can already translate between, say, Irish and Hungarian by translating from Irish to English first, and then from English to Hungarian. You can consider multilingual MT a more elegant version of pivoting, whereby the relationships between all the languages are learned jointly, rather than translation being a two-pass process.
It’s an exciting approach because, in theory, we’re not limited by how many languages we can include. It is of particular interest to the likes of Google, which specializes in one-size-fits-all solutions at scale and has published papers on the topic with some frequency.
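The contrast is easy to see in code. The sketch below shows the two-pass pivot process described above, again with a hypothetical translate() call per language pair; a multilingual model would replace both passes with one model that has learned the relationships between all the languages jointly.

```python
# Pivot translation as described above: two passes through English.
# translate() is again a hypothetical single-pair MT call, not a
# real API.
def pivot_translate(text, translate, src="ga", pivot="en", tgt="hu"):
    intermediate = translate(text, src=src, tgt=pivot)   # Irish -> English
    return translate(intermediate, src=pivot, tgt=tgt)   # English -> Hungarian
```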
3. Context is king
A familiar refrain from users of MT is to question why a certain word or phrase was translated correctly in one instance, but differently elsewhere. The answer is context. In most MT systems, each sentence is translated completely independently of every other sentence in the document, whether it comes before or after. As such, we are potentially missing key information that would allow us to produce more consistent translations.
A potential resolution to this problem is context-aware MT — a process by which more information from the wider text is taken into account when translating specific pieces. The potential here is intriguing.
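As a thought experiment, context-aware translation might look something like the sketch below, where a hypothetical translate_with_context() call receives the preceding sentences alongside the current one.

```python
# A sketch of context-aware translation: pass the preceding sentences
# along with the current one so the model can keep terminology and
# pronoun choices consistent. translate_with_context() is hypothetical.
def translate_document(sentences, translate_with_context, window=2):
    output = []
    for i, sentence in enumerate(sentences):
        # The previous `window` sentences provide document context; a
        # larger window means more information but a harder problem.
        context = sentences[max(0, i - window):i]
        output.append(translate_with_context(sentence, context))
    return output
```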
There are still some important questions to answer, such as how much context to take into account (the more you take, the more complex things become), and whether or not it is always helpful. It might ultimately depend on the use case. Nevertheless, this line of research is active and will grow in popularity over the coming months and years, so keep an eye on it.
The bottom line: machine translation is very much in a “watch this space” moment, so make sure to keep an eye on this column to see what’s making waves!