PROFILE

Olga Beregovaya
The limits of technology
In nearly three decades as a language service professional, Olga Beregovaya has established herself as a prominent thought leader on machine translation (MT) and artificial intelligence (AI).
Last March, she was among the linguists in MultiLingual’s “Women Driving the Language Industry” feature, and at the beginning of this year she was named vice president of Women in Localization. She was also recently appointed vice president of MT and AI at Smartling, so we caught up with her in January to hear her thoughts on the debate over human parity in MT.
Editor’s note: This interview has been edited for clarity and conciseness.
MultiLingual: Could you tell us a little bit about your work and career — that is, some of the background information leading up to where you are today?
Olga Beregovaya: My educational background is initially in structural linguistics and Germanic studies. I’ve been in the language technology industry for 25-plus years now, actually — predominantly working in the machine translation and natural language processing (NLP) space. And for 20 of those years I’ve been in senior leadership and executive roles — driving both execution and vision.
I’ve been on the client side, I’ve been in one of the super-agency language service providers (LSPs), and now I’m really excited to have joined the Smartling team and drive AI and MT strategy for Smartling product development — Smartling being one of the world’s leading enterprise translation platforms.
ML: You mentioned that you studied linguistics and Germanic studies — how did you get into MT?
OB: I didn’t necessarily start out studying translation and languages per se, but rather language structure — structural linguistics. That really helps structure and wire one’s brain for the NLP space, because you know how languages are built. My transition was very simple, because I had the structural knowledge, and then I needed to add computational knowledge to that. That was pretty much the trajectory.
ML: For this issue of MultiLingual, we’re focusing on human parity in MT. As one of the preeminent thought leaders on AI in this industry, could you start by telling me a little bit about what it means exactly when people talk about human parity?
OB: In the past, MT was really evaluated based on adequacy and fluency. And to date, the predominant ways of thinking about MT quality would be the likes of the Dynamic Quality Framework or the Multidimensional Quality Metrics, which go into very granular detail on grammar and semantics, basically slicing and dicing every sentence in every way possible.
Now, with the advent of not only neural MT but, even more so, generative AI, we really need to change the way we think about human parity and take into account the fact that translation is not a one-to-one exercise anymore. Given the way new multilingual generative models work, I think human parity measured in edit distance, statistical metrics, or translation error rate is just one of the dimensions. Even if the translator wants the target to be identical to the source, they can end up making more changes than are necessary for a monolingual speaker of the target language to actually make sense of the content, use it, and relate to it fluency-wise.
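Editor’s note: as a rough illustration of the edit-distance dimension Beregovaya mentions, the sketch below (an illustrative example, not part of the interview) computes a word-level Levenshtein distance between raw MT output and a post-edited reference, normalized by reference length, as a simple proxy for post-editing effort.

# Illustrative sketch: word-level Levenshtein edit distance between raw MT
# output and a post-edited reference, normalized by reference length.
# A score of 0.0 means the post-editor changed nothing.

def edit_distance(hyp_tokens, ref_tokens):
    """Classic dynamic-programming Levenshtein distance over tokens."""
    rows, cols = len(hyp_tokens) + 1, len(ref_tokens) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        dp[i][0] = i
    for j in range(cols):
        dp[0][j] = j
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if hyp_tokens[i - 1] == ref_tokens[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[-1][-1]

def normalized_edit_distance(mt_output: str, post_edited: str) -> float:
    """Edits per reference token."""
    hyp, ref = mt_output.split(), post_edited.split()
    return edit_distance(hyp, ref) / max(len(ref), 1)

if __name__ == "__main__":
    mt = "The contract will be signed by both parts"
    pe = "The contract will be signed by both parties"
    print(f"Normalized edit distance: {normalized_edit_distance(mt, pe):.2f}")

Translation error rate (TER), which Beregovaya names, builds on the same idea but also counts block shifts of words; implementations are available in toolkits such as sacrebleu.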
ML: Do you think that human parity is an achievable goal within our lifetime? Or do you think it’s something that’s still pretty far off?
OB: I think I would have given a different answer before the release of ChatGPT, because that was obviously the biggest splash in the AI and NLP communities in years. In terms of generative AI — in terms of generating text and natural language — I think that we’re extremely close to human parity, if not having already hit that mark.
Now, that’s generating — that’s not translating one-to-one. Even today, MT still chokes on homonyms. In general, just based on the way neural machine translation is designed, meaning can be completely lost when it comes to longer, syntactically and grammatically complex strings, like a string that’s dense with subjunctives, passive voice, and so forth. One still needs to be very mindful of the source when planning for machine translation.
You will still get different results when you’re working on more structured content versus user-generated content or transcreation copy. That’s yet to be addressed. But my prediction would be that including novel machine translation approaches, natural language generation approaches, and automated post-editing as part of the translation process will hopefully help address that.
I’m optimistic, but I’m not 100% confident that metaphors, irony, euphemisms, and other figures of speech will be addressed in our lifetime. At which point in time will we be able to confidently say this metaphor was captured or this euphemism was conveyed properly? That’s where I still put a big question mark.
ML: And how do you think translators and the industry as a whole will respond to those improvements in MT?
OB: I mean, as translators, there is only one way you can react when your job is threatened, right? With the current state of neural machine translation, evaluations still show that post-editors need to put significant effort into tweaking machine translation, especially as it applies to different language groups. What you would do for, say, Nordic languages and what you would do for Slavic languages would be completely different.
So, as it stands now, there is a significant percentage of strings, and a significant percentage of the text within those strings, that still requires post-editing. Now, obviously, neural machine translation is evolving. And once the issues I mentioned earlier have been addressed, translators will face a big question mark: “What do I do next?”
My answer is that the human in the loop will eventually move over to a validator and reviewer role. A certain portion of texts and content will likely still require creative services and specialized translation. My prediction is that, just like in other areas — like validation of training corpora for conversational AI and similar NLP tasks — the AI will do more of the heavy lifting, and translators will become recipients of content more than creators of content.