Post Editing

The first issue I worked on at MultiLingual, nearly a decade ago now, covered machine translation (MT). At that point, I knew very little about the subject, but as I read over the articles, willing myself to make heads or tails of the content, it occurred to me that machines translated texts not unlike introductory-level second-language students: borrowing frameworks from a first language to transfer ideas word by word, in overly literal ways.

Context seemed out of reach for MT at that point: accounting for lesser-known idioms, remembering the meaning of the sentences or nouns that came before. Over the years, I read countless other articles about MT: case studies reaching for better outcomes, articles lauding big data, articles lauding narrowed subject matter so that context was built in.

At parties — particularly parties attended by IT nerds — I would whip out my working knowledge of MT to explain how Google Translate worked. Surprisingly, this impressed people at times. Or perhaps IT nerds are easily impressed by a woman using phrases such as “source corpora” and “statistical algorithms” between house music sets.

Now that neural models are analyzing context, albeit imperfectly, I’m having to adjust my party spiel. Neural MT is, frankly, less straightforward to explain. Some listeners may even have stumbled across articles claiming Google Translate is slowly becoming sentient because it speaks its own language (fake news alert). People’s attention wanders once they realize the secrets behind Google Translate are not actually blockbuster movie material.

Fortunately, whatever your level of knowledge of MT, our writers provide challenging and informative updates on current developments in the field. So read up, and go impress someone at a party.