Let me start with a disclaimer: I’ve known Spence Green, the creator of Lilt, for a little while now. In fact, in the last few weeks before he and his team launched their product, I helped them make the tool fit more naturally into the process of the translator. So I’m not unbiased. But, firstly, who is ever truly unbiased? And, secondly, I would be just as giddy about the prospects this tool holds if I were seeing it for the first time today. How do I know? I watched how folks responded to it at the recent ATA conference when Lilt was first unveiled to the public, and they were giddy.

Let’s start at the beginning, though. For many in the translation world, machine translation (MT) and post-editing are inextricably connected. No matter how good an MT output might be, it cannot be trusted for publication-ready quality without a human post-editor evaluating its accuracy and correcting the translation. There are some exceptions, such as the Microsoft knowledge base, but even that is post-edited, albeit with the P3 (post-published post-editing) process, a form of end-user post-editing that is strongly advocated by Microsoft’s Chris Wendt.

Although it might have gone almost unnoticed in the MT camp, professional translators’ real use of MT is increasingly integrated into existing processes. True, there are still the “traditional” post-editors who work primarily on raw MT, but as any translation vendor who has tried to hire one can tell you, they’re hard to find. Why? Well, it’s a process that the typical translator wasn’t trained for, and it generally doesn’t match the expectation that translators bring to their job. Recognizing both this situation and the existence of valuable data even in publicly available general MT engines, translation environment tool vendors looked at ways to bring that data into the workflow (aside from just displaying full-segment suggestions from machine translation systems that often aren’t particularly helpful).

Here are some examples:

A number of tools, including Wordfast Classic and Anywhere, Trados Studio, Déjà Vu and CafeTran, use auto-suggest features that propose subsegments of machine-translated suggestions (which invariably are more helpful than the whole segment). In some cases, such as with Wordfast and Déjà Vu, these even come from a number of different MT engines.

Déjà Vu uses MT fragments to “repair” fuzzy translation memory (TM) matches.

Star Transit uses a process called “TM-validated MT” in which the communication goes the other way: Content in the translation memory is used to evaluate MT suggestions. A similar process is being developed for OmegaT right now as well.

Lift, Kevin Flanagan’s PhD project at Swansea University, used MT to identify subsegment matches in TMs so that even a TM with very little content can produce valid subsegment suggestions (Flanagan now works for SDL, and his technology will surely see the light of day in various SDL products).

In fact, there are too many other creative and productive uses of MT beyond post-editing to list them all here.

Translators and their community have warmly welcomed these developments (though larger language service providers have taken less note because — at least so far — there really is no process in place that allows for measurements and monetization). But they all have one limitation in common: the underlying MT is static. This means two things in our context: the phrase table within the MT is not automatically and immediately updated with the translator’s choices (note that SDL is presently working on a process to account for that), and the automatically generated MT subsegment suggestions come from the initial MT proposal, which does not adjust itself to whatever the translator might already have entered.

Enter Lilt. Lilt uses Phrasal, an open-source statistical machine translation (SMT) system developed by the Stanford Natural Language Processing Group (you can download the source code and find more information on the group’s website). Here’s what distinguishes the way Lilt employs Phrasal from other SMT solutions:

Every finalized translation unit is directly and immediately entered into the phrase table and considered in further MT.

There is no difference between MT and TM — even imported translation memory exchange (TMX) files are entered “only” into the MT engine’s phrase table, where they are treated preferentially.

With every word the translator enters while working on the individual translation unit, a new query is sent to the MT engine to adjust its suggestions to whatever has already been entered.
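To make the interplay of these three points concrete, here is a deliberately simplified sketch (my own illustration, not Lilt’s or Phrasal’s actual code; the class and method names are invented, and a real SMT decoder would re-decode under the prefix constraint rather than do a dictionary lookup):

```python
# Toy sketch of the interactive loop described above: a phrase "table"
# that is updated the moment a translation unit is confirmed, and a
# suggestion function that is re-queried with the translator's prefix.
# Illustration only -- AdaptiveMT and its methods are invented names
# with no relation to Lilt's or Phrasal's internals.

class AdaptiveMT:
    def __init__(self):
        # source segment -> confirmed target segment; serves as both TM
        # and preferred phrase-table entry, mirroring the "no TM/MT
        # split" point above
        self.confirmed = {}

    def confirm(self, source, target):
        """Finalizing a unit immediately updates the model."""
        self.confirmed[source] = target

    def suggest(self, source, prefix=""):
        """Return a completion consistent with what was already typed."""
        base = self.confirmed.get(source, self._decode(source))
        if base.startswith(prefix):
            return base[len(prefix):].lstrip()
        # The prefix diverges from the stored hypothesis: a real system
        # would re-decode under the prefix constraint; the toy gives up.
        return ""

    def _decode(self, source):
        # Stand-in for a statistical decoder such as Phrasal.
        return source.lower()

mt = AdaptiveMT()
mt.confirm("Hello world", "Hallo Welt")
print(mt.suggest("Hello world"))            # full suggestion: "Hallo Welt"
print(mt.suggest("Hello world", "Hallo "))  # completion after typing: "Welt"
```

The key difference from a static setup is that `confirm` feeds every finalized unit straight back into the model, and `suggest` is called again after every word the translator types.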

All this is presented in a browser-based translation environment interface that is shockingly simple — the user documentation consists of fewer than three pages and essentially covers everything. Supported file formats include Microsoft Office files, XLIFF and SDLXLIFF, TXML (Wordfast Pro 3’s translation format), text, XML (without any way to modify the extraction process), HTML and InDesign IDML. Files are organized in language-combination-specific projects (presently EN<>ES, EN>FR and EN>DE are supported, with more in the immediate pipeline) to which existing TMX resources can be assigned. The import process (started either by dragging a file into a drag-and-drop area or through a file selector) takes a bit longer than in most other translation environment tools, since all initial MT suggestions are loaded up front, but once a file is imported, the segment-to-segment processing is rapid.

For every segment that you open, you will see the initial MT suggestion and possible alternate terms for the currently suggested word. The currently highlighted word can be entered with a keyboard shortcut (Enter or Tab), or the whole suggested segment can be accepted with Shift + Enter.

In this example, the translator might choose to enter data manually. This causes Lilt to suggest new translations based on what the translator has entered so far (see Figure 1).

The corpus on which the MT engine was trained (consisting of public sources such as the United Nations corpus and OPUS) is simultaneously available for concordance search, which opens in a panel on the left when you double-click the term to be searched.

The concordance search lists the complete corpus segments that are preceded by automatically extracted terminology, as with “Navigation” in Figure 2 (truncated segments are displayed in full by clicking on them).

When searching for phrases with more than one word, it is possible to enter the remaining words manually into the search bar and receive auto-complete suggestions for existing entries as you type (Figure 3).
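As a toy illustration of that prefix-style lookup (my own sketch, not Lilt’s implementation; the mini-corpus is invented), auto-complete over a sorted index can be as simple as:

```python
# Toy sketch of prefix auto-complete over a concordance corpus.
# A real index would be far larger and tokenized, but the lookup
# idea -- find the first sorted entry at or after the prefix, then
# scan while entries still match -- is the same.
import bisect

corpus = sorted([
    "navigation bar",
    "navigation menu settings",
    "network settings",
])

def autocomplete(prefix):
    """Return corpus entries starting with the typed prefix."""
    start = bisect.bisect_left(corpus, prefix)
    matches = []
    for entry in corpus[start:]:
        if not entry.startswith(prefix):
            break
        matches.append(entry)
    return matches

print(autocomplete("navigation"))  # both "navigation ..." entries
```

Keeping the index sorted means each keystroke needs only a binary search plus a short scan, which is what makes per-keystroke suggestions feel instantaneous.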

One shortcoming of the concordance search is that there is no information about the source of the data. Possibly that’s because there was little such information within the actual underlying data, but it would be good to remedy that at some point. For instance, the first concordance record in Figure 2 comes from a previous translation in the same project, and while it’s good that it is given preference, it would be helpful to have that information displayed more clearly.

The data you enter into Lilt, by the way, is kept confidential. Nothing is shared, and there is not even an option to share at this point. You’ll notice how keenly Green and his team are dialed into this concern when you read the documentation leaflet, which addresses the privacy concern before anything else.

As a company, Lilt has received a $650,000 first-round investment from XSeed Capital, and it will be looking for more financing soon. While there is a paid application programming interface (API) that will provide some revenue stream (I can’t wait to see Lilt’s technology in other tools), there is no fee for using the tool — at least for the next few months. Fortunately, there is relatively little risk in using the tool for the time being, as it’s possible to export every project as a TMX file once it’s translated — and hopefully in the future also as an XLIFF file (the internal translation format Lilt uses).

Overall, Lilt is a translation environment tool of a different kind. While other tools pride themselves on their wealth of features, Lilt is Spartan in its interface, its features and its name. Granted, there will eventually have to be some additional functionality — I’m thinking, for instance, of quality assurance, bare-bones project management facilities, and additional languages and file formats. But it’s surprising to realize how effectively translation work can be done on the basis of the powerful and interactive Big Data backbone that Lilt runs on.

One of the reasons Lilt is so different is that it’s built by an industry outsider who does not assume many features as givens — just the fact that there is no traditional TM or termbase makes this more than apparent — and yet is very interested in learning how professional translators work. Both in his work at Stanford and now at Lilt, Green has been tracking and communicating with many professionals to see how they operate. I first talked to Green more than a year ago, when he was still finishing his PhD, and at the time he said that “there’s a big disconnect between what’s been done with translation technology and what can be done.” It’s a statement we can all agree with, but it can be hard to see what exactly needs to be done when we’re in the middle of it and blinded by our preconceived notions. An outsider is exactly what we need, especially one who is willing to listen to insiders.

Now, I don’t think Lilt’s newly unveiled technology is the last word on how MT can help increase translator productivity, but it certainly is a big step, one that I suspect will be more productive and produce higher quality results than traditional post-editing for the vast majority of projects that professional translators work on. And that makes me giddy.