Over the last ten years, I have had the occasion to discuss terminology management in simultaneous interpretation with over 200 colleagues. In seminars, we often did hands-on testing of interpreter-specific terminology management programs like Interplex or LookUp, as well as general-purpose programs like Microsoft Excel and Microsoft Access. All in all, I got a broad picture of their tendencies and preferences. Every so often, the use of terminology databases and translation memory (TM) systems in the booth has come up, which is why I finally decided to take a closer look at this question.
An interpreter’s terminology, as opposed to that of translators and terminologists, is meant for the moment only; terms that are noted come from a certain situation and have been used by a certain group of people. What counts is that a term helps communication run smoothly and that everybody knows what the speaker is referring to. An interpreter’s terminology may include improvised terms that would certainly not be understood outside the particular situation or group of people; there may even be “false” terms that would never make their way into a dictionary, such as when people talk about frogs even though the actual animals being discussed are toads. It may also include more general target-language terms used to translate a rather specific source-language term. While Germans could easily talk about a Handgabelhubwagen (hand forklift truck) all day, Spanish speakers might tend to keep it a bit more general and say carretilla, confident that everybody knows what is meant anyway. Translators and terminologists are far less free to generalize (and interpreters, in a way, are meant to generalize if the situation calls for it), as the recipient will not necessarily share the situational knowledge that participants in interpreted communication do. So all in all, interpreters’ work is more situation-oriented, which is why, ideally, their terminology includes a good deal of situation-related information referring to a certain conference, customer, speaker or particular circumstances.
Generally speaking, interpreters’ terminology databases tend to be far less sophisticated than what “proper” terminology systems have to offer. Interpreters tend to prefer simple terminology tables, which come in many varieties: text documents, spreadsheets or simple databases. A table’s structure of columns and rows is easy to grasp at one glance so you don’t get lost in the heat of simultaneous interpretation, and it does not require much technical effort to handle. After all, interpreters should have their minds on the session and not on their glossaries. Interpreters are also less keen on things like granularity or elementarity of their data structure. Usually, there is one column per language and perhaps some further columns indicating the subject area, customer, conference and so on. Interpreters might as well put definitions or synonyms right next to the term, as long as it helps them find the right information at the right moment. They usually only write down what is really relevant to them anyway, so if you work with English, German and Spanish and you enter a new term in English and German, but know the Spanish term perfectly or would be able to guess it from the English one, you probably wouldn’t bother entering the Spanish one. On the other hand, very simple and general terms are often noted in glossaries if the interpreter feels this particular term might not occur to him or her under the reduced cognitive resources of simultaneous interpreting.
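To make the one-concept-one-line idea concrete, here is a minimal sketch of such an interpreter-style glossary written out as a spreadsheet-compatible CSV file. The column names, terms and customer names are purely illustrative, not taken from any real glossary:

```python
import csv

# A minimal interpreter-style glossary: one column per language,
# plus metadata columns for subject area and customer.
# All column names, terms and customers here are illustrative.
rows = [
    {"English": "hand forklift truck", "German": "Handgabelhubwagen",
     "Spanish": "carretilla", "Subject": "logistics", "Customer": "ACME"},
    {"English": "toad", "German": "Kröte",
     "Spanish": "sapo", "Subject": "biology", "Customer": ""},
]

with open("glossary.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["English", "German", "Spanish", "Subject", "Customer"])
    writer.writeheader()
    writer.writerows(rows)
```

A file like this opens directly in Excel, can be sorted or filtered by the Subject or Customer column, and a cell can simply stay empty when the interpreter doesn’t need that language version.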
Therefore, when it comes to practically using a term database in the booth, the crucial functions are quick searching, quick entering and intuitively finding one’s way through the display of search results. If ever an interpreter has some spare cognitive capacity to look up a certain term while doing simultaneous interpretation, this will only work with an extremely intuitive interface, which ideally does not involve mouse-clicking, but only blind-typing the first characters of the word and then getting the results displayed in a way that requires no further clicking, scrolling or pressing keys. During preparation as well as in the booth, interpreters furthermore need to filter for terms of a particular meeting, customer or subject matter, and many interpreters prefer to print (or display) the most important terms for memorizing purposes.
TMs in the booth
It is quite clear that TMs are made for translators — they are not called interpretation memories, for a start. Although interpreters and translators obviously have some things in common, the purpose of their work, their workflows and their working environments are quite different. However, even though a common TM system cannot be expected to support a conference interpreter’s workflow, many people still ask about it, be it only for the simple reason that they work both as interpreters and translators and want to combine the data from both activities. So I decided to take a closer look at three TM systems, Across 5.0, memoQ 5.0 and SDL Trados Studio 2009, and see how well they suit an interpreter’s needs.
Usually, interpreters find “real” terminology management systems too cumbersome. With their different data levels, the terminology components of TM systems are far more complicated than what interpreters usually use. MultiTerm and memoQ have an entry level, then a language level (one per language) and, below that, a term level, where the different terms belong. In the case of synonyms, there is one element for each term on this level. Across also has an entry level, language level and term level, although the integration of the language level is slightly different from that of the other two. But then Across has a different architecture than most other TM systems altogether, saving all terminological entries into one database. This one-database approach makes it easier to search or manage your data as a whole.
This structure of different levels used in terminology management would mean, for example, that the English index could contain two terms, dashboard and instrument panel; the German index could contain Instrumententafel and Armaturenbrett; and the Spanish index could contain three terms, tablero de mandos, salpicadero and plancha de a bordo. Since each term is a separate entity, it can then be assigned customers, conferences and so on individually, and you can clearly define which term is preferred by which customer; in which source you found which term; and which definition refers to which language. This enables the user to accommodate information in a much more differentiated manner than in a simple table structure with a one-concept-one-line approach.
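The three-level structure described above can be sketched in a few lines of code. This is a generic illustration of the entry–language–term hierarchy using the dashboard example, not any vendor’s actual data model; the field names (customer, source, definition) are my own choices:

```python
from dataclasses import dataclass, field

# Sketch of the three levels: one concept entry at the top,
# one block per language below it, one element per term (synonym)
# at the bottom. Field names are illustrative.

@dataclass
class Term:
    text: str
    customer: str = ""   # which customer prefers this term
    source: str = ""     # where this particular term was found

@dataclass
class LanguageBlock:
    language: str
    definition: str = ""           # definition belongs to the language level
    terms: list = field(default_factory=list)

@dataclass
class ConceptEntry:
    subject: str
    languages: dict = field(default_factory=dict)

entry = ConceptEntry(subject="automotive")
entry.languages["en"] = LanguageBlock("en", terms=[
    Term("dashboard"), Term("instrument panel", customer="OEM X")])
entry.languages["de"] = LanguageBlock("de", terms=[
    Term("Instrumententafel"), Term("Armaturenbrett")])
entry.languages["es"] = LanguageBlock("es", terms=[
    Term("tablero de mandos"), Term("salpicadero"),
    Term("plancha de a bordo")])
```

Because every synonym is its own object, attributes such as the preferred customer attach to exactly one term, which a flat one-line-per-concept table cannot express.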
One of many interesting features common to all three programs is the possibility to embed pictures (Figure 1), something very useful for interpreters, who often need to grasp the meaning of a term in little more than one second — I just had a discussion with my Scottish colleague the other day about the difference between a German Printenmann and a Weckmann, and I only really managed to get the message across using a search engine’s image function.
Those interpreters who are tempted to import their existing interpreter-style glossaries into a proper terminology management system can do so in all three programs. They all offer assistance in assigning the column headings of the old glossary to the data fields of the new database, and it worked fine when I tried it. This, however, may vary according to the structure of the source document. If, in addition to the great variety of data fields offered by the programs, one wishes to create additional, customized data fields, this is perfectly possible in Across and Trados. memoQ’s data structure cannot be altered in the offline version. You will have to switch to the web-based terminology management solution qTerm, which then also enables you to access your terminology from any connected computer. Across and Trados also offer web-based alternatives.
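The core of such an import assistant is a mapping from the old glossary’s column headings to the new database’s field names. A minimal sketch of that idea, with hypothetical headings and field names of my own invention:

```python
import csv
import io

# Hypothetical mapping from the old glossary's column headings
# to the target database's field names.
column_map = {"EN": "term_en", "DE": "term_de", "Kunde": "customer"}

# Stand-in for an old glossary file (normally opened from disk).
old_glossary = io.StringIO(
    "EN,DE,Kunde\n"
    "dashboard,Armaturenbrett,OEM X\n"
    "toad,Kröte,\n"
)

records = []
for row in csv.DictReader(old_glossary):
    # Rename each column; unknown headings are kept as-is.
    records.append({column_map.get(col, col): value
                    for col, value in row.items()})
```

Real programs add validation and let the user adjust the mapping interactively, but the principle is the same: every spreadsheet row becomes one database record.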
The relatively complex structures of term databases also make it more complicated to enter new terms quickly, something essential for an interpreter preparing for a 200-term presentation about dental surgery half a day before the conference begins. Usually, you have to click your way through the different levels before entering the term and finishing an entry. However, all three programs offer quick entry functions, which are faster. Interestingly, in MultiTerm this quick entry function can also be accessed from Microsoft Word.
In general, all three programs let you filter by customers, subjects or conferences. However, after having filtered your data according to your needs, you are not just one mouse click away from neatly printing this on a sheet of paper. But then, printing lists is just not what these programs were made for in the first place, so for the moment one must accept workarounds in order to print glossaries. Exporting CSV files is possible in all three programs. Furthermore, MultiTerm offers export templates for mono- and bilingual glossaries or dictionaries to Word.
And now, what about the crucial quick search function for the booth? Interpreters, as opposed to translators, need to fill their knowledge gaps within seconds. Software can help them do so, but it can also become an additional cognitive burden during conference interpreting, in which case its usefulness is quite limited. MultiTerm offers full-text search in all data fields (use a wildcard to search for word beginnings), exact search in term fields only, and a fuzzy search where typos, missing accents or special characters like a German umlaut or ß don’t jeopardize the whole search operation. Across has the same search functions, but the full-text search is hidden somewhere in the filter area. It is also possible to search only in the definition texts. memoQ has no fuzzy search and does not ignore missing accents. This may or may not be relevant, depending on the user’s languages. At least there is a search function for the beginning of words, so that you don’t necessarily have to enter the whole exact term in order to find what you are looking for.
In general, terms can be looked up pretty well in all three programs. However, none of them offers a mouse-free, blind and intuitive search function like interpreter-specific tools do: both in LookUp and in Interplex, once you are in search mode, you can just type a string of characters and hit Enter to get the results displayed, and after one query you can simply type the next search string without having to delete the old one and without using a key combination like Ctrl+F again. LookUp even shows a hit list matching the first characters you have typed while you are still writing, and the hit list gets shorter with every additional character you enter.
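The incremental, accent-tolerant lookup described in the last two paragraphs boils down to very little logic. The sketch below is my own illustration of the principle (prefix matching over normalized text), not the actual implementation of LookUp or any other tool:

```python
import unicodedata

def normalize(s):
    # Decompose accented characters, drop the accent marks, then
    # casefold (which also maps the German ß to ss), so that a typed
    # "kro" still finds "Kröte".
    decomposed = unicodedata.normalize("NFKD", s)
    stripped = "".join(c for c in decomposed
                       if not unicodedata.combining(c))
    return stripped.casefold()

# Toy glossary standing in for the interpreter's term database.
glossary = ["Handgabelhubwagen", "carretilla", "Kröte",
            "Käse", "salpicadero"]

def hits(prefix):
    """Return all terms whose normalized form starts with the prefix."""
    p = normalize(prefix)
    return [t for t in glossary if normalize(t).startswith(p)]
```

Calling `hits` again after each keystroke yields exactly the shrinking hit list behavior: `hits("k")` matches both German terms, `hits("kr")` only one, and no mouse or Ctrl+F is ever involved.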
As to the display of search results, in Across and Trados they are shown in a monolingual display list. In MultiTerm, you can customize the display of the hit list and so create a bilingual hit list, which gives the user the chance to find the right word without further clicking. In memoQ, this bilingual hit list is standard (Figure 2).
One special feature that deserves to be highlighted in both Across and MultiTerm is their internet dictionary search function. Across’ crossSearch (Figure 3) enables you to search several internet sources in parallel (IATE, linguee.com, leo.org, pons.de, Wiktionary, SYSTRAN and the like, as well as others that may be added by the user). The fact that the program automatically searches for the language pair that is currently active in the translation process makes the results highly relevant; it also means, however, that you can only use this function from within an open translation process.
In MultiTerm, you can use the MultiTerm Widget to access your own database as well as Google, Bing, Linguee and Wikipedia. Further online sources can be added individually. Language pairs and shortcuts can be preconfigured, and the Widget can be accessed from all kinds of programs such as Word, Excel, Outlook or PowerPoint, or even from the taskbar.
It must not be forgotten that terminology management is only an accessory to TM systems, whose core business is translation. Chances are that those core functions could also serve interpreters’ purposes.
It sometimes happens that manuscripts for speeches are translated beforehand and then interpreted in a kind of simultaneous sight translation. This means that in addition to reading the translated speech, the interpreter must listen to what the speaker is actually saying, in case he or she diverges from the manuscript, and adapt the translation accordingly. In this case, it might turn out to be useful to have a parallel display of both the original and the translation in order to easily navigate the target text without losing track of the corresponding segment of the original (or vice versa). I have never tried this myself in the booth, but all three programs can obviously open, display and search text pairs (source text and translation). In Across, however, imported TMX files cannot be viewed as two parallel texts (texts you have aligned or translated yourself can), though they can easily be searched if you are looking for a particular expression. Here again, as in terminology management, Across, as opposed to other TM systems, follows the one-database approach, meaning all TMs are saved in one big database. This makes it quite easy to quickly search for an expression in all your TMs. You can also filter all your TM data for a particular attribute in order to get all the segments pertaining to a particular text, but they won’t necessarily be displayed in a cohesive order.
One can also import existing TM files and use them as reference material. The EU, for example, offers the whole Acquis communautaire as bilingual alignments (TMX files), which makes a perfect reference text corpus that can be imported into a TM system and searched comfortably with the two language versions displayed in parallel. The only inconvenience is that EU interpreters in particular hardly ever work with only two languages, and handling four, five or six text pairs might become a bit of a hassle. Copying and pasting the most relevant legal acts into a spreadsheet, although they never end up being perfectly aligned, can then turn out to be the more feasible alternative. But in other conference settings, well-aligned TMs can be a wonderful reference and should be considered in all those cases where conference texts are translated beforehand by translators using TM.
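TMX itself is just XML, so pulling the parallel segments out of such a file takes only a few lines. A minimal sketch, using a tiny inline document in place of a real Acquis file; the example sentence pair is invented:

```python
import xml.etree.ElementTree as ET

# ElementTree expands the xml: prefix to this namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

# Tiny stand-in for a real TMX file; each <tu> is one aligned
# translation unit with one <tuv> per language.
tmx = """<tmx version="1.4"><body>
<tu><tuv xml:lang="en"><seg>legal act</seg></tuv>
    <tuv xml:lang="de"><seg>Rechtsakt</seg></tuv></tu>
</body></tmx>"""

pairs = []
for tu in ET.fromstring(tmx).iter("tu"):
    segs = {tuv.get(XML_LANG): tuv.findtext("seg")
            for tuv in tu.iter("tuv")}
    pairs.append((segs.get("en"), segs.get("de")))
```

Once the pairs are in a list like this, they can be dumped into two spreadsheet columns for parallel reading or searched for a particular expression, which is essentially the workaround described above.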
Alignment and term extraction
One of the most time-consuming tasks when preparing for an interpretation job is reading reference material, presentations and manuscripts in order to take in both content and terminology. Aligning bilingual documents can make this task easier by keeping track of which English segment corresponds to which German one; terminology can even be extracted automatically. Even without a perfect bilingual terminology extraction, having a list of term candidates might help the interpreter concentrate on the meaning of a text while reading it and then deal with the terminology separately. When you receive some last-minute document in the booth and have no time to even scan it, even an automatically extracted term list might help.
All three programs include alignment modules for the usual text formats, though PDF is always a problem. The alignment results are more or less satisfactory; in my testing they always required considerable manual re-editing. This is a crucial point, as alignment is only interesting as long as it is more useful than just the two texts copied into two spreadsheet columns, where I can read in parallel and do full text searching as well.
When it comes to terminology extraction, Across has a monolingual extraction function, which can only be used from within an active translation project and only via the predefined “terminology and translation” workflow. The server version (which, as opposed to the Personal Edition, is not free of charge) also includes bilingual extraction, which I have not tested.
memoQ also does monolingual extraction and this function can be used by simply selecting it from the menu. In Trados, extraction is offered as an additional tool (extra charge), which then includes monolingual and bilingual extraction.
If you are seeing term extraction for the first time, the result is impressive. You enter a text and the machine gives you a list of terms. However, these lists or glossaries require some re-editing, correcting erroneous assignments and deleting irrelevant terms. The quality of the extraction result depends a lot on the characteristics of the text, so it is hard to say in general whether it is more efficient to quickly read over a document and write down the relevant terms manually or to use automatic extraction. Apart from the technicalities, it surely also depends on the preferences and working methods of each interpreter. But it is definitely worth a closer look.
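To give a feeling for why extraction results need re-editing, here is a deliberately naive monolingual extractor: it simply counts word frequencies after dropping short words and a small stopword list. Real extraction tools use far more sophisticated linguistics; the stopword list, thresholds and sample sentence are all my own assumptions:

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real tools ship much larger ones.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}

def extract_terms(text, top=5):
    """Return the most frequent non-stopword tokens as term candidates."""
    words = re.findall(r"[A-Za-zÄÖÜäöüß-]+", text.lower())
    candidates = [w for w in words
                  if w not in STOPWORDS and len(w) > 3]
    return [term for term, _ in Counter(candidates).most_common(top)]

text = ("The implant is fixed to the jawbone. The implant carries an "
        "abutment, and the abutment carries the crown.")
term_candidates = extract_terms(text)
```

Frequent domain words like “implant” and “abutment” surface immediately, but so does the irrelevant verb “carries”: exactly the kind of noise that has to be weeded out by hand afterwards.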
TM and terminology management programs do indeed offer some interesting aids for interpreters. More often than not, what makes these programs less suitable for interpreters are just minor aspects of handling the data. Hence, it might be worthwhile for TM vendors to consider integrating some more interpreter-friendly features, like a really booth-friendly search function and comfortable glossary printing, to address a whole new group of potential users. On the other hand, a closer look at these programs and all they have to offer also makes clear that they already contain some very handy features that interpreters might want to use, and the fact that interpreters use them so little may to a certain degree be down to a lack of familiarity. If one wanted to further explore this path to the interpreter’s booth, it might well turn out to be beneficial to translators and interpreters alike: translators probably wouldn’t mind using booth-friendly, more intuitive functions either.
The whole subject of speech recognition and checking the spoken word against pretranslated text or terminology might be a future asset, and possibly not only on the interpretation side. With the spoken word spreading into all areas of our lives, video tutorials here and podcasts there, translating or subtitling spoken messages will become more and more important to translators as well. Thus, although desks and booths sometimes seem like different planets, we are not such distant relatives after all.