For professionals in the subtitling industry, working conditions have changed drastically since the turn of the century. Subtitling companies and freelance subtitlers alike have been forced to find ways of producing more high-quality work more cost-effectively, and significant resources are being devoted to developing new tools to achieve this.
The digital age has removed many boundaries in our society and we now expect to be able to communicate with anyone, across cultures, countries and time zones. Globalization and the explosion of content online have further increased the need for translation in general, and the status of translation has shifted from being a luxury in the 1990s, to a commodity in the 2000s, to a utility in our decade. So says the 2013 TAUS report “Are Translation Industry Leaders Up to the Challenge?” A testament to this is the fact that according to a Google official blog post from April 26, 2012, “the translations produced by Google Translate in a day are roughly equal to the entire annual output of human translators worldwide.”
Subtitling over time
Subtitling is the most popular audiovisual translation mode in the world, and one whose growth has been consistent, thanks to its advantages: it is the quickest, most economical method and is suitable for any type of programming. Technological developments over the years have inevitably affected subtitle production and the role of the subtitler has evolved accordingly. The part of the work that has remained largely the same is the translation of source language utterances into written subtitle text in the target language. However, this too is about to undergo a major change, as predicted by a 2008 European Commission study on dubbing and subtitling practices in Europe.
In the early days of subtitle preparation, the process of creating subtitles was entirely manual. The first step was for technicians to “spot” the subtitles by writing the in times and out times on the dialogue list, or script. Subtitle translators then wrote their translations, usually by hand, according to how much time was available. These translations were then transcribed by a technician or stenographer and inserted. The advent of desktop computers revolutionized the process by enabling one person to carry out the whole operation, from spotting to translation to adaptation of the translation for subtitling, thanks to specialist subtitling software that was developed from the 1980s onward. In the early days, the software was DOS-based, and a typical subtitling workstation would consist of a VHS player with jog shuttle, a TV monitor, a caption generator and a PC with subtitling software. Timing was done using a hand-operated clicker, or the keys of the keypad, to indicate the in and out times of the subtitles.
Although it was commonplace for one person to carry out the whole process, some companies chose to have separate people for timing and translation. Increasingly, however, the distinction between translation and adaptation for subtitling disappeared, with a single subtitler taking care of both processes.
With the rise of Windows-type interfaces and the introduction of internal caption generators, subtitling software entered a new era. There was no longer a need for an external VHS player thanks to digitization and internal movie-player software, and software developers started to think of ways to make the subtitler’s life easier. Clients began to move to tapeless workflows, which negated the need to deliver burnt-in subtitle tapes, which in turn meant that to serve a client directly, expensive Digital Betacam machines were no longer needed. This, along with the internet and broadband revolution, led to the leveling of the playing field for the subtitling industry. It was no longer necessary to have a lot of expensive hardware and software to deliver subtitles directly to an end client, nor was it necessary to be located in the same town as the client. The same was true of the workforce.
Today’s subtitling preparation software can include voice recognition software; automatic import and splitting of dialogue lists or scripts into subtitles; automatic timing of subtitles in blocks; and automatic retiming of subtitle files to match a different video standard or cut. However, there is widespread recognition that subtitler input is still crucial. In the words of Andrew Lambourne, business development director at Screen Subtitling Systems: “While automation tools are of benefit in reducing costs, a good developer will ensure that there is room for creativity, and recognizes that human skills are still essential for the most difficult material.”
Despite the continuous technological advances in subtitling software development over the past decade, the translation aspect of the subtitler’s work is still very much the same as it has been since the birth of subtitling approximately 70 years ago. Subtitlers still translate source audio into their target languages manually without any automation to help them, apart from the use of spelling and grammar checkers. Viewed in the broader context, this is particularly odd, as computer-aided translation (CAT) tools and machine translation (MT) have been in widespread use in the traditional text translation industry for almost two decades now.
The advent of the DVD in the mid-1990s was a milestone in subtitling history, as it heralded the spread of subtitling to traditional dubbing and voiceover countries and a significant increase in content volumes requiring interlingual subtitling. The most important development in this period was the emergence of a new workflow for multilingual subtitle production: the template workflow. This involves using a master subtitle file in the source audio language (the template) for translation into all the target languages required, with or without altering the subtitle breaks and timings of the source file. This decoupling of the translation work from the technicalities of the subtitler’s job, aided by the removal of the need to ship physical media, arguably led to a return to older workflows in which timing and translation were separate tasks. It also allowed the market for subtitling professionals to expand globally to include translators worldwide, who now needed very little training in the art of subtitling in order to offer their services in this translation domain as well.
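The template workflow can be illustrated with a short sketch. In this hypothetical Python example (all cue data and names are invented for illustration), the master template carries the spotting work once, and each target language reuses those cues unchanged, so translators supply only text:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    """A single subtitle event: fixed timing plus language-specific text."""
    start: str  # in time, e.g. "00:00:01:05" (HH:MM:SS:FF)
    end: str    # out time
    text: str

# Master template in the source audio language: spotting is done once.
template = [
    Cue("00:00:01:05", "00:00:03:10", "Good morning."),
    Cue("00:00:04:00", "00:00:06:12", "How was your trip?"),
]

def apply_translations(template, translations):
    """Build a target-language subtitle file that inherits the
    template's cue timings, replacing only the text."""
    return [Cue(c.start, c.end, t) for c, t in zip(template, translations)]

french = apply_translations(template, ["Bonjour.", "Comment était ton voyage ?"])
german = apply_translations(template, ["Guten Morgen.", "Wie war deine Reise?"])

# Every target-language file shares the master's timings.
assert all(f.start == c.start and f.end == c.end
           for f, c in zip(french, template))
```

The point of the design is that the expensive technical step, spotting, is performed once per title rather than once per language.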
Many subtitling companies working in international markets expanded rapidly, taking advantage of the opportunities globalization and the centralization of the market created. Subtitle volumes continued to increase as new digital formats made their appearance and large multinational subtitling companies centralized the language transfer of increasing amounts of audiovisual content for the entertainment market. This growth, coupled with the use of the template workflow by these companies, resulted in the creation of vast archives of parallel subtitle data in many languages, which now play a central role in yet another major development in the subtitling market, as we shall see below.
However, despite the availability of an established expanded network of subtitling professionals armed with sophisticated software, the content explosion in recent years has created a demand for subtitling services that cannot be satisfied using (professional) human effort alone. This has resulted in several strong trends in the industry. Fansubbing has achieved a status of its own: it is discussed in academic conferences and is the subject of PhD dissertations, while many databases of fan-made subtitle files are available online (for example at OpenSubs). Subtitling freeware abounds, while crowdsourcing and collaborative online subtitling solutions such as Amara and Viki are on the rise, and are even being adopted by major content owners and distributors.
Outside the entertainment world, significant resources are being devoted to developing automation tools for application to other types of audiovisual content. For example, the transLectures project aims to develop innovative ways to automatically transcribe and translate academic repositories of video lectures; the SAVAS project is focusing on automating subtitling for the deaf and hard of hearing for news and other programming in several European languages; and the EU-Bridge project is working on combining the above plus more, for applications in parliamentary sessions, mobile telecommunications, lectures, meetings and so on.
There is a growing need for subtitle professionals and subtitling service providers to increase the amount of content they can translate, and improve on the turnaround times for their services. For this to happen, more automation is needed, and cloud-based language technologies can help provide this.
The SUMAT case study
MT is the grandmother of all language technologies and, at the same time, the most disputed and debated. Its use has caused and continues to cause major controversy. Despite all the arguments on both sides, one thing is certain: users demand the availability of translation in all forms and as close to real time as possible, and even raw MT abounding in errors can serve the purpose of gist translation. The fact that 200 million users turn to Google Translate each month surely indicates that MT has already achieved mainstream status.
The few attempts to bring MT into the subtitling industry have mainly been the result of funded research projects, such as MUSA, eTITLE and so on. While these attempts did not result in the adoption of MT by the industry, they tested the limits of what was then state-of-the-art technology and paved the way for more work on the subject. One of the main reasons why early attempts at building MT systems for subtitles failed was the lack of parallel subtitle data of professional quality, without which it is not possible to adequately train statistical machine translation (SMT) systems. Most large archives of professional subtitle data are the property of subtitling companies and their clients. The SUMAT consortium addressed this problem by involving four major subtitling companies, which took on the role of data providers and also evaluators and user partners in this European Union (EU) funded project.
This three-year project, a consortium of nine companies and a Europe-wide collaboration between the industry and academia, aims to build a cloud-based service for the MT of subtitles in nine languages and seven bidirectional language pairs, thus addressing the languages spoken by approximately two-thirds of the EU population.
The service will offer users from individual freelancers to multinational companies the ability to upload subtitle files in a number of industry-standard formats, as well as in plain text format, and to download a machine-translated file in the same format, preserving all time codes and formatting information where possible. Users will be offered the option to upload or machine translate several files simultaneously, be notified once the process is over and have the MT files downloaded straight to a specified folder in their local environment. There will also be different access level options for client personnel with differing roles, such as translators or project managers, who, for example, might want to have files delivered straight to their translators’ inboxes and be notified by e-mail when this has happened. Each user (company or individual) will have his or her own history record, with the ability to perform searches about previously uploaded or translated files, languages and so on. The main benefit to SUMAT users of having the service in the cloud is 24/7 availability and a fast and easy option for deployment. This will also be offered to corporations that wish to embed the service in their in-house tools, workflows and cloud platforms via a simple API integration.
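To illustrate what “preserving all time codes and formatting information” involves, the sketch below (a hypothetical illustration, not SUMAT’s actual implementation) parses a minimal SRT file, passes only the text of each block through a stand-in translation function, and re-serializes the file with indices, time codes and italics markup untouched:

```python
SRT_SAMPLE = """\
1
00:00:01,000 --> 00:00:03,500
Good morning.

2
00:00:04,000 --> 00:00:06,200
<i>How was your trip?</i>
"""

def machine_translate(text):
    # Stand-in for the MT engine: a toy lookup table for the demo.
    table = {
        "Good morning.": "Bonjour.",
        "<i>How was your trip?</i>": "<i>Comment était ton voyage ?</i>",
    }
    return table.get(text, text)

def translate_srt(srt):
    """Replace only the text of each SRT block; keep index and timing lines."""
    out = []
    for block in srt.strip().split("\n\n"):
        lines = block.split("\n")
        index, timing, text = lines[0], lines[1], "\n".join(lines[2:])
        out.append("\n".join([index, timing, machine_translate(text)]))
    return "\n\n".join(out) + "\n"

translated = translate_srt(SRT_SAMPLE)
assert "00:00:01,000 --> 00:00:03,500" in translated  # timings preserved
assert "Bonjour." in translated                       # text replaced
```

A real service would of course support many industry formats beyond SRT, but the separation of timing data from translatable text is the same in each case.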
Subscriptions will be flexible and scalable in order to satisfy different user needs and will follow a software-as-a-service model, with pricing tied to the volume of use of the system. Depending on the status of each user’s account, users can have their own MT engines trained or tuned on their own parallel data, contributed upon registering, or after collecting a sufficient number of post-edited files by using the service for a while. Finally, all users benefit from the ability to access the latest and most updated version of the generic MT engines, a version that will be maintained and retrained on the basis of user feedback and further training data collected over time. This way the service will serve both large corporations that have sizable volumes of material to translate, and also smaller companies and individual subtitlers wanting to speed up their work. These smaller companies typically do not have the resources or data available to build their own MT systems in order to remain competitive in a rapidly changing marketplace.
A large majority of the work has now been completed and the project is in its third and final year. Over seven million parallel subtitles in seven language pairs have been harvested and prepared for system training, and a further total of over 15 million monolingual subtitles have also been collected for language model training. The relevant SMT engines have been trained, and systems gradually refined through various techniques adapted for the correction of recurrent errors. Various trials have been carried out pairing the professional subtitle data collected during the project with vast amounts (approximately 110 million aligned segments in total) of freely available data of either professional or amateur quality. These have already been compared with one another in terms of their quality output, so that the best systems are selected for the online service.
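One common way to harvest aligned training segments from parallel subtitle files of the kind described above is to pair source and target events whose time spans overlap. The sketch below is an illustration of that general idea (a greedy overlap heuristic with invented data), not the project’s actual alignment pipeline:

```python
def overlap(a, b):
    """Length (in seconds) of the intersection of two (start, end) spans."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def align(source, target, min_overlap=0.5):
    """Greedily pair source/target subtitle events that share enough
    screen time, yielding (source_text, target_text) training segments."""
    pairs = []
    for s_span, s_text in source:
        best = max(target, key=lambda t: overlap(s_span, t[0]), default=None)
        if best and overlap(s_span, best[0]) >= min_overlap:
            pairs.append((s_text, best[1]))
    return pairs

# Toy parallel subtitle files: ((start_sec, end_sec), text)
en = [((1.0, 3.5), "Good morning."), ((4.0, 6.2), "How was your trip?")]
fr = [((1.1, 3.4), "Bonjour."), ((4.1, 6.3), "Comment était ton voyage ?")]

segments = align(en, fr)
assert segments == [("Good morning.", "Bonjour."),
                    ("How was your trip?", "Comment était ton voyage ?")]
```

Because the template workflow keeps timings identical across languages, professional archives built with it align far more cleanly than amateur data, which is one reason the professional-quality corpora matter so much for SMT training.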
A demo of the service (Figure 1), using the MT systems selected at the end of the first evaluation round, has been live since November 2013 and can be accessed through the project website www.sumat-project.eu. The online service prototype is expected to be ready in January 2014 and will be stress and user tested until March 2014 for final release at the end of the project.
The evaluation experiments that have been completed so far have confirmed the consortium’s hypothesis regarding the value of the professional quality data. This is shown in the output of SUMAT systems, with various levels of accuracy achieved in different language pairs as expected. The analysis of the post-editors’ output revealed that, on average across all language pairs on an ascending quality scale of 1-5, over 50% of the MT output received a quality score of 4 or 5, meaning little post-editing effort was required for the text to reach publishable quality. Productivity gains were not measured in the evaluation phases carried out so far, but this is planned for subsequent phases, scheduled to last until February 2014.
The subtitling post-editor
The subtitler, once a translator only, has gone through a journey encompassing translator and spotter, translator of templates, hard-of-hearing subtitling specialist, respeaker and adaptor. The next obvious step could be post-editor.
Given that the traditional text translation industry has successfully embraced MT as a powerful aid to productivity, only a Luddite would ignore its potential for the subtitling industry. The differences between the two fields are not so great as to render MT useless when applied to subtitling. Of course, the output from MT engines is not usable for broadcast or DVD without human intervention, although it may be enough for people consuming web content who merely wish to know the gist of what is being said. It is important, as with all translation, to know what is “good enough” for the end user, even if, for most media, some post-editing of the MT output will be necessary.
Who will do this post-editing? It is not customary for the translation professionals who work in the subtitling industry to have experience of translation fields where CAT tools and MT are commonplace. Industry and academia seem to have come to the conclusion that existing subtitlers will need to be trained as post-editors. However, another possibility would be to take the trained post-editors from the text translation industry, where they have been working for many years, and train them as subtitlers. What is clear is that professionals in subtitle post-editing roles will also need subtitling-specific skills in order to handle the domain-specific features of the medium.
There is no question that there is a significant skills gap due to the comparatively late adoption of this technology in the subtitling industry. Some, including Anthony Pym in his 2011 paper “Democratizing Translation Technologies: The Role of Humanistic Research,” point to the similarities between post-editing and revision work, and these warrant further investigation, as they may prove to be a helpful resource when teaching post-editing skills. Revision in the subtitling world is typically undertaken by more experienced subtitlers, who perform a thorough check of a colleague’s work for translation accuracy as well as grammar and style, and some of these skills are transferable. What has become apparent from our experiments and others like them, though, is that while translation and revision require some similar skills, the problems that occur in a human-translated text are not comparable to those which arise in MT output. For MT to work in subtitling, the necessary skills need to be acquired or taught. The good news is that machines, since they use statistical processes, are consistent in their mistakes, whereas different humans will make different mistakes. One would therefore expect it to be fairly straightforward to learn to recognize and anticipate such mistakes, so that the cognitive effort involved in correcting them diminishes and the work becomes quicker. Indeed, some initial evidence from the SUMAT post-editors indicates that they did observe an increase in the speed of their work after a couple of files, as a result of having digested the types of mistakes to expect of a machine. This facilitated decisions on what and how to post-edit.
Another, more opaque issue is the extent to which subtitlers’ attitudes and experience influence their post-editing speed and skill. They may react with instinctive rejection, then suspicion, curiosity, engagement and finally the realization that it speeds up their work. It is worth looking at the profiles of each translator in order to identify trends that are linked to ways of working. It seems, for instance, that particularly fast translators are more likely to dislike post-editing, as they initially expect it to slow them down, something which we will attempt to address during the next evaluation round.
One SUMAT finding was that a complete understanding of the process and the limitations of the MT output beforehand usually led to a smoother process overall. A typical early comment from post-editors was: “the machine should know that,” or “that should be easy to fix.” With better education and a deeper understanding of the power of MT and its restrictions, we see not only an improvement in attitude and the experience of post-editing, but also the realization that MT is not a threat. MT is a tool to remove repetitive work from the subtitler’s day. Subtitlers now have a series of tools to offer possible translations and concordances in previously translated texts, and eliminate repetitive and time-consuming actions.
An open dialogue with subtitle translators who are being asked to assume the role of post-editors is necessary, not only to train them in the skill of post-editing by translating scientific knowledge into daily practice, but also because their responses, which differ by language and by professional profile, serve as valuable feedback for the continuous improvement of the MT systems and their user interfaces. Their input, in the form of linguistic analysis and refinement of the training data, as well as editing of machine translation errors and identification of patterns, is crucial in eliminating or minimizing mistakes in the MT output. As Pym says, the translators themselves are the ones best suited to “investigate the human aspects of translation technology and hence the ones that pinpoint more easily what it is that can make this technology truly revolutionary.”