There’s a meme going round about ‘amazing scribes’ at the recent ICANN meeting who transcribalated (don’t ask) spoken text onto screens. What attendees appeared to see were ‘scribes’ steno-translating at the speed of speech, so that everyone could read a speaker’s translated words in real time on a large screen. Margaret Marks has rightly blown the whistle on this amazing feat: the steno-typists were in fact taking down, over their earphones, the output of simultaneous interpreters.
Interestingly, the only languages used officially at ICANN were French and English, and the French translation facility was provided by the intergovernmental Agence de la Francophonie, which has a vested interest in a multilingual future for domain names and all that. But why weren’t more languages on offer, at a time when ICANN is drawing criticism over its ambitious and potentially expensive new Strategic Plan, and over the fact that its mandate is to report to the U.S. Department of Commerce? In a world coming to doubt the exclusiveness of Lex Americana in these matters, showcasing a bit more of this amazing scribery of real-time interpretation/translation would perhaps help soothe the passions of multipolarity mavens.
As for the role of stenotypy in the communication process: those ICANN delegates who commented on the scribes appear never to have attended parliaments or assemblies, where the record is routinely generated by steno-typists; court hearings of all sorts (the Nuremberg Trials were a pioneer here, combining cabled language interpretation, rapid steno transcription and, in many cases, overnight text translation, though with no big-screen displays); or other venues where steno-typists, using rapid chorded keyboards, catch words on the fly and drum them into readable text. Closed captioning on TV, where steno-typists key in the speech stream, is another application area with a future, at least in the U.S., where there are legal measures for providing multi-channel language streams for the sensorially challenged.
Naturally everyone’s been imagining how to link a translation rig to a steno-typist’s output to produce the sort of effect that wowed the ICANN watchers. Such a platform would provide the kind of instantaneous translation that users of instant messaging systems or chat rooms have been dreaming of ever since the Internet came along. In fact, CompuServe and others introduced early instant online translation in the mid-1990s for constrained sorts of dialog. As with everything else in translation automation so far, such dialogs sometimes work brilliantly, but often they don’t.
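For what it’s worth, the plumbing such a rig would need is simple enough to sketch. Here is a minimal, purely illustrative Python mock-up, with the caveat that every name in it is hypothetical: `translate()` is a toy stub standing in for whatever machine-translation engine a real rig would call, and a queue stands in for the live steno feed.

```python
import queue


def translate(segment: str, target_lang: str = "fr") -> str:
    """Toy stand-in for a real machine-translation call."""
    glossary = {"hello everyone": "bonjour à tous"}  # minimal demo lookup
    return glossary.get(segment.lower(), f"[{target_lang}] {segment}")


def run_rig(steno_segments, target_lang="fr"):
    """Feed finished steno segments through translation and onto a 'screen'.

    A live system would consume each segment as the steno-typist commits it;
    here a simple queue simulates that feed.
    """
    feed = queue.Queue()
    for seg in steno_segments:
        feed.put(seg)

    screen = []  # stands in for the big display at the venue
    while not feed.empty():
        screen.append(translate(feed.get(), target_lang))
    return screen
```

The hard parts, of course, are exactly the ones this sketch waves away: segmenting live speech into translatable units, and getting acceptable translation quality at the speed of speech.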
Don’t forget, though, that many journalists and researchers who work in more than one language can take down a real-time conversation over a phone line in one language and type it into notes in another. Reader, I’ve been there. On a good day you can input 1,500 words an hour this way, but you need to be a touch typist to avoid egregious errors. Applying this skill in public debate forums, you could probably keep it up for 45 minutes at a stretch (cf. interpreters, who sometimes work 30-minute shifts); but most of all you would want to get paid twice – once as a steno-typist and once as a translator.
I’d be surprised if interpretation training course developers hadn’t already spotted the market potential in developing this ear-to-text trans-language skill in a wired world. Text tool makers could probably provide a few add-ons to accelerate auto-spelling correction where necessary, or to help with on-the-fly formatting.