Artificial Intelligence

Seventy Years of Machine Translation

The legacy of the Georgetown–IBM experiment

By Rodrigo Fuentes Corradi

In 2024, the possibilities of artificial intelligence (AI) in the language industry seem endless. But the AI future that we see so clearly today actually began in the middle of the last century. This year marks 70 years since the first public demonstration of machine translation (MT), which arguably sparked the language AI revolution.

The first undertaking to solve what became the MT challenge was a response to the Cold War. The Georgetown–IBM experiment, led by Léon Dostert (a Georgetown University professor and pioneering linguist who developed the interpreting system used at the Nuremberg trials) and Cuthbert Hurd (head of the Applied Science Department at IBM), aimed to automatically translate about 60 Russian sentences into English. The carefully chosen sentences were drawn from both scientific documents and general-interest sources in order to appeal to a broad audience.

On January 7, 1954, the team gathered at IBM’s New York headquarters to demonstrate their progress. According to John Hutchins’s article on the topic, though the experiment was small-scale — with an initial vocabulary of just 250 lexical items and a set of only six rules — it was ultimately able to “illustrate some grammatical and morphological problems and to give some idea of what might be feasible in the future.”


The Original AI “Hype Cycle”

The press in the US and abroad reported on the event, which garnered considerable public engagement given both the growing interest in computers and the political backdrop. US government funding for further experimentation soon followed, along with a prediction, fueled in part by the excitement the demonstration generated, that MT systems would be capable of translating almost everything within five years.

However, progress would demand far more patience than first anticipated. The subsequent years proved bumpy due to the complexity of the Russian language and the technological limitations of the era. According to Hutchins, “After eight years of work, the Georgetown University MT project tried to produce useful output in 1962, but they had to resort to post editing. The post-edited translation took slightly longer to do and was more expensive than conventional human translation.”

Government funding came under increasing scrutiny, culminating in the creation of the Automatic Language Processing Advisory Committee (ALPAC) and its 1966 report, Languages and Machines: Computers in Translation and Linguistics. The report highlighted slow progress, lack of quality, and high costs. It noted that research funding over the previous decade amounted to $20 million, while real government translation costs stood at only $1 million per year.

More damaging were the criticisms that the methodology of early experiments, perhaps in the enthusiasm for attention and investment, was not credible. The small-scale demonstration did not robustly test the MT system, as the selected test sentences were expected to perform well. The report stated that there was “no immediate or predictable prospect of useful machine translation.”

The initiative, born in the Cold War, was placed on ice. Most MT proponents were understandably disappointed.


The Foundation for Future Solutions

The ALPAC report pulled the rug out from under MT efforts for the next three decades. However, while the report effectively paused investment, it also shone a light on a potential hybrid solution, one that reintroduced humans into the equation. The report described “a system of human-aided machine translation, relying on post editors to make up for the deficiencies of the machine output,” which set the stage for MT post-editing (MTPE).

Eventually, computer processing power increased, allowing a resurgence of MT innovations. MT quality started to improve as research shifted from recreating language rules to applying machine learning techniques through combinations of algorithms, data, and probability. This became known as statistical MT (SMT).
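The statistical shift can be illustrated with a toy sketch. The words and probabilities below are invented for illustration, not drawn from any real system: instead of applying a hand-written grammar rule, an SMT-style approach picks whichever translation candidate the training data makes most probable.

```python
# Hypothetical translation probabilities P(english | russian), of the kind
# SMT systems estimate from large parallel corpora. Values here are made up.
translation_probs = {
    "mir": {"peace": 0.7, "world": 0.3},
    "svet": {"light": 0.6, "world": 0.4},
}

def translate_word(russian_word):
    """Pick the most probable English candidate for a Russian word."""
    candidates = translation_probs[russian_word]
    return max(candidates, key=candidates.get)

print(translate_word("mir"))   # -> peace
print(translate_word("svet"))  # -> light
```

Real SMT systems of course went far beyond word-for-word lookup, combining translation and language models over phrases, but the core move is the same: let probabilities learned from data, not hand-coded rules, decide the output.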

In the mid-2010s, deep learning and artificial neural networks enabled neural MT (NMT), resulting in dramatically improved translation accuracy and fluency. NMT models have now facilitated the widespread use of MT in the language industry.

The Georgetown–IBM Legacy

The Georgetown–IBM experiment and the subsequent ALPAC report laid the foundations of MT technology. They also clarified the importance of human-led translation and MTPE, which has since emerged as a credible response to global enterprise content challenges. Chiefly, these early experiments pushed the theories of the early pioneers into the realm of practical and public demonstration, thus illuminating their value, if not their actual viability. Even though the topic would remain dormant for a few decades to come, the Georgetown–IBM experiment played a key role in the development of MT as we know it today.

Seventy years on from those initial attempts to solve the language challenge, the emergence of large language models (LLMs) and generative AI (GenAI) has caused another stir. LLMs and their future productization will now drive innovation with features such as MT quality estimators, content insight capabilities, and summarization. As we enter this new era, what exactly can we learn from the Georgetown–IBM experiment?

Well, to kick-start the language AI story, those early pioneers needed to engage the public imagination, draw attention to their cause, and find supporters and benefactors. With current public discourse around GenAI mired in uncertainty, outreach programs will likely contribute to changing attitudes and increased usage.

Moreover, persistence and patience are indispensable. Successes in early AI language experiments proved elusive, to say the least. But look where we are now. MT optimization and GenAI advances will depend on determination and growing human expertise. The challenge is not new for translators and linguists, who seem to have technology adoption and innovation in their professional DNA. Only by being proactive in the face of AI-driven changes will we achieve the next level of progress.

Rodrigo Fuentes Corradi has worked in the language industry for the past 25 years, specializing in machine translation technology and human processes and capabilities.

