Education after Singularity

Allow us to lift the veil a little on our magazine production process. As articles and perspectives of the relevance and quality we would like to share with our readers pass our desks, questions arise. During the production of this issue about the singularity, AI, and MT, the question was: “If the projected singularity comes true, and translators are indeed under threat of losing their jobs, how will this affect existing educational structures and programs?”

One hypothesis is that translators will adjust their skill set to become something akin to cultural post-editors. Another is that overall productivity will simply increase, as it did in factories with the invention of the steam engine, which sparked the industrial revolution.

We reached out to several educators to ask if the curriculum for language learning, translation, and localization degrees has already been changing. If so, how? And what do they project it will look like in 5 or 10 years? If the need for translators truly disappears, then why would anyone still study translation? 

These are some of their perspectives and essays.


Peng Wang, Ph.D.

Course Designer and Instructor, Localization Institute

According to Wikipedia, technological singularity is “a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.” If its truthfulness can be judged along a continuum, I don’t anticipate the extreme cases will come to pass. Since civilization began between 4,000 and 3,000 BC, we have experienced many paradigm shifts, from agricultural to industrial to artificial intelligence (AI). Humans are masters of balancing their relationship with what they do and the tools that they use. A key factor that differentiates our current evolution from past ones is the speed and scale of change.

“Exponential growth” brings great uncertainty and concerns. One fundamental question to ask is, “What is the point of AI existence?” Well, AI exists to help solve human problems. In that sense, translators will not lose their jobs, but their jobs will be redefined. Several new characteristics have surfaced: 

  • They will take a more holistic approach. For example, they might move from a pure translation role to translation-related linguistic tasks in natural language processing (NLP).
  • An essential part of their jobs is learning, both about their internal subconsciousness and external connections with more complex human needs.
  • An essential skill for a translator and linguist is machine learning (ML) and AI.
  • An essential aspect of their jobs is associated with quality and responsible AI, in particular whether content created by AI aligns with human intention.

AI is revolutionizing education. With the impact of ML, learner autonomy will be greatly increased. In this context, human learners need to interact with other humans and build their shared environment, which is the physical reality on which more abstract knowledge is built. Community building and AI democracy can then be directly implemented in current education systems. To address the question, “Has the curriculum already been changing?” I would say I see great potential and will be happy to build a community to co-design a future in this direction, for us and for the next generations.


Dr. Miguel Duro Moreno

Professor, Translation & Interpreting Department
Universidad de Málaga, Spain

David Bellos, a French into English translator and Princeton scholar, summarizes splendidly in his book Is That a Fish in Your Ear? Translation and the Meaning of Everything (London, Faber & Faber, 2011) the first babble of machine translation. After World War II, the US made every effort to keep the nooks and crannies of the nuclear bomb as a top-secret exclusive monopoly of their own. The Americans were very much aware that the Soviets wanted the nuke more than anything else in this world, so they set themselves to the tedious task of combing every scientific publication written in Russian that could be remotely related to the use of atomic energy. They were anxious to know how far the Soviets had gone in their research and how much they knew. It was the beginning of the Cold War. Once the Americans started reading Russian journals, they soon realized that ahead of them lay a huge, almost unsurmountable hurdle: The volume of texts to be translated from Russian into English was so vast that either they needed to hire an army of reliable Russian-English translators or… they needed to invent one very specific machine that could do the job for them as finely as the humans but much faster. Encouraged by Alan Turing’s wartime success in cracking the ciphers of the Enigma machine, they opted for the second. Warren Weaver, a senior official with the Rockefeller Foundation, wrote in July 1949 a memorandum that contained the following excerpt: it is “very tempting to say that a book written in Chinese is simply a book in English which was coded into the ‘Chinese code.’ If we have useful methods for solving almost any cryptographic problem, may it not be that with proper interpretation we already have useful methods for translation?”

Language was then very much conceived of as a code that could, and, consequently, should be decoded or deciphered. The language devices developed by the Americans, which drew upon that strong theoretical language-as-a-code approach, ended up being pie in the sky, incapable of solving the following seemingly minor meaning problem:

The pen is in the box

The box is in the pen

Even now, more than 70 years down the road, DeepL’s English-Spanish rendering for the second piece of that problem (The box is in the pen) remains semantically meaningless: ‘La caja está en el bolígrafo’ (the actual proper translation is only given by DeepL as the third alternative, ‘La caja está en el corral,’ an utterance only understandable, by the way, by a small chunk of the Spanish-speaking world).

Much as machine translation (and interpreting) has made enormous progress since Warren Weaver’s translation conundrum, the so-called Fully Automated High-Quality Translation (FAHQT) still remains a challenge today. However, one might wonder if the projected singularity can make translators and interpreters lose their jobs, or if humans will be able to use a babel fish device similar to the one described by Douglas Adams in his 1979 novel The Hitchhiker’s Guide to the Galaxy (Harmony Books, New York), or even if translation and interpreting university degrees will be made redundant by devices, machines, or robots one day.

Perhaps, but not anytime soon. McKinsey Global Institute’s report entitled Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation (New York, 2017) found that, “One of the biggest remaining technical challenges is mastery of natural language processing — understanding and generating speech. These capabilities are indispensable for numerous work activities but, despite great progress in areas such as machine translation, machines still have far to go to achieve human levels of performance.” A few years earlier, Carl Benedikt Frey and Michael Osborne had published a report entitled The Future of Employment (Oxford, Oxford Martin Programme on Technology and Employment, Oxford University, 2013) where they listed the 700 jobs most likely to be replaced by an automatic device (a machine equipped with artificial intelligence capable of thinking and acting independently) in an undetermined future. According to them, translators and interpreters have a 38% chance of being replaced by a machine in the future since their jobs are only ranked 264th (right among packers and packagers and home health aides).

Systran (1968), a machine translation company, launched an automated translation service that drew upon a selection of dictionaries. The results, even today, are far from satisfactory.

Eurotra (1987), a European Economic Community machine translation project based upon grammatical transfers, needed a whole weekend to translate from English into Danish the following piece of text: Japan makes computers. It was discontinued right after that.

IBM’s Translation Manager (1992), an expensive ($6,300) but very limited machine translation software, followed suit but, given its poor performance, was left to drift unsupported in 2002.

Then, along came Google Translate’s neural revolution (2016), Microsoft’s Bing Translator (2017), Skype Translator (2017), Linguee’s DeepL, and Amazon Translate (2018), some of which nowadays offer very palatable translation (and even interpreting) products for increasingly dense and difficult texts, although they still cannot replace the human brain.

Why? Depending on the approach one uses to respond to this question, a plethora of answers might be offered. Let us just resort to the strongest yet simplest one: connotation. Language users do not only employ their superpower (yeah, language is a superpower) to speak or write or read or listen or translate or interpret, etc., in order to denote something (i.e., to serve as the precise linguistic meaningful expression of the notion of something) but also, and very often, to connote something more or something else (i.e., to convey meaning in addition to the exact explicit sense that is linguistically given to something through denotation). Computers are excellent at dealing with denotation but lousy at managing connotation. To handle connotation, one needs to know (and, if need be, rebuild) the settings where a specific speech utterance or written text was generated, and that is something that the human brain does beautifully well by putting its two hemispheres to work, but also something that machines are incapable of replicating (at least, so far).

Following the industry (as usual), university degrees across the world have been adapting their translation, interpreting, and localization programs as quickly and masterfully as their resources have allowed them to do so. Nowadays, a university student in translation uses a CAT tool or a machine translation tool in their learning/training process as naturally as typewriters were used 40 years ago to produce neatly done translations, or as pens and notebooks were employed 60 years ago for the same purpose. Translation, as a process, is a technology that adapts itself very well to the tools it is done with.

Translation university education has also been evolving to acclimatize itself to what the market needs. It is true that it always goes behind the industry, but it is equally true that critical thinking is what feeds theory, and that there is nothing more practical than a good theory (even if one wants to disrupt it). In five or 10 or 15 years, universities will continue to train translators, linguists, engineers, businessmen and businesswomen, lawyers, economists, etc., all of whom will reshape the language industry landscape, whatever it will be.

One might well wonder: if the need for translators truly disappears, why would anyone still study translation? One could equally ask the same about law, engineering, or economics. According to Frey and Osborne’s list of 700 jobs likely to be replaced by automata in the near future, economists’ careers will vanish into thin air before translators’ (economists are ranked 282nd); law firms will soon follow suit (lawyers are ranked 115th); while engineers, ranked 63rd, will hold their jobs a bit longer, but not as long as audiologists (ranked 5th), mental health and substance abuse social workers (ranked 4th), emergency management directors (ranked 3rd), first-line supervisors of mechanics, installers, and repairers (ranked 2nd), and, finally, recreational therapists (ranked 1st!).

As noted by Jorge Luis Borges (Las versiones homéricas, 1932), “No enigma is so consubstantial with the modest mystery of written language as the one put forward by translation.” Human translation and interpreting have been in the trade since time immemorial and are not likely to disappear any time soon unless a robot or a babel fish has the power to grasp the subtle, delicate, and elusive connotations included in the Mexican mole poblano recipe, in a poem written by the Spanish mystic St. John of the Cross, or in an Argentinian tango when they are required to render them into any language (and culture) other than Spanish and the Spanish-speaking world.


Prof. Dr. Uta Seewald-Heeg

Anhalt University of Applied Sciences, Köthen, Germany

Artificial intelligence enables software and machines to perform tasks that, in our traditional mindset, require human intelligence. Copywriting and translation are two of those human tasks.

In the last few years, there has been tremendous progress in the development of AI-driven machine translation and copywriting. Software with machine learning algorithms achieves impressive results primarily with languages for which training data are available in huge quantity and, above all, high quality. Nevertheless, even those machine-translated texts are not free of errors. Machine-translated documents that at first glance seem to have impeccable quality often hide errors that can only be detected by a trained translator. And when we look at MT output into languages for which less training data is available, Ukrainian for example, the results so far are much less impressive than translations from English into German. Here, human intelligence is still needed.

In the field of technical documentation, numerous studies, including university theses and student projects, have demonstrated that the use of MT results in significant efficiency gains if professional workflows are in place, texts are written for translation, and machine translation is followed by subsequent post-editing. If we look at workflows like these, we observe that the work of translators has already changed. They are strong evidence that the qualifications and profiles of future translation professionals will differ from what they were several years ago, or from what they still are in many places today. Translation professionals will have to adjust their skill set.

In the future, translation professionals will need to be trained as highly interdisciplinary experts. Post-editing is one of the skills that is already needed today. And on closer view, post-editing itself requires a variety of qualifications. In addition to very good translation skills, mastering CAT tools and their functions is just as important as the ability to build regular expressions to describe patterns, so that recurring errors can be corrected efficiently by standard replacement routines.
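As a minimal illustration of such a replacement routine (the segments and the error pattern are invented for the example), a post-editor might use a regular expression to correct a recurring MT error, such as English-style decimal points left in German target segments:

```python
import re

# Hypothetical recurring MT error: English decimal points surviving in
# German output, where a comma is the correct decimal separator.
segments = [
    "Das Paket wiegt 2.5 kg.",
    "Die Temperatur beträgt 36.6 Grad.",
]

# Match a period only when it sits between two digits, so that
# sentence-final periods and abbreviations are left untouched.
pattern = re.compile(r"(?<=\d)\.(?=\d)")

fixed = [pattern.sub(",", s) for s in segments]
print(fixed)  # ['Das Paket wiegt 2,5 kg.', 'Die Temperatur beträgt 36,6 Grad.']
```

The same pattern could be run as a batch find-and-replace inside a CAT tool that supports regular expressions, which is exactly the kind of efficiency gain described above.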

At Anhalt University of Applied Sciences in Saxony-Anhalt in Germany, we have for years been regularly adapting our curriculum to changing requirements. In many discussions with LSPs, MT developers, and providers, it became clear that translation experts with an understanding of IT fundamentals, for example the way (N)MT systems produce their translated results, as well as active IT expertise, are needed. Therefore, in our undergraduate study program “Specialized Translation — Software and Media,” translation technology and methods in computational linguistics, as well as fundamentals of computer science at a beginner’s level, are integrated into the curriculum. In our modules on localization technology, CAT tools as well as machine translation and post-editing are an integral part of the study program. We offer several modules treating different aspects of localization, for example a second-year module in which file formats and their processing, as well as the configuration of parser components, are the focus. At the Master’s level, in our study program “Software Localization,” students additionally become familiar with different working environments of CAT tools and with exchange formats like TMX, SRX, TBX, XLIFF, etc., gain a certificate in post-editing, and go into technical depth when localizing graphical user interfaces. Meanwhile, we have also added transcreation to our curriculum. Whereas most technical texts need efficient post-editing, marketing texts and immersive in-game texts of video games often need transcreative competence.
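To give a flavor of what working with exchange formats like TMX involves, here is a minimal sketch that reads language pairs out of a tiny TMX 1.4 document using Python’s standard library. The document content is invented for the example; real TMX files carry a fuller header and many translation units.

```python
import xml.etree.ElementTree as ET

# A minimal, invented TMX 1.4 document with a single translation unit (tu)
# holding an English and a German translation unit variant (tuv).
tmx = """<?xml version="1.0"?>
<tmx version="1.4">
  <header srclang="en" segtype="sentence" adminlang="en"
          creationtool="demo" creationtoolversion="1.0"
          datatype="plaintext" o-tmf="demo"/>
  <body>
    <tu>
      <tuv xml:lang="en"><seg>File not found.</seg></tuv>
      <tuv xml:lang="de"><seg>Datei nicht gefunden.</seg></tuv>
    </tu>
  </body>
</tmx>"""

# ElementTree expands the xml: prefix to the full XML namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

root = ET.fromstring(tmx)
for tu in root.iter("tu"):
    pair = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
    print(pair)  # {'en': 'File not found.', 'de': 'Datei nicht gefunden.'}
```

Hands-on exercises of this kind show students that the formats they exchange between CAT tools are plain, inspectable XML rather than black boxes.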

In the next 10 years, the need for traditionally trained translators will decrease drastically. Academic institutions and people responsible for the development of curricula will therefore either have to focus on the technological profile of graduates and train language specialists with NLP competence, who will also need proficient mastery of CAT tools and post-editing expertise.

Alternatively, curricula will have to train language experts with transcreative competence. Graduates with such a profile are needed especially when it comes to the translation of marketing material and texts for video, tabletop, or card games.

Study programs for both target groups will have entirely different emphases.


Pete Smith, Ph.D.

Professor of Modern Languages, University of Texas Arlington

I am very proud that a number of leading localization programs have been developed in the US — at the University of Texas Arlington, which trains undergraduates, and at colleague schools such as BYU, Kent State, and other campuses. We are also proud to partner with leaders in localization education at the graduate level such as the Middlebury Institute of International Studies. These college majors and minors have been very responsive to industry trends and have woven a wide variety of professional skills and software systems into their teaching. Critical and current themes such as NLP and MT are taught in great detail at UTA and at these other leading schools as well! Localization and language services majors and minors are among the most industry-responsive and forward-looking curricula in US colleges and universities today. I can testify that my colleagues and I bring industry experience, attend and learn from professional conferences and visionary leaders, and revise our classes and degrees almost every semester based on what we learn about industry needs and trends.

But at the same time, it is important to note — as I have said from the stage at TAUS — that higher education has its own broad educational mission, its own boards of regents and governors, and its own accountability to accrediting organizations as well as to its funders both public and private. In most cases, this mission is broader and longer-term than what might be assumed in the word “training,” and it is something of a fallacy to consider a college education as simply a “training pipeline.” That in and of itself is not our mission! Each time I hear about the “gaps” that the language services industry finds in our curricula or students, I honestly cringe a little bit.  

The late Vartan Gregorian used to say that a university education is a bridge between where you are today and the farthest you can possibly go in the future, and we sell ourselves short if we envision a short span leading to only one of today’s job openings. In addition to the ability to train an MT engine or use a CAT or design tool, colleges and universities develop future skills most prized by employers today and in the future: analytical thinking, learning and coachability, communication and collaboration across languages and cultures, complex problem solving, creativity and innovation, among others. 

What does the future of our industry hold? You cite the idea of translators becoming cultural post-editors, but I actually prefer a richer version of the future that one of my MIIS colleagues, Adam Wooten, has sketched out in his writing and publication — the idea of a future centered on NLP‑ and AI-savvy linguists and cultural specialists, a vision that preserves at its core our deep knowledge and experience in languages and cultures in a rapidly evolving technology world. It is a view of the decade ahead driven less by an engineering-centric worldview and based more on the thought leadership that learning scientists and linguists should and will exercise in creating the future state.

In his later years, Professor Gregorian argued that this tension inherent in US higher education — broader, longer-term educational preparation vs. training for the “real world” of employment — wasn’t necessarily an either/or choice. Modern, responsive university programs can do both, he argued. And I would argue that our best localization programs today can and do achieve both of these ends.


Dr Joanna Gough

Lecturer in Translation Studies, University of Surrey

Artificial intelligence (AI) has seen significant advancements in recent years, leading to debates and raising many questions about the future of translation as a profession and translator training. The success of large language models and OpenAI’s GPT-3 has sparked heated discussions and raised the question of whether a boundary has potentially been crossed and whether singularity has become a plausible possibility. Despite the progress, it is my belief that true AI capable of understanding language and exhibiting the real-life situational judgment required for effective communication and translation is not yet a reality. Until this level is achieved, human translators will continue to play an important role in multilingual communication.

We must remember that, whilst the goal of translation is the achievement of (sort of, or a degree of) equivalence, as my CTS colleague Félix do Carmo points out, translators are trained to deal with non-equivalence — something that machines are not! This means translators solve problems, whilst machines (currently) predict the next word. This is sufficient in some instances, but inadequate in others. And where it is not enough, we will need humans to deal with the complexity of language and diversity of cultures. 

In the future, I think it is unlikely that businesses, organizations, or content owners/creators will relinquish the responsibility for translation solely to AI, unless the content is trivial. Valuable and impactful content needs a human touch, so we will still need people who can make judgments in highly contextualized multilingual situations, who know what words or expressions to use to make audiences connect with a brand, use a service, or buy a product. This is also crucial in live communication scenarios such as government meetings, business negotiations, or medical consultations where human translation and interpreting services supported by technology and post-editing machine translation (PEMT) are necessary. The goal is to make sure that communication across languages and cultures is accurate, effective, and reliable and to ensure that human and AI approaches are used in combination, to achieve the best results in a given scenario.

Therefore, the translation profession, as we know it, has to evolve to embrace this change. It is possible that we will need fewer translators but more of other, perhaps more hybrid professionals with a unique blend of skills. My concern is that in the future, PEMT as a profession may face challenges in attracting skilled individuals. Many individuals who are linguistically talented and have a passion for translation may not be motivated to undergo training specifically for PEMT. The tasks involved in PEMT offer little opportunity for skill development, job satisfaction, or creative fulfillment, which may deter educated polyglots and creative individuals from pursuing it as a career. Already-trained translators might take PEMT jobs out of necessity, or if they have other, more creative jobs to compensate, but I do not predict a PEMT job description will attract many students. This could result in PEMT being performed by untrained bilinguals, unless the rates for PEMT significantly increase, making it an unchallenging but financially attractive career option.

Inevitably, translator training courses need to adapt to the changing realities. My concern is that many translation programs may be at risk of declining unless they are continuously kept up to date to meet and anticipate industry demands. Many universities are already reshaping their courses, adding or changing modules to allow students to specialize, diversify, or hybridize. At Surrey, for example, apart from extended technology training beyond CAT and MT, we offer modules such as Human-Computer Interaction; Writing and Rewriting for Translators; Translation for the Creative Industries; Computational Thinking for Translators; and Business and Management in Translation or Respeaking. We fully embrace technological developments but focus on skills where humans excel. We think that the transformation and reshaping of translation/multilingual communication courses will likely have to go further and faster over the next five years to deal with the rapid changes in the industry and to enable students to understand the bigger picture as well as to give them relevant skills. One important aspect of this is that all (future) translators will need to engage with data-driven technologies including AI, acquire a critical understanding of how these technologies work or don’t work, and be able to work with them effectively and efficiently.

However, despite the ongoing efforts to reshape and transform translator training programs, there remains a concern that translation programs may still experience a decline. This could be prevented by increased industry participation in reskilling and upskilling translators, and a willingness to create more appealing work prospects and better working conditions, especially for those at the bottom of the supply chain. The abundance of information online about the industry could also make it difficult to attract language-skilled individuals to join, given the relatively unattractive entry-level employment, limited career progression, and increasingly low pay prevalent in the freelance sector.

Given that many translation courses have been (re)designed to meet the needs of the commercial sector, it is important that the industry takes a more active role in supporting translation centers and in raising the status of human workers. This could be achieved by setting higher standards for recognizing the value that human resources bring to the supply chain and demanding that clients pay accordingly. This would definitely make the profession more attractive, ensure that translation courses thrive, and allow them to continue training the linguists our industry will still need, alongside other emerging skills and roles as needed.

There are already many positive examples of academia-industry collaborations in the UK and elsewhere in the world. For example, Surrey CTS has, for many years, collaborated with Sandberg and this year will be collaborating with Nimdzi on the Business and Management in Translation module. Many individual industry professionals have delivered talks, seminars, and workshops as well as mentored our students. However, I believe there should be a more structured approach to such collaborations in the future, with more strategic partnerships with industry partners across the translator training sector.

The scenario of a decline in translation programs in future does not, in my opinion, mean translation studies as a discipline will also decline. We must remember that commercial translation and its related technologies are only one part of translation studies as a discipline, which has long traditions rooted in literary, cultural, and linguistic studies. While the commercial aspect of translation has grown and become more prominent in recent years, it should not overshadow the rich history and diverse applications of the discipline. It is important to maintain a balance between the practical and the academic to ensure that future generations of translation professionals are equipped with both the necessary technical skills and a deep understanding of the cultural, historical, and linguistic context of their work. Equally, the translation industry plays a vital role in enabling communication and understanding across different cultures and languages. It is therefore crucial to maintain the relevance and importance of translation studies to ensure its societal value is also recognized and sustained.

Lastly, let’s bring the discussion back to the debate on AI and singularity in the context of language, translation, and translator training. Imagine a future world in which all texts are generated and translated by AI. To me it is a haunting prospect. Humans possess a remarkable capacity for creativity, especially in language development and its evolution. My teenage kids already speak a different version of English than me. However, AI currently lacks the ability to evolve language, as it doesn’t exist in an environment that shapes a language. It relies on a continuous input of new data created by humans. If humans stop evolving language, it would likely have consequences for our ability to communicate, share ideas, and build new knowledge, since language evolution is important for both individual and collective learning and growth. The loss of the ability to evolve language could lead to a stalling of linguistic and cultural evolution, and thus impact the overall evolution of the species. Language is what sets us apart as human beings and gives us a sense of belonging and connection to a particular community, time, and place. If we allow AI to take control over our language, then indeed, we could be entering singularity and with it, lose the sense of that belonging and connection.

If this scenario materializes, then yes, the need for translators will be eliminated as we will see the emergence of a massive, ubiquitous cloud of processed, homogenized multilingual content regurgitated back through billions of iterations, allowing for easy generation of text in multiple languages at no cost. But then translators will be in the same boat as many other professions, so I think the future is unpredictable for all knowledge workers. But for now, keep calm and carry on working together to train not only AI systems, but also the next generations of professionals who will assist in bridging cultural gaps, boosting businesses, and fostering unity among people.


Max Troyer

Associate Professor, Translation and Localization Management program, Middlebury Institute of International Studies at Monterey

Asking someone what keeps them up at night is often a good ice breaker, and one of those issues for many localization, translation, interpretation, and language learning educators (and many other fields) is if or when AI will change how we teach, how students learn, and what kinds of careers exist for our graduates. I think it’s helpful to keep in mind that many industries have gone through this before. All you have to do is search for a list of “historical careers replaced by technology” and you’ll see that this is not a new issue. Many of us were probably hoping AI would put lawyers and accountants out of business, but leave the translators alone.

Is it our turn after all? Well, YouTube and even the New York Times are rife with videos and articles saying that knowledge workers, writers, doctors, lawyers, programmers, and so many more are at risk of being displaced by AI powered by large language models — in a nutshell, AI that’s really good at writing beautiful fluent prose. Something tells me you’ve recently been hearing a lot about ChatGPT, an OpenAI product that follows on the heels of DALL·E 2, a deep learning model that generates images from carefully written prompts. This is where the so-called “human in the loop” comes in. According to doomsday prophets in the media, if we’re not careful, writing prompts is exactly what knowledge workers will be reduced to doing on a daily basis.

Just a few months ago, if you had talked to me about AI, I would have said we were still waiting for the singularity to produce intelligent machines that rival humans before knowledge workers’ jobs were seriously at risk. With large language models, while I don’t believe we’ve reached the singularity, I definitely feel we’ve reached a point of no return, and these models will only get better with time.

I first investigated ChatGPT after reading that major schools were already trying to ban it. Initially I wasn’t too concerned, until I gave it a try myself.

Localization training

I didn’t think ChatGPT was going to affect my courses. My focus is on training future localization professionals, and giving them the skills that will allow them to have successful careers. These skills include knowing what tools and processes companies are using to localize their content, how we can get their content into the tools we use for translation, and finally, how we can best assist the translators who will be translating the content.

With these as my goals, I thought I was immune to ChatGPT. But then I realized that all of my courses include a final project culminating in a blog article and a short presentation. I can already see the prompt students will use: “You are an expert in localization. I will give you a poorly written article. You will give me back a 1,200-word essay written to the highest standards.” Need to create all-new final assignments? In a surprise twist, I realized that while students would still need to complete a localization-related project, the writing and presentation aspect would no longer be as big a challenge. This could be very beneficial for our many international students whose native language is not English. ChatGPT could potentially put our “writing center” out of business and generally raise the bar on the quality of our students’ writing.

I’m currently working on a module for my Audio-Visual Localization course, and I told ChatGPT it was a subtitling expert and asked for 10 tips for checking subtitles. While I was not surprised by nine of the tips, the tenth was embarrassingly not on my radar: “Test the subtitles with different audiences to get feedback and improve them accordingly.” What’s funny to me is that I love the idea of a focus group, but I didn’t think to apply this to subtitles! If students apply ChatGPT to their final projects, it’s quite possible it will add some things they weren’t considering or expecting — and assuming they understand what ChatGPT chose to add, this will strengthen their overall learning experience!

Translation training

I have a master’s in translation, and while I’m not a practicing translator, I can’t help but see how powerful ChatGPT could be in facilitating translation. While many schools, mine included, believe there’s still value in teaching translation skills “from scratch,” schools will inevitably end up incorporating CAT/TMS tools along with MT post-editing. For years, we’ve been saying “at least only a human can perform post-editing,” but it turns out ChatGPT is really good at editing! Hallelujah, I say to that. Most translators will tell you post-editing is really not fun for humans and likely results in a lot of missed errors because they are bored out of their minds. Tasks that can be handled by machines maybe just should be.

Similar to the prompt above, a post-editing prompt can even specify a register: “I’m going to give you a poorly written article. You’re going to rewrite it by fixing the spelling, grammar, syntax, and flow. Your article will be written for a fifth-grade reading level.” This works for many languages, and if you ask ChatGPT, it will admit that some languages are stronger than others and that its best languages are Spanish, French, German, Italian, and Chinese. That’s a pretty good start. And while ChatGPT cannot produce Egyptian hieroglyphs or braille (yet!), it can render text in Morse code. It’s only a matter of time before MT providers automatically run their output through the ChatGPT API or an equivalent as a final polish before the text is delivered back to the customer.
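For readers curious what such a polish step might look like in practice, here is a minimal sketch in Python. The prompt wording, the fifth-grade register, and the model name are all illustrative assumptions, not a vendor’s actual pipeline; the API call itself is shown commented out so the sketch stays self-contained.

```python
# Sketch of an MT post-editing "polish" step driven by a chat-style LLM.
# Only the prompt construction runs here; the client call is illustrative.

def build_postedit_messages(mt_output: str, grade_level: int = 5) -> list:
    """Wrap raw MT output in a post-editing prompt like the one above."""
    system = (
        "You are going to rewrite a poorly written text by fixing the "
        "spelling, grammar, syntax, and flow. "
        f"Write for a grade {grade_level} reading level. "
        "Return only the rewritten text."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": mt_output},
    ]

# With a chat-completions client (hypothetical wiring), the final polish
# would then be a single call, e.g.:
#
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",  # illustrative model name
#     messages=build_postedit_messages(raw_mt_text),
# )
# polished = resp.choices[0].message.content

messages = build_postedit_messages("The cat sat in mat, it was happily.")
print(messages[0]["role"])  # system
print(len(messages))        # 2
```

The point of the sketch is only that the “polish” is a thin wrapper: the MT output goes in as the user message, and the register lives entirely in the system prompt, so swapping in a different reading level or language is a one-line change.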

Interpretation training

As far as I know, interpretation training has not been radically influenced by AI, but obviously remote interpretation has started to infiltrate curriculums. Where AI may be eating away at the interpretation market is real-time captioning. YouTube, Twitch, Zoom, and many other video services now support some variation of live subtitling. Zoom already offers the ability to use MT to translate the live subtitles, and other services will likely be rolling this out soon. 

Nothing is stopping video services from also applying ChatGPT to their workflows so the results of the voice recognition, the MT output, or both are used to create polished, if not 100% accurate, subtitles. Will conferences start using opera-style SURTITLES™ to display a transcript and/or translation(s) at the bottom or top of the stage? If the customer wants to provide in-ear simultaneous interpretation, what’s stopping them from using a text-to-speech AI engine such as Speechify?

I happen to be married to an interpreter who has done some teaching, and according to her, one of the most challenging aspects of developing a course is to find original speeches that have never been published, lest students find and practice them — especially right before an exam. ChatGPT seems to be really good at generating original speeches, and even if it’s plagiarizing something from somewhere, it also seems to be really good at paraphrasing and reordering sentences so they’re unrecognizable. Similarly, students could use ChatGPT to generate endless practice material at varying levels of difficulty. They could also use Speechify and adjust the playback speed to practice simultaneous interpretation.

Language learning

A lot of the same benefits of ChatGPT for interpretation also apply to language learning. Language instructors can write prompts to generate infinite dialogues between two speakers in almost any setting, at almost any language level, and constrained to a CEFR level such as A1, B2, or C2. Why purchase a textbook when ChatGPT can generate dialogue between two people placed in common everyday settings at varying degrees of complexity? You can also ask ChatGPT questions about language and it will give you a definition and examples in context. With some tweaks to the user interface, ChatGPT could power an interactive language learning textbook! One of the coolest features of ChatGPT is you can always just tell it to “Tell me more…” and see the results.


Is there anything AI cannot do? Some experts believe AI is not very good at common sense or human values, but Paul Pallaghy, writing on Medium, gives plenty of examples showing that GPT-2, GPT-3, and ChatGPT respond fairly accurately to common-sense problems. Based on his estimates, ChatGPT is about 95% accurate on common-sense questions, roughly on par with humans.

To me, as an educator, this means we need to immediately commit to learning as much as we can about what large language models can and cannot do, and find ways to incorporate tools such as ChatGPT into the classroom — even if that means showing students how to write effective prompts. This is just the next level of the Common Sense Advisory-coined term, the “augmented human,” and it means we’re not quite ready to put language professionals out to pasture.




MultiLingual Media LLC