Artificial Intelligence

Language Is No Longer Human

By Manuel Herranz


The concept of language has always been an integral part of what makes us uniquely human. For millennia, the ability to communicate through words has been reserved solely for us: the master species, Homo sapiens. Language separated us from other animals, and it was key to our evolution and ability to develop abstract thinking — starting from humble origins, when language was just a tool for describing very concrete and mundane facts and events. Anthropologists say that the first human utterances probably conveyed simple meanings such as “danger,” “food,” or “I like you.” Humans carried language with them in every migration from eastern Africa, which explains why every culture — even the most isolated — speaks a language. Today, approximately 7,000 languages are spoken worldwide.

But in December 2022, language seemingly ceased to be a uniquely human trait. For the first time in history, something that is not human communicated with us through fluent language. That something was ChatGPT. While humans can communicate with some animals via spoken commands and sign language, communication with generative artificial intelligence (GenAI) is entirely novel. An artificial dialogue system with a huge attention span, refined with human feedback and numerous layers of neural networks for safety, provides the experience of “talking to a person.” Our brains react as if we were indeed conversing with another member of our species. Thus, we must face the fact that language is no longer (exclusively) human. 

GenAI has fooled us just like scientists have fooled apes, dogs, dolphins, whales, and even bees with artificial noises, smells, and sunlight to study how they communicate. Humans have fooled animals into signaling the location of food, sending alarm signs, and believing another member of their species was ready to mate. Every chatbot powered by a large language model (LLM) has fooled (most) users into believing the information provided by such systems was valid. And even when it contained inaccuracies, we were amazed at the wonder of articulate, machine-created sentences that could pass as human — albeit a human who is bad at fact-checking. After all, humans are equally good at producing disinformation and incorrect schools of thought.

Instinctively, evaluations (such as Noam Chomsky’s “The False Promise of ChatGPT”) soon moved from fluency to factual consistency and reasoning as the benchmark for non-human language. ChatGPT was not created to reason, but rather to generate language (remember, GPT stands for generative pre-trained transformer). The attempts to introduce some reasoning capabilities in the race toward artificial general intelligence came later. This contrasts with the stated target for all of us in the language industry, which has been to provide clear and perfect language versions — with the emphasis on fluency, not reasoning. Nobody expected translators to “reason.” They were expected to translate and to occasionally “adapt” the message to a local audience. LLMs have proven quite capable of adapting content as well, such as to specific dialects, audiences, and tones.


Humans don’t master languages well

Heinz (Henry) Kissinger, the architect of international politics and American foreign affairs for several decades, passed away a few months ago. He was born in Fürth, in the heart of Bavaria, to a Jewish family that managed to escape the Nazi regime and settle in New York in 1938. His younger brother, Walter, was also born in Germany, and the brothers had only a basic knowledge of English when they arrived in the United States. While Walter eventually spoke American English with no accent, Henry was widely mocked for his distinctive German accent, with parodies on the radio and on Saturday Night Live. Monty Python also wrote a funny song about him with cutting lyrics.

The two brothers’ differences in linguistic competence are attributed to various reasons. According to psycholinguist Steven Pinker, who defends language as an instinct, the age factor was key: When the brothers arrived in the US, Walter was still a child, while Henry was already a teenager. Walter took pleasure in joking that he spoke English without an accent because, as the younger brother, he had to listen more.

Learning a new language is a long and winding road. What is easy and fun for children becomes a source of frustration for adults. Many language learners reach a point where they decide to stop and accept their level of proficiency. The linguistic phenomenon known as “fossilization” occurs when a learner’s command of the language stabilizes at a given level of proficiency, errors and all, and becomes permanent. Several factors can contribute to fossilization, including age, the desire to maintain a previous identity, environment, and willpower.

Henry Kissinger’s German accent was in line with his personality and strategic thinking. Could he have shed his German accent in his youth? Yes. But his English was fossilized, a trait that made him seem more authoritative, effective, and exotic. In 1974, he spoke to the press in Bonn, Germany, alongside German-speaking Willy Brandt. When Kissinger took the floor, speaking first in German and then in English, he said with a laugh, “I speak no language without an accent.”

What paradoxes surround Kissinger! He didn’t master either of his two languages perfectly, but he dominated international politics. Like an operatic or comedic character, his entire life as a diplomat was a starring role in a foreign drama. In the great theater of the world, the elite typically do not master more than a handful of languages, even if they excel at oration.

Language as a commodity

This article seeks to redefine the limits of what makes us human — uniquely human — and language is not one of those qualities any longer. The language industry has been working on automation for a long time, so language professionals are among the most qualified to reflect on this fact.

Humans are beginning to trust AI applications, sometimes almost blindly. In translation, a “human touch” — meaning a post-editor or proofreader — is recommended for AI-produced language, but this is not much different from reviewing our own work or asking a colleague for collaborative drafting. And AI applications impact more than how we generate or translate content, how we communicate with machines, or how machines interpret and process our human languages (something natural language processing (NLP) has been working on for decades). AI language applications are influencing how humans communicate with other humans, as well.

I foresee a not-so-distant future in which machines, fine-tuned to our preferences, will facilitate much of the communication between humans. It is a well-known fact in economics that once industrial processes are applied to a product or service, we enter economies of scale, and the product or service begins to be commoditized. This is not a strange concept — the per-word price of translation has been commoditized for decades, as has original content creation and editing. Many factors are involved in “reasoning,” which is what truly separates us from other beings and machines. Language is one of those factors, as we reason through language. Other factors — such as instincts, a sense of morals and purpose, information, and past experiences — are quite immaterial, depending on which author we follow.

For just over a year, content generation has been open to automation by LLMs. While LLMs will only be one piece of the puzzle in a true, general AI, they will be an important component. Let’s emphasize the remarkable scalability of these AI-powered processes compared to their human counterparts and shift our mindsets to language as a product. Think of it as a conversational product — a tool for negotiation, translation, cultural adaptation, and decision-making.

One area where AI outperforms humans when it comes to language is marketing. Advertisements are essential to any successful brand strategy, but crafting compelling ads requires creativity and insight into consumer behavior. This task can be challenging because people’s preferences change rapidly, making it hard for businesses to keep up. Traditionally, language service providers (LSPs) that translated marketing content had to rely on “in-country” specialists and face availability issues, with all the typical problems of a human-based process that cannot scale. AI systems powered by machine-learning techniques, by contrast, can enable companies to generate personalized ad campaigns based on customer demographics, purchase histories, and online behaviors. Some of the latest AI advancements accept “live” input to steer the LLM toward specific types of content without retraining the whole system (retrieval-augmented generation, for instance). We are heading into an era in which the transfer of information to customers and users worldwide can happen seamlessly with AI systems.
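The retrieval-augmented generation idea can be sketched in a few lines. This is a toy illustration under stated assumptions, not any product’s implementation: the word-overlap retriever and all names are hypothetical, and a real system would rank documents with a vector embedding model and send the assembled prompt to an LLM.

```python
import re

# Toy sketch of retrieval-augmented generation (RAG): instead of
# retraining the model, "live" documents are retrieved at query time
# and prepended to the prompt that goes to the LLM.

def tokens(text: str) -> set[str]:
    """Lowercase word set, keeping digits and the % sign."""
    return set(re.findall(r"[a-z0-9%]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by words shared with the query; return the top k."""
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt a real system would send to an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Spring campaign: 20% discount on running shoes.",
    "Corporate history of the brand since 1952.",
    "Loyalty members get free shipping on all shoes.",
]
prompt = build_prompt("What discount applies to running shoes?", docs)
```

Because the retrieval happens per request, yesterday’s campaign copy can influence today’s output with no retraining step, which is the property the paragraph above describes.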

Another example of AI in a traditionally human space is conversational AI, also known as chatbots. Chatbots use NLP, which allows them to understand and respond to human requests accurately. This is one of the features we enjoy the most, as they provide us with the feeling of intelligent responses (it is not called “conversational” for nothing!). AI chatbots simulate conversations between machines and humans using messaging platforms or voice assistants like Amazon Alexa, Google Assistant, and Siri. These bots can handle multiple concurrent conversations, operate continuously without breaks, and process massive volumes of messages quickly, rendering traditional call centers obsolete. A study conducted by Juniper Research in 2018 predicted that chatbot usage would increase sixfold by 2023, demonstrating the growing demand for this technology. Obviously, this research is obsolete, and the figures are now much higher.
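The claim that one bot can service many conversations at once comes down to ordinary asynchronous I/O. Below is a minimal sketch; the canned-reply handler is a hypothetical stand-in for a real NLP backend, and the point is only that a single process interleaves several dialogues.

```python
import asyncio

async def handle_conversation(user: str, messages: list[str]) -> list[str]:
    """Toy handler: one canned reply per message (stand-in for real NLP)."""
    replies = []
    for msg in messages:
        await asyncio.sleep(0)  # yield control, simulating waiting on I/O
        replies.append(f"[{user}] bot reply to: {msg}")
    return replies

async def main() -> list[list[str]]:
    # Several conversations serviced concurrently by one process.
    return await asyncio.gather(
        handle_conversation("alice", ["hi", "order status?"]),
        handle_conversation("bob", ["refund please"]),
    )

results = asyncio.run(main())
```

While one conversation waits on the network, the event loop serves the others, which is why a single deployment can absorb message volumes no call center could.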

This type of engagement can be useful not only in call centers, but also for the legal system and in education, creating ripples of change in society as we know it. In this case, we do not only have human-like language being generated, interpreted, and processed by machines, but we also see how systems can affect how humans communicate with each other.


Translation is one clear domain in which AI is excelling beyond human capabilities. While it is not easy to control LLMs’ generation capabilities so that they become “true and faithful translators,” questions about accuracy and terminology control can be resolved with additional neural networks. Retrieval-augmented generation is beginning to play a very promising role as the final post-editor, and I see the potential for such systems to publish translations without a final human review. The European Commission took a bold first step last year.

Scalability is a critical factor in the success of AI-driven language generators, given the ever-increasing volume of text produced globally every day. According to some reports, around five exabytes of new digital content are created every two days. Handling such quantities manually would require a colossal workforce, whereas, by one rough calculation, GPT-3.5 was trained on the equivalent of approximately 20,000 years of human reading, providing it with insights on practically all subjects. AI algorithms can manage petabyte-scale datasets, ensuring rapidity, consistency, and cost-effectiveness. Furthermore, AI systems can learn and adapt automatically as new linguistic patterns emerge, whereas human performance tends to plateau once acquired skills become routine.
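A figure like “20,000 years of human reading” can be reproduced with a back-of-envelope calculation. Every constant below is an assumption chosen for illustration (the training-token count for GPT-3.5 has not been officially published), so the result is an order of magnitude, not a measurement.

```python
# Back-of-envelope check of the "20,000 years of reading" figure.
# All inputs are assumptions, not measured values.

TRAINING_TOKENS = 300e9   # rough public estimate for a GPT-3.5-class model
WORDS_PER_TOKEN = 0.75    # roughly 4 characters per English token
READING_WPM = 250         # brisk adult reading speed, words per minute
HOURS_PER_DAY = 2         # sustained daily reading time

words = TRAINING_TOKENS * WORDS_PER_TOKEN
words_per_year = READING_WPM * 60 * HOURS_PER_DAY * 365
years = words / words_per_year
print(round(years))       # roughly 20,500 years
```

Change the assumed daily reading hours and the estimate swings by thousands of years either way, which is why such figures are best quoted loosely.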

When we say that AI will “assist” us, we mean that AI will provide information retrieval and use language as a delivery method. It will not provide any of the other ingredients for us to make a decision. If translation memory systems were the first step in commoditizing translated words, LLMs are the tools in the commoditization of language.

The impact

AI presents unprecedented opportunities for automation, streamlining operations, and enhancing productivity. Some analysts claim that it will ultimately contribute to economic growth. However, a report published by the McKinsey Global Institute suggests that AI could displace approximately 800 million jobs worldwide by 2030, requiring the creation of a corresponding number of new positions demanding higher levels of social and emotional intelligence, problem-solving skills, and creative thinking. I’m not sure the transition is going to be a smooth one. This is not like transferring manual labor from the fields to the factories. Not everybody is emotionally intelligent, has problem-solving skills, or is a creative thinker.

No industry is safe from this disruption, so reskilling and upskilling initiatives must become a priority to prepare individuals for future careers in an increasingly automated environment. Governments, private organizations, and educational institutions should invest in developing curricula focused on teaching individuals how to interact effectively with AI rather than competing against it, including communicating with other humans via AI. Such programs should prioritize soft-skills training, equipping students with practical knowledge applicable to diverse industries.

Despite the potential job losses, AI will certainly bring some positive changes to society, as well. For example, are we ready for a neutral judicial system in which AI judges — uninfluenced by politicians or the media — make decisions (and explain them) based on facts and jurisprudence? In the realm of education, intelligent tutoring systems will leverage NLP algorithms to help pupils comprehend concepts they find difficult. These systems will offer customized instruction tailored to each student’s requirements, providing immediate feedback, correcting errors, and offering suggestions for improvement. In this case, humans will learn basic skills from machines, undergoing “training” in knowledge management and reasoning. Additionally, AI can enhance assessment methods by grading assignments and exams objectively and consistently, saving educators valuable time and effort.


Some people say that implementing AI in education may raise security and privacy concerns if it collects sensitive personal data about students. It may introduce bias — as if some educational establishments were not famous for their political or religious bias. The truth is that we, as humans, will always take machine output with a pinch of salt, not because the language produced sounds unnatural but because the AI author has no physical or real-world experience. Policymakers must establish guidelines regulating the collection, storage, and utilization of educational data to protect individual rights while addressing legitimate academic objectives.

My goal with this article is to be consciously provocative while avoiding doomsaying or blind positivity. AI represents both a challenge and an opportunity for humans, fundamentally altering how we live, work, and learn. It is undoubtedly transforming the landscape of language generation and how we use language and get paid for it. New jobs are appearing, but few individuals possess the combination of language skills, industry knowledge, machine learning background, and management skills that corporations are asking for. That combination of skills and experiences points to what humans are best at: intuition, reasoning, critical thinking, and creativity in unconventional situations. It is imperative to strike a balance between harnessing AI’s potential and safeguarding human dignity by avoiding job displacement and promoting responsible innovation — which is why I’m so vehemently against the “fair use” of copyrighted material to train models.

As language becomes a commodity, all types of communication are prone to AI automation. What truly makes us human is not language. It is the ability to respond to the unexpected. Call it “reasoning” if you wish.

Manuel Herranz is the CEO and founder of Pangeanic, an NLP and translation services company. His background is in linguistics and engineering, and he has degrees from Manchester University and MIT.
