How does AI affect the language community? Natural language processing (NLP) is a significant AI activity. Specifically, machine translation (MT), speech recognition and language understanding are all the focus of massive research and development programs across the enterprise and academic worlds. New technologies have revolutionized our working lives and brought real gains in productivity. AI’s achievements are already colossal and are driving global communications to dizzying heights.
Our community has been enriched by entirely new disciplines, spearheaded by engineers without whose work localization, internationalization and globalization would remain pipe dreams. Of course, as with all change, there has been some negative impact. Automation has altered the economics of working as a language professional. And there is the contentious issue of human translators feeding the machine: the very machine that threatens to make professional language skills redundant. On the one hand, AI seems to have a well-earned place in our community, yet on the other, its presence is perceived as a menacing threat. How exactly, then, is the language community dealing with AI?
AI, it seems, is everywhere. Everybody’s doing it. It therefore seems fair to ask, is there an AI community and are we part of it? The answer, in a word, is yes. But such a simple answer is quite misleading.
A disclosure: I’ve been married to an AI researcher for many years. My husband, it so happens, is a refugee from the so-called AI winter of the mid-1980s and early 1990s. In the UK, the rot had set in earlier, when Sir James Lighthill published his scathing 1973 report on basic AI research. Coupled with a general pessimism among the AI pantheon of big-name academics, survivors of that earlier wave of disenchantment with AI’s accomplishments, funding shrank and research hit the wall. Although new life would grow from fallow ground, the original impetus within the so-called AI community was halted, effectively held in abeyance, and many highly talented people took their skills elsewhere.
John MacLeod, my husband’s fellow Glaswegian and fellow AI winter victim, puts it succinctly in his thick Glaswegian accent, “We flew too close to the sun and ended up in the drink wi’ Icarus and a’ the other punters wi’ big ideas.”
MacLeod was a huge contributor to the AI community but has changed his professional path since those early difficult days. He does, however, point out that Daedalus went on to complete his task of building a labyrinth for King Minos of Crete. Mission accomplished? Perhaps, but whether on time and on budget, the mythologists do not divulge. MacLeod further notes that many other ancient cultures have parallel myths relating the dangers of over-ambitious ideas. “AI is,” he opines, “the biggest idea humanity has come up with. But if you think the ultimate goals are achievable on a von Neumann machine crunching squillions of bits a parsecond, prepare for your feathers to be plucked.”
When the conversation broached AI, science fiction and the AI apocalypse, MacLeod just laughed. “Where’s the engineering evidence for the robot rebellion?” he asked dismissively. He agrees that we need to watch our step rigorously on ethics and must not play with fire. He is more optimistic about our human/computer future than ever, but he cautions that we should expect our ideas about what computation can do to turn somersaults. Homing in on the language community and our rapidly evolving automated world, he believes that when we move beyond conventional computing platforms and develop other technology based on, say, biological architectures, we will possess synthetic communication abilities that truly master languages.
If there is still life in the AI of the past, as Stewart, MacLeod and many others staunchly maintain, where does that put us in the present day? I learned that there is a prevailing notion among many thinking techies that our present endeavors amount more to an alliance of different disciplines than to a coherent whole. In simple terms, our enterprises, aka corporate businesses, are leveraging technology like the Internet of Things (IoT) and vacuuming up data in volumes that leave even cosmologists mind-boggled. The numbers are indeed beyond astronomical. For example, Google Translate has 500+ million users, translating some 150+ billion words a day. Between them, Google, Facebook, Amazon and Microsoft currently store some 1,200 petabytes of data, and that’s only four of the biggest! Our friends at Cisco maintain that we are now in the Zettabyte Era of storage; a zettabyte is 10²¹, or 1 sextillion, bytes. Even if we only work with a small fraction of that total as language translation providers, I’m thinking we need to take a cold, hard look at what lies ahead. If today’s numbers fall off the edge of the language universe, where on earth will they be in five years’ time? I’d say, do the math, but you need a supercomputer for that!
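Actually, a pocket calculator will do for a first pass. Here is a back-of-envelope sketch of those figures; the bytes-per-word number is my own rough assumption, not a statistic from any of the companies mentioned:

```python
# Back-of-envelope scale check for the figures above (a sketch, not data
# from any of the companies named). BYTES_PER_WORD is my own rough
# assumption for an average word plus a space.

WORDS_PER_DAY = 150e9      # Google Translate's reported daily volume
BYTES_PER_WORD = 6         # assumed average bytes per translated word
ZETTABYTE = 10**21         # Cisco's definition, as cited above

daily_bytes = WORDS_PER_DAY * BYTES_PER_WORD     # 9.0e11: under 1 TB a day
five_year_bytes = daily_bytes * 365 * 5          # roughly 1.6 petabytes

print(f"Per day:              {daily_bytes:.2e} bytes")
print(f"Over five years:      {five_year_bytes:.2e} bytes")
print(f"Share of a zettabyte: {five_year_bytes / ZETTABYTE:.2e}")
```

On these assumptions, even five years of Google Translate’s daily word flow amounts to a tiny fraction of a single zettabyte; the eye-watering storage totals come from everything else the big four collect.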
Common Sense Advisory (CSA Research) founder and chief strategist Don DePalma recently addressed AI in the language industry in an article entitled “Planning for the Onslaught of Artificial Intelligence.” In it, DePalma does not simply address the fears and woes of language workers; he offers sound advice to C-suite bigwigs: it is critical to plan for an inevitable future where automation is the first resort. Is this a Brave New World we face? Well, ask anyone in the food industry, brick-and-mortar retail or even a Tesla car plant about robots and automation, and you quickly discover that the language industry is playing catch-up. This is easily explained: natural language processing is a very hard nut for logic-driven computers to crack.
Language processing has been central to the AI mission since its earliest days. We currently live at a time when data is king. Machine learning (ML) has propelled us forward by astonishing leaps and bounds. But the troubled voices of AI researchers warn us that while processing vast corpora yields fascinating and actionable insights, insights are not necessarily knowledge! Knowledge, the fruit of intelligent thinking, is what we strive to endow our brains with from birth. As MacLeod and Stewart ask, “How much thought goes into a translator’s daily work? Is that reflected in MT outputs?” There are metrics, of course, to measure quality, but these are not hard and fast. Will our computers achieve parity with human-translated texts, as has recently been claimed? The jury is still out.
Of course, there are as many potential AI apps as there are smart human activities and then some. AI is excellent for diagnostics, prediction, problem-solving and so on. It can be used in countless different fields of endeavor from life sciences to finance to NLP. AI has as diverse a range of applications as our intelligence can devise.
Stewart helped me understand the kind of problems AI seeks to tackle with a simple analogy. He’s a Sudoku freak, so he pointed to a fresh puzzle on his desk. “Here we have a regular space containing a few clues,” he tells me as he points to the given starting numbers in the grid. “Our task is to use these numbers to work out what numbers go in the blank spaces. We do this by applying thought and using a process of elimination to decide with certainty what the solution is. ML can be trained to do tasks like this. In fact, it can achieve much more complex tasks these days. But it’s not thinking that does the trick. It’s number crunching.” He mentions the anguish of chess grandmaster Garry Kasparov and Go world champion Lee Sedol as IBM’s Deep Blue and Google’s AlphaGo respectively triumphed over them. These truly deep thinkers were stymied by the tech juggernaut. DePalma’s view is that we had better get ready for a lot more Deep Blue and AlphaGo moments. We can’t put the genie back in the bottle, but if we use our wishes wisely, we will adapt and survive. There may be plenty of AI pessimists out there, but those who spend their lives developing the big ideas are genuinely optimistic about our future as “post-sapiens,” as Stewart and MacLeod describe us.
But let’s pause for a moment before we get too carried away. AI can learn a salutary lesson from NASA and the James Webb Space Telescope (JWST). The NASA website declares that “opportunities for collaboration will highlight our common interests and provide a global sense of community.” NASA has certainly made awesome contributions to progress with the Apollo program, the Mars missions, the ISS and countless other projects. But the JWST has been something of a nightmare as its schedule has slipped and costs have mushroomed. The project was launched before all the technology needed even existed! In other words, the need for invention was built into it from the get-go. Was this a wise move? Given that innovation does not always hit the bull’s-eye the first time, the cost and time involved in fixing problems have badly impacted its achievable goals.
So, is there an AI community in 2018? It seems that saying yes involves a genuine attempt to link academia with business, justifying expensive work on some very futuristic-sounding ideas.
However, I had a recent conversation with Francis Tsang, head of international engineering at LinkedIn, who has a solid grasp on AI’s current status in our industry.
Tsang believes that perhaps there is no AI community because there is no clearly marked AI industry. AI in its full capacity is a way of life that we will all need to embrace in order to fulfill its massive potential. AI has been creeping into our everyday lives as human beings and into our work. When it comes to companies, those with their vision well focused on the future have already understood that data is currently the enabler of many AI applications. AI does not replace human activities; rather, it enhances them. The more we use AI in our daily lives, the more we accept our partnership with the machine. As we move forward, applying AI to our activities will become a skill that will enhance our lives and make us considerably more efficient across the board.
All communities face change, and sometimes those changes can be far-reaching and have transforming effects. Research analysts like DePalma effectively warn us not to be caught out like lumbering dinosaurs. I strongly advocate that we in the language community embrace AI and make a strenuous effort to reach out to members of what at present passes for the AI community and bring them on board. As the pace of technological change gathers more and more momentum, we need to innovate processes, services and abilities that will give multilingualism its rightful place in our globally networked world.