Past is prologue all over again

Three years ago, in 2016, Erik Vogt (my friend and industry colleague of over 20 years) and I ran a webinar called “Past Is Prologue” where we looked at the state of technology — including localization technology — from 1996. I would describe it as an exercise in hindsight analysis.

What trends were at play then that would ultimately materialize into what would become the present? What predictions were people making then about where we’d be twenty years later, in 2016? How well did those predictions pan out? And most importantly: what insights might we glean about our future from such an exercise?

I think it’s a perfect subject for reexploration here in the technology edition of MultiLingual in 2019, as we find ourselves on the cusp of a new decade.

So, we’ll explore the technology landscape back in 1999, especially as it was applicable to the language industry. We’ll look at what the technologists and pundits of the time were predicting and see how they did. Eerily accurate? Hilariously off-target? Something else entirely?

Then, through this process, we’ll try to extract learnings that we can bring along with us as we venture into the next decade.

Welcome back to the future

In 1999, the symbolic significance of the New Year couldn’t have been more potent: not only were we crossing the threshold into a new decade, but into a new century and a new millennium! For years, the year 2000 had been shorthand for “preposterously far into the future,” the stock setting for all sorts of futuristic speculation across the optimistic/pessimistic continuum. Either we’d be colonizing outer space, or we’d be reduced to rubble by global thermonuclear war.

But after the New Year’s celebrations, there we were — alive and still on planet Earth. The year 2000, the goalpost for “the future,” was no longer science fiction or even far away. It was barreling toward us.

Part of the feeling of acceleration came from the fact that throughout the 1990s, the world was already experiencing a series of significant and transformative paradigm shifts, fueled by globalization and technological advancements. Personal computers — increasing in power but decreasing in price — went from being niche products to ubiquitous household appliances. The internet was proving capable of opening doors to radically new worlds of possibility, and of creating new business empires.

In 1999, I was earning my paycheck as a localization engineer for a language service provider (LSP), and at that time the industry was reacting to the compound demand of the world’s companies needing to be more global than ever while staying ahead of the curve of rapidly evolving technology.

The pressure to deliver more, cheaper and faster trickled down and drove process innovation and automation in localization. New tools and processes were being developed both commercially and privately to do just that.

In particular, translation memory (TM) was undergoing a significant transformation: changing from a productivity tool used by a few technologically inclined translators to a standard process expected to be used by everyone across all projects. Cost savings from the application of TM, which were previously enjoyed by translators, were transferred to LSPs and then to localization customers. Concepts like “weighted word counts” and “fuzzy match grids” became baked into the models in which localization services would be bought and sold. One could see the specter of commoditization emerging.
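The pricing mechanics behind those concepts are simple to sketch. A weighted word count applies a discount band to each fuzzy-match category and collapses the result into a single billable figure. The band boundaries and discount values below are purely illustrative, not an industry standard:

```python
# Illustrative fuzzy-match grid: match band -> fraction of the full
# per-word rate that is charged. Real grids vary by vendor and contract;
# these numbers are hypothetical.
FUZZY_GRID = {
    "repetition": 0.10,  # exact repeats within the project
    "100%":       0.20,  # exact translation-memory match
    "95-99%":     0.40,
    "85-94%":     0.60,
    "75-84%":     0.80,
    "no_match":   1.00,  # new words, full rate
}

def weighted_word_count(counts_by_band):
    """Collapse raw word counts per match band into one billable word count."""
    return sum(FUZZY_GRID[band] * words for band, words in counts_by_band.items())

# A 5,000-word project where most of the text is leveraged from TM:
counts = {"repetition": 1000, "100%": 2000, "95-99%": 500, "no_match": 1500}
print(weighted_word_count(counts))  # 100 + 400 + 200 + 1500 = 2200.0
```

Under this (hypothetical) grid, a 5,000-word source document bills as 2,200 weighted words — which is exactly how TM savings migrated from the translator’s desk into the buyer’s rate card.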

Out in the world, some interesting things were happening.

• The peer-to-peer MP3 sharing service Napster was released.

• Internet Explorer was winning “the Browser Wars,” and Microsoft was embroiled in a court battle with the US Department of Justice over allegations that it had abused its dominance in the operating system market to create a browser monopoly.

• Apple Computer, Inc., was riding a wave of success under the new leadership of Steve Jobs, who restructured the company’s product line. In 1999, Apple began releasing developer previews of its upcoming Unix-based operating system, “OS X.”

• The 802.11 wireless LAN standard reached home users for the first time under the brand name “Wi-Fi.”

• Mobile computing and mobile phones were largely considered different categories, the latter being dominated by Nokia. In January 1999, Canadian pager company Research In Motion challenged this paradigm with the introduction of a device called the “BlackBerry.”

• Author and web designer Darcy DiNucci coined the term “Web 2.0” in a magazine article that described the difference between the then-current state of the web and its future as “roughly the equivalence of Pong to the Matrix.”

• Jeff Bezos was Time magazine’s “Man of the Year.” Just one year prior, Amazon had purchased the company Junglee and decided to expand their model to sell more than just books.

• There were a record 486 IPOs in 1999 as the “dot com bubble” continued to expand. eToys.com was valued at $4.9 billion on $100 million in sales, while Toys R Us, with $11.5 billion in revenue, carried a real-world valuation of $4 billion.

• Analysts debated the impact that the “Y2K Bug” would have on the world as systems designed to handle only two digits rolled past 99. Predictions ranged from the apocalyptic to the mundane. (Incidentally, the year 2038 poses a similar challenge, when systems that store time as a signed 32-bit count of seconds since 1970 overflow after January 19 of that year.)
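The 2038 rollover can be seen directly: a signed 32-bit `time_t` counts seconds from the Unix epoch (January 1, 1970, UTC) and tops out at 2³¹ − 1. A quick sketch of where that ceiling lands:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t can count at most 2**31 - 1 seconds
# past the Unix epoch (1970-01-01 00:00:00 UTC).
MAX_32BIT_SECONDS = 2**31 - 1

overflow_moment = datetime.fromtimestamp(MAX_32BIT_SECONDS, tz=timezone.utc)
print(overflow_moment)  # 2038-01-19 03:14:07+00:00
```

One second later, a naive 32-bit counter wraps to a negative number and the clock appears to jump back to 1901 — the Unix analog of two-digit years rolling past 99.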

The retro-predictions

Here’s some of what the pundits and visionaries were saying in 1999:

Web 2.0 is what we now take for granted as the dynamic, interactive and collaborative platform that most of us use every day on a variety of devices — with and without screens. I would argue that DiNucci nailed it on this point. And while W3C standards would arise to address the threat of fragmentation, it was often after fragmentation was already demonstrably a problem.

The Web 2.0 transformation had a significant impact on the language industry, as it completely changed the nature of the localization challenge: content no longer lived in discrete, easy-to-translate, standalone containers. It became more abstract, something to be assembled in just-in-time fashion by the end users, conforming to their contextual needs. And content would no longer only be created by a company and distributed to users; it would get created by the users, too.

What might we learn from this to take into the future?

If we are to believe predictions, then the next phase for the Information Age is Web 3.0, what father-of-the-web Tim Berners-Lee describes as “the semantic web.”

The semantic web is composed of content that is rich in meaning and independent of human-readable text. The relationship between metadata and content gets flipped — the visible part of content becomes a small part of what’s otherwise a massive iceberg of data.

This transformation represents new fundamental language challenges, since the internet’s current metadata paradigm is based on English text. Some meaning can’t be encoded by English words, and some meaning can’t be encoded by words in any human language. How else might meaning be digitally represented?

These challenges of course represent new opportunities.

Over the last couple of years, I’ve been beating the drum about how companies should think about localization as the endeavor of “being global” (from the start) as opposed to “expanding internationally” (after first-market release). While the business concept of the foreign market isn’t totally useless today, it’s getting there. As a company’s operations become more digital, the practice of translating their collateral to work in another country needs to be replaced by the practice of designing one’s business to work everywhere in the world, regardless of language, culture or geography.

There is enough predictive material in Bill Gates’s Business @ the Speed of Thought that we could have devoted this entire article to it alone. It shouldn’t come as a surprise that Gates had a clear vision of a future where businesses and societies would evolve to take advantage of information technology. It was a future in which his company, Microsoft, was and still is playing a driving role.

The processes he describes are still actively transforming the business landscape today, having spun off the cottage industry of “digital transformation,” a class of professional service that helps companies recast their businesses in the Information Age.

The language industry’s success is related to its support of the global growth made possible by digital transformation. I would assert that we have been and will be most successful as a practice when we’re active participants of the digital transformation process, enabling a company not only to be digital and global, but to be “globally digital.”

Some other broad lessons from the book that I believe we should take with us into the next decade are:

• Customer service will become the primary value-added function in every business.

• The middleman must add value.

• Those who treat information as an asset will succeed.

Reading The Age of Spiritual Machines today is good fun, especially the tenth chapter, which is dedicated to predictions about 2019. While many of the technological advancements Kurzweil describes haven’t yet reached the maturity and ubiquity he envisioned, all the big technology subjects we’re talking about today seem accounted for: self-driving cars, virtual assistants, wearable computers, quantum computing and neural networks.

As with Business @ the Speed of Thought, some predictions seem so eerily accurate that one gets the sense there’s a form of observer effect at play: that Kurzweil’s book has influenced the very industries it makes predictions about. Perhaps those who read his book used it as a blueprint.

Kurzweil has long been optimistic about the future of language technology, including machine translation, voice recognition and speech synthesis. He describes a world in which world languages are no longer a barrier for global commerce and collaboration.

What a worthy blueprint for our industry!

Back issues of Wired magazine from 1999 proved to be a rich vein for my research for this article. There were at least a dozen articles published in 1999 with insightful retro-predictions. I’d advise you to check out “Advertising Comes to Software” (July 1999), “Networking Everything” (January 1999) and “Tech 99: Buzz vs. Hype” (February 1999) on your own.

“Giving Voice to the Web” particularly intrigued me, however. I had never heard of VXML before, but the idea of standardizing voice content — both as input to be collected and as a modality of content to be communicated — seems all the more relevant today, given the rise of voice-based virtual assistants like Alexa and Siri.

I don’t recall ever using my non-smartphone telephone as an interface to the internet. But who knows? I might have been using this protocol without even knowing it.

For me, this story underscores the fact that the internet continues to become less text-centric, that content does not mean text (think about the surge of video creation and consumption in the past ten years). As a lesson for the language industry, it reinforces that our job isn’t about translating text; it is about making ideas accessible across the dimension of language and culture in all their various modalities.

Today was tomorrow. Today will be yesterday.

I researched and wrote this article through the lens of prediction. My hypothesis was that by looking at past predictions in conjunction with 20 years of “how it played out” data, we could then identify the secrets behind good predictions, use those techniques to make our own predictions and ultimately use the new predictions to help drive our industry.

And after having gone through the process, I still agree with that hypothesis. Sort of.

When reading the likes of Business @ the Speed of Thought and Age of Spiritual Machines, I originally had an uncanny feeling. How could these authors be so right about what would happen, even down to seemingly arbitrary details? Do they have supernatural precognitive powers?

Then it struck me. Many of these authors’ predictions aren’t predictions at all; they’re calls to action. The futures they’re describing have come true not because of any secret foresight, but because as a society, we agreed that these futures were desirable and then worked to build them into reality.

In March 1963, Nobel Prize-winning physicist Dennis Gabor wrote in his book Inventing the Future, “The future cannot be predicted, but futures can be invented.” It’s an assertion that has likely been made throughout history: we are the masters of our fate.

I have no doubt that if we really want flying cars, that will happen. If we really want to colonize Mars, we’ll do it. As for the language industry, the first step in predicting our future may be to decide what it is that we really want.

It’s in that spirit that I’d like to offer my own prediction.

By 2029, the language industry will no longer be focused on solving problems that world language diversity represents for commerce and collaboration. It won’t have to. Through compounding technological advancements, the concept of the “language barrier” will start to fade away.

Multimodal machine translation, content profiling, content enrichment, machine transcreation and other capabilities made possible through technology will be interoperable microservices everywhere communication happens in real time. The features of language technology will simply be taken for granted in every platform.

The world’s focus will shift. The world’s language diversity, no longer seen as a barrier, will be recognized as an untapped human intellectual asset: the place from which compound ideas and inventions can be created through the intermarriage of culturally unique concepts previously segregated by the “language barrier.”

Imagine a Spanish-speaking research team finding a viable solution to their problem from a research paper published in Chinese. Or a Hollywood screenwriter looking to develop a compelling protagonist finding inspiration from a third-century Syriac fable.

The language industry, empowered by both machine and human intelligence, will be instrumental in making these marriages possible and in shaping the internet to make them possible automatically.

The Tower of Babel will become a glorious new Library of Alexandria.

Does this sound like a desirable future to you?