Localizing apps for multilingual conversations

Approximately three billion people, communicating in thousands of languages, depend on some type of phone to conduct their lives. The annual study of online economic opportunity from Common Sense Advisory (CSA Research) finds that while it takes only 14 languages for companies to reach 90% of the addressable online opportunity, it takes 40 more to reach 99%.

However, many of these next billion users lack strong reading and writing skills in their native language. Mobile devices may have greatly increased the demand for localized experiences, but it is the addition of speech that makes local languages mandatory, since people much prefer speaking their mother tongue to the personal gadgets and services they use every day.

With these devices so ubiquitous and in the hands of so many people who did not grow up typing on a keyboard, voice-driven apps in long-tail languages can't come soon enough. Venture capital firm Kleiner Perkins estimates that 46% of internet users in India now consume primarily local-language content. Additionally, the firm predicts that by 2018, 30% of interactions with technology will take place through conversations with smart devices, and that the figure will rise to 50% by 2020.

However, localizing for multilingual conversations is not simply an extension of translating the written word in user interfaces. Voice dramatically raises the stakes for localization because the input is even less structured than text: dialects, idiolects, accents, speech disorders and ambient sound, to name just a few sources of variation. Multiply that variability by however many languages an organization hopes to support over the next few years, and it's clear that localizers have their work cut out for them.

To help you get started on meeting this next-generation localization requirement, we outline four steps to enable spoken interaction in multiple languages.

1. Expect winners and losers. Speech platforms are rapidly evolving, with some staking out a role beyond mobile devices to allow direct interaction with home appliances, vehicles and robots. Such deployments will drive more localization into a wider variety of languages as manufacturers sell their goods around the world to more and more audiences just entering the consumer class. One or more of these voice-driven platforms could eventually play a role similar to what “Intel Inside” represented for PCs. In the meantime, be on the lookout for new contenders emerging from technology developers in China, India, Japan and Russia.

2. Adopt commercial software for vocal computing. Today’s virtual assistants provide off-the-shelf, evolving platforms for conversations with a variety of devices. Evaluate the alternatives and choose the one that best meets your requirements. If your app is cross-platform, look for a platform that runs on multiple devices (Google, Microsoft and SoundHound). Choose one that offers an application programming interface (API) or software development kit (SDK) so it can be integrated with your app; a minimal sketch of what that integration can look like follows these four steps. Review each contender's current state of foreign-language support and quiz prospective suppliers about future offerings; right now, Apple and Microsoft are ahead on that front.

3. Raise visibility with colleagues who design your products and services. Stay ahead of the design and development curves by making your case now for multilinguality and locale support as nonnegotiable requirements for speech enablement. In preparation, provide ongoing training to current and recently hired developers who are responsible for mobile apps, wearable device interfaces or Internet of Things environments. The sooner they develop good internationalization and localization habits, the easier it will be for them to deliver world-ready, voice-enabled capabilities when the time comes; the second sketch following these steps shows two such habits in code.

4. Investigate how content creation and style will change. Content intended for conversation is often very different from content meant to be read in silence. Voice is hands-free and can summarize large volumes of written text in much less time; people can speak to machines and listen to their responses far faster than they can type, especially in Arabic, Chinese and many of the languages spoken in India. For example, companies may need to replace the instructions they are used to delivering through on-screen text with video, or at least simplify them so that drivers only need to listen, rather than look, as they drive down the highway. Search terms for products and services will probably evolve as prospects and customers shift to longer, conversational-style queries that present more information for search engines to process.
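
As promised in step 2, here is a rough illustration of the kind of integration surface to look for when you evaluate a speech platform. The commercial assistant SDKs all differ, so this minimal TypeScript sketch relies on the browser's standard Web Speech API instead; the listenIn and handleUtterance functions and the locale values are illustrative assumptions, not anything a particular vendor prescribes.

```typescript
// Minimal sketch: request speech recognition in the user's locale and hand the
// transcript to the app. Assumes a browser environment with the Web Speech API.

// Hypothetical handler: whatever your app does with a recognized utterance.
function handleUtterance(transcript: string, locale: string): void {
  console.log(`[${locale}] heard: ${transcript}`);
}

function listenIn(locale: string): void {
  // webkitSpeechRecognition is the prefixed implementation in Chromium browsers.
  const SpeechRecognitionImpl =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!SpeechRecognitionImpl) {
    console.warn("Speech recognition is not available in this environment.");
    return;
  }

  const recognition = new SpeechRecognitionImpl();
  recognition.lang = locale;          // a BCP 47 tag, e.g. "hi-IN" or "ar-SA"
  recognition.interimResults = false; // deliver only final results
  recognition.maxAlternatives = 1;

  recognition.onresult = (event: any) => {
    const transcript: string = event.results[0][0].transcript;
    handleUtterance(transcript, locale);
  };
  recognition.onerror = (event: any) => {
    console.warn(`Recognition error: ${event.error}`);
  };

  recognition.start();
}

// Let the device's language setting drive which recognition model is requested.
listenIn(navigator.language || "en-US");
```

The detail to carry into vendor evaluations is that locale parameter: if a platform cannot tell you which language tags its recognizer accepts today, and which it plans to add, it will be hard to plan a multilingual rollout around it.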
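
Step 3's internationalization hygiene is easier to ask for when developers can see it in practice. The sketch below shows two habits worth building early, using only the standard Intl APIs: user-facing strings live in an external resource table rather than in code, and dates and prices are formatted per locale instead of by hand. The message catalog, keys and sample translations are illustrative placeholders, not a recommendation of any particular library.

```typescript
// Minimal sketch of internationalization habits: externalized strings plus
// locale-aware formatting. The resource table below stands in for whatever
// message catalog your toolchain actually uses.

const messages: Record<string, Record<string, string>> = {
  "en-US": { orderReady: "Your order will arrive on {date} and costs {price}." },
  "hi-IN": { orderReady: "आपका ऑर्डर {date} को पहुंचेगा और इसकी कीमत {price} है।" },
};

function formatMessage(
  locale: string,
  key: string,
  values: Record<string, string>
): string {
  // Fall back to English when a string has not been localized yet.
  const template = messages[locale]?.[key] ?? messages["en-US"][key];
  return template.replace(/\{(\w+)\}/g, (_, name) => values[name] ?? `{${name}}`);
}

function orderReadyMessage(
  locale: string,
  arrival: Date,
  amount: number,
  currency: string
): string {
  // Intl handles locale-specific calendars, digits, separators and currency symbols.
  const date = new Intl.DateTimeFormat(locale, { dateStyle: "long" }).format(arrival);
  const price = new Intl.NumberFormat(locale, { style: "currency", currency }).format(amount);
  return formatMessage(locale, "orderReady", { date, price });
}

// The same code path serves every locale; only the resources differ. That is
// what later makes voice enablement (speaking these strings aloud) a
// localization task rather than a re-engineering one.
console.log(orderReadyMessage("en-US", new Date("2025-07-01"), 1499, "USD"));
console.log(orderReadyMessage("hi-IN", new Date("2025-07-01"), 1499, "INR"));
```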

Even if multilingual versions of speech-enabled apps are not yet on the horizon for your organization, it's only a matter of time before the requirement becomes real. For many companies, the ability to use conversation to tease more intelligence out of digital devices, coupled with the advances being made on the speech platforms themselves, will eventually become too compelling to ignore. Prepare now to meet this next-generation localization requirement by educating your team and your language services suppliers. At the same time, raise visibility with product and service designers so that they can benefit from localization expertise to support a truly global customer journey when voice goes live.