Anyone who has seen Netflix’s spooky drama series Dark must have been struck by the character Elisabeth Doppler. She is eight years old and communicates using sign language, a trait that makes the plot even more compelling. Dark has also attracted comment for the disparity between its auto-dubbed English (from the original German) and its English subtitles.
The creators devised a role for a child character who is mute. However, she most certainly can communicate: we watch her sign the description of a man for a portrait to a police officer investigating the abduction of a school friend. Netflix, of course, offers its streaming service in numerous countries and is a leading developer of multilingual technology. But the question that piqued my curiosity is this: how does sign language fit in with the rest of the language community? A definitive answer is somewhat elusive, but with localization mushrooming and enabling our devices on a global scale, how do we help facilitate services for those who use nonverbal communication?
First of all, how many sign languages are there? It could have been one or it could have been 100, for all I knew. In truth, I’d never thought about it and I’ve been in the language industry for many years, working with spoken and written language forms as well as Braille.
However, there seems to be no definitive answer, because there are a number of definitions of what constitutes a sign language. Wikipedia lists around 300, but these include hearing-impaired sign languages as well as auxiliary and manually coded languages. Taking a different approach and classifying sign languages by number of users, we arrive at a total of close to 150. Clearly our friends in the ethnology community have their work cut out for them in determining a more accurate number.
But one aspect of my initial interest in investigating sign languages stands out: signing is global, and hence multilingual. What is the implication for language services and the language community?
Skeptics might ask, what evidence of demand is there for sign language services? Well, what demand was there for disabled parking spaces? They are now ubiquitous not because of an uprising of disabled drivers, but because the community at large has catered to their needs admirably. Tune in to Netflix or any other service and there is an option to turn on closed captioning. Broadcasters understand there is a need for this service because users of sign languages are an accepted part of any responsible society. We frequently see signers at conferences and other similar gatherings, for example.
What evidence, then, is there that cultural and business services have risen to the challenges of meeting the language needs of the deaf or hard of hearing?
Starting with the legislative aspect of language provision, both the UN and the EU have mandates in place to cater to sign language users. The UN passed its Convention on the Rights of Persons with Disabilities (CRPD) over ten years ago.
It is worth noting that the CRPD pledges to protect disabled people as well as to provide for their needs. This aspect of caring for people with special needs deserves promotion in our community. After all, we are becoming more and more active in protecting endangered languages, and we can surely enhance our efforts on behalf of those with a language disability.
The EU has already been proactive in legislating for the diverse tongues spoken across the continent. It has legislation mandating multilingualism, and it also has legislation recognizing the inclusion of sign languages.
One beneficiary of EU funding is Spreadthesign, a European Commission Leonardo da Vinci project whose goal is to share sign languages from different countries. A major bonus is that the service is free for users worldwide. They are tacitly encouraging their community of users to collaborate and, accordingly, to grow. The dictionary of signs they are compiling must be recognized as a massive boost in enabling deaf people to travel abroad for work or study. They specifically mention their goal of improving language skills. This was an aspect of sign languages that I had not considered, though I have always been acutely aware of quality issues in my many years of work in the language community.
In the US, a site offering a similar service is HandSpeak. They offer services for the hearing impaired across as broad a range of regular activities as can be imagined.
In the education sector, I was gratified to find that there are companies that provide instruction both for the hearing impaired and for businesses seeking a more inclusive clientele. One that stands out is Meeting Tomorrow, headquartered in Chicago. They offer a full range of audiovisual products and services, including sign language training. I particularly liked their community approach in appealing to regular businesses: “Being able to meet the needs of the hearing impaired better than competitors can give many different types of companies an advantage over the competition. It’s also simply a nice gesture.” In the cutthroat business world, I found the idea of making “a nice gesture” very refreshing.
Another impressive sign language service provider is KinTrans. They are in their infancy, it seems, and currently offer their service only in English and Arabic. They are a software-as-a-service enterprise and use an automated sign language translator that can output text or voice in real time. They put their target user base in North America at 13 million. The current US population is around 325 million, and the number of languages spoken there is in triple digits. Although 80% are English speakers, with Spanish in second place at 12%, the prospects for this developing technology are promising, to say the least. Spread that across the globe, and the sign language community could be huge.
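To put KinTrans’s figure in perspective, a quick back-of-the-envelope calculation using the numbers quoted above (note that the target figure is for North America while the population figure is for the US alone, so this is only a rough comparison):

```python
# Rough comparison: KinTrans's stated North American target user base
# against the approximate US population, both as quoted above.
target_users = 13_000_000      # KinTrans's stated target user base
us_population = 325_000_000    # approximate current US population

share = target_users / us_population
print(f"{share:.1%}")  # -> 4.0%
```

Even as a rough figure, a potential audience of around one person in 25 is far from a niche market.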
There’s also SignAll, a venture pushing the boundaries of computer vision technology with its automated sign language translator. Using webcams connected to a PC, the system accepts input from a person signing and processes it into translated text. The input comprises a number of different elements: hand movements with all their variants of shape and motion, facial signs and other factors. Converting visual data into text is far more complex than it sounds. Natural language processing algorithms have been developed to render the recognized signs into proper grammatical sentences. They have designed their hardware and software modularly to accommodate the inevitable changes that we will see sooner rather than later.
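The two-stage pipeline described above — visual features in, recognized signs out, then a language-processing pass to produce grammatical text — can be sketched in miniature. This is purely illustrative: every name and mapping here is my own assumption, not SignAll’s actual (proprietary) architecture, and a real system would use trained models rather than a lookup table.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of a sign-to-text pipeline, as described in the text.
# All names and mappings are hypothetical, not SignAll's actual system.

@dataclass
class Frame:
    """Visual features extracted from one stretch of webcam footage."""
    hand_shape: str    # e.g. "flat", "fist"
    hand_motion: str   # e.g. "up", "circle"
    facial_sign: str   # e.g. "neutral", "raised-brows"

def recognize_glosses(frames: List[Frame]) -> List[str]:
    """Stage 1: map visual features to sign glosses (toy lookup table
    standing in for a trained computer-vision model)."""
    lookup = {
        ("flat", "up", "neutral"): "HELLO",
        ("fist", "circle", "raised-brows"): "QUESTION",
    }
    return [lookup.get((f.hand_shape, f.hand_motion, f.facial_sign), "UNKNOWN")
            for f in frames]

def to_sentence(glosses: List[str]) -> str:
    """Stage 2: stand-in for the NLP step that turns a gloss sequence
    into a grammatical sentence."""
    words = [g.capitalize() for g in glosses if g != "UNKNOWN"]
    return " ".join(words) + "." if words else ""

frames = [Frame("flat", "up", "neutral")]
print(to_sentence(recognize_glosses(frames)))  # -> Hello.
```

The point of the two separate functions is the modularity the text mentions: the vision stage and the language stage can each be swapped out independently as the underlying technology improves.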
What appeals to me about these enterprises and their impressive use of technology is that I can sense cohesion in their communities, a cohesion that matches what we are beginning to enjoy in the language community at large. What, then, might we expect in the future? I would like to think that enterprises showing more inclusiveness will design their products and services with sign languages in mind. As the reach of technology blazes new trails across the globe, numerous opportunities will arise. At present, voice-activated technology can meet only general needs. Yet with virtual reality now with us, it is realistic to expect visual language capabilities to become a part of our daily lives too.