BSC develops How2Sign, AI database for Sign Language Translation

Enlisting the help of artificial intelligence, the Barcelona Supercomputing Center (BSC) has developed How2Sign, a database for the automatic translation of sign language. Led by Amanda Duarte, PhD candidate and researcher at BSC’s Emerging Technologies for AI group, How2Sign is expected to debut this June at CVPR (the Conference on Computer Vision and Pattern Recognition), ranked first among AI conferences by Google in 2020.

Duarte, who completed her master’s in computer engineering in Brazil, outlines the goals of her work on her website: “My research aims at giving sign language users further access to information. Specifically, my work focuses on developing systems that enable automatic translations of online content into sign language representations.” 

After leading Speech2Signs in 2018, a project aimed at improving speech-to-sign-language translation that earned the Caffe2 Research Award from Facebook, Duarte spent more than two years compiling the recordings for How2Sign, which, at its debut, will offer 80 hours of sign language videos. In them, professional interpreters translate various types of video tutorials, ranging from crafts to cooking recipes, into American Sign Language (ASL). Of the 80 hours of video, three were recorded at the Panoptic Studio at Carnegie Mellon University, a one-of-a-kind dome-shaped multiview studio equipped with 510 cameras. The footage shot at Panoptic will allow researchers to reconstruct and learn from the interpreters’ three-dimensional postures and movements.

How2Sign, described by Duarte as “the first large-scale continuous American Sign Language dataset,” is intended to give researchers in both computer vision and natural language processing insight into automatic sign language understanding and production, with the aim of improving technological accessibility for the more than 400 million Deaf or hard-of-hearing individuals around the world. A lack of adequate subtitling and captioning is endemic to video-sharing platforms such as YouTube and Facebook. With Speech2Signs and How2Sign, Duarte aims to address this problem by providing a system that automatically generates a sign language translation of the speech in any given video.



Michelle Krasovitski
Michelle Krasovitski is a writer based in Toronto. She holds a specialist degree in psycholinguistics from the University of Toronto. Her articles on languages and pop culture have appeared in Business Insider and the Toronto Star.


MultiLingual Media LLC