Translation Technology


TED, SYSTRAN Partner, Create Multilingual NMT Models

Beginning with ten languages, SYSTRAN will use TED content to develop neural machine translation models for technical content in a variety of fields.

AI-based translation technology company SYSTRAN recently announced a new partnership with TED to build specialized neural translation models based on high-quality translations of TED Talks. The models are designed to meet the sophisticated translation needs of multinational companies, educational institutions, government agencies, and other organizations by enabling accurate, fluent translations of learning, scientific, business, and technical content in ten languages.

A nonprofit organization whose slogan is “Ideas Worth Spreading,” TED has committed to global language access as one of its core foundations. Organizations in 150 countries participate in the TEDx initiative, which allows groups to apply for licenses to organize conferences made up of local participants, ranging from professors to scientists to writers.

Along with TEDx, the organization runs a major initiative to translate its online resources, with a team of over 35,000 human translators who have produced almost 175,000 translations and captions in 115 languages. This large cache of language data will likely enable SYSTRAN to expand its neural translation models to even more languages.

“SYSTRAN is TED’s first-ever authorized partner in bringing together TED content and machine learning to develop a commercial product,” said Alex Hofmann, Director, Global Distribution & Licensing at TED. “The fact that our inaugural collaboration in the AI space is focused on neural machine translation models built from translations of TED Talks in multiple languages feels natural.” The models are now available on a licensed basis to help enterprises and organizations meet their most sophisticated translation needs.

The proprietary models, developed by SYSTRAN, pair TED’s unique multilingual data with SYSTRAN’s AI expertise and are an early step toward wider applications of that data. TED requires a license for commercial AI and machine learning use of its data, and SYSTRAN is the first to obtain one. In keeping with SYSTRAN’s core principles of security and data privacy, TED fully retains its intellectual property and ownership of both its data and the specialized models. The TED-owned models are available on the SYSTRAN Marketplace, a catalog of specialized models for domains such as legal, finance, health, education, and science/technology.

“This strategic partnership is about advancing our shared goals of connecting people and cultures and facilitating multilingual engagement globally,” said John Paul Barraza, CIO of SYSTRAN. “The human-created translations generated by the TED Translator community are of the highest quality, enabling SYSTRAN to build accurate and fluent translation models for use across a plethora of business and professional applications.”

SYSTRAN conducted double-blind human evaluations of the TED models it built, and the results show improvements in accuracy and fluency over state-of-the-art generic baseline models. The evaluations also produced an unexpected result: 41% of the models scored higher than the human reference translations.

“The current global situation is showing us how inter-connected the different countries and populations worldwide are. Companies are imagining a world with far less boundaries — starting with the way we communicate,” said Jean Senellart, SYSTRAN CEO. “Introducing models to the SYSTRAN Marketplace is an incredible opportunity and will respond to real needs in the translation of educational, business, scientific, and technical materials.”


MultiLingual creates go-to news and resources for language industry professionals.


Related News:

Vuzix Smart Glasses Selected for Translation Solutions

Expanding its e-Sense productivity solution, Rozetta Corporation has selected Vuzix Smart Glasses for a project to support hospital healthcare during COVID-19.

As the technology sector continues to develop new uses for machine translation, smart glasses have caught on as useful, hands-free devices that help users across a multitude of industries communicate and seek information. Looking to extend the functionality of smart glasses on construction and healthcare sites, Rozetta Corporation has selected Vuzix Smart Glasses for two translation-related solutions.

A Japanese supplier of translation products and services, Rozetta chose Vuzix to provide its M400 Smart Glasses for a project called e-Sense, which Rozetta began as a collaborative effort with Tobishima Corporation, a global construction firm. e-Sense will use the Smart Glasses to deliver automatic simultaneous interpretation, remote communications and support, and the recording of voice, text, images, and video.

Rozetta first formed e-Sense as a productivity solution for construction sites, leveraging big data and automatic interpretation in a multifunctional hands-free system. However, Rozetta also plans to use e-Sense as a risk-mitigation tool at hospitals during the COVID-19 pandemic, making it possible to reduce crowding and limit close-contact interactions. Additionally, e-Sense will likely expand to other sectors, such as manufacturing, security, aviation, or any industry that could take advantage of its hands-free operation and data gathering.

Separately, Rozetta announced a collaborative research effort with St. Luke’s International Hospital in Tokyo, which will deploy the Vuzix Blade Smart Glasses with Rozetta’s T-4PO Medicare product. With the rising need for translation and interpretation between foreign patients and medical practitioners in Japan, an automatic translation system may streamline hospital communications.

“Vuzix Smart Glasses are a natural solution for hands-free language translation, and we are delighted to have an established translation solution provider such as Rozetta using them for two different applications that could each ultimately represent broad usage opportunities for us within their respective market verticals,” said Paul Travers, President and CEO of Vuzix.

Initially supporting Japanese, English, and Vietnamese translation, e-Sense plans to extend its support to other languages.


Gondi and Hindi Translations Streamlined with INMT App

CGNet Swara, IIIT Naya Raipur, and Microsoft Research Lab have made progress during lockdown on a new interactive neural machine translation app that they hope will help the Gond Adivasi community rejuvenate youth engagement with the Gondi language.

A joint effort among local Indian organizations and Microsoft Research Lab has produced an interactive neural machine translation (INMT) app that translates between Hindi and Gondi. Spoken by over two million people, Gondi is used primarily in several Indian states, including Madhya Pradesh, Gujarat, Telangana, Maharashtra, Chhattisgarh, Karnataka, and Andhra Pradesh.

Although the language is prevalent in many of India’s central regions, it remains a predominantly spoken language. Furthermore, the language varies based on the state, with many dialects passed down orally over centuries.

The absence of written literature in the language, along with a shortage of local teachers who can instruct in it, has led the nonprofit CGNet Swara to develop ways for Gondi speakers to stay informed despite their lack of access to Hindi-language services.

An Indian voice-based online portal, CGNet Swara has worked with communities in central tribal India by providing them a platform to report local news through phone calls. Seeking more streamlined communications, the organization partnered with the International Institute of Information Technology Naya Raipur and Microsoft Research Lab to create the app to translate between Gondi and Hindi.

“We did one workshop at the Microsoft Research Lab office in Bengaluru in 2019. But the app was developed during the lockdown and is likely to be released later this month,” said Shubhranshu Choudhary, a former journalist who co-founded CGNet Swara. Since the onset of the pandemic, the organization has worked with community members from several demographics to collect language data. To date, researchers have collected more than 3,000 words and 35,000 sentences and phrases. The efforts come as organizations around the world work to deliver translated resources to communities in need.

“Gondi is a very good language to use as a case study as it has a substantial speaker base across six states. It is not endangered and yet zero resources are available for the same. Through CGNet Swara, we became aware of the various issues that the Gond Adivasis face, and how access to the language could help the cultural identity of the community,” says Kalika Bali, principal researcher, Microsoft Research Lab.

Along with the translation app, Choudhary also mentioned the Chhattisgarh government’s announcement to shift education in the state to include instruction in tribal languages. To contribute to the transition, CGNet Swara has worked with Pratham Books on an ongoing project to translate 400 children’s books.


Automatic Translation Could Come to Twitter

Twitter has announced that it will expand its translation feature by testing automatic translation with a small group of users in Brazil. The social media giant currently supports built-in translations that let users click or tap to translate a tweet written in a foreign language.

For users who set their primary language to English on Twitter, any tweet in a non-English language displays a translation button. As convenient as this method might appear, it adds a step to the user experience, requiring the user to opt into each translation manually. To simplify that process, Twitter hopes to experiment with automatic translation so that translations of all tweets are automatically available to the user.

In a blog post, Twitter explains:

“To make it easier to understand the conversations you follow on Twitter, we’re experimenting with automatic translation for Tweets in other languages that appear on your homepage. We know that it can sometimes take a long time to translate Tweet by Tweet into different languages and stay on top of what is relevant to you.” (This text was translated from Portuguese by Google.)

The posts will display an alert at the bottom notifying the user that the tweet has been translated and by whom. Although the company has not stated whether it will partner with any machine translation providers, it mentions both Google and Microsoft as possible options. The company could also be weighing the possibility of combining the two services to provide the most accurate translation available.

The new feature comes with potential issues; default translation could become a frustration for users. One obvious problem facing machine translation today is the inaccuracy of some translations. Furthermore, bilingual or multilingual users, who can choose only one primary language, might prefer a feature that lets them read tweets in both languages, or might prefer to keep the opt-in structure.

Whether these issues will hold back the scaling of the new feature is not clear, but Twitter has not yet announced any plans to expand automatic translation beyond Brazil.


Translation Technology Emphasizes UX with Timekettle’s Earbuds

The offline speech translation capabilities of Timekettle’s newest M2 Earbuds build on the company’s WT2 Plus translation technology, along with providing many classic earbud functions, like listening to music, making calls, and asking Siri to tell a joke.

As translation technology becomes more and more relevant in global communication, the industry is witnessing creative strategies to optimize user experience with practical, user-friendly gadgets and apps. Between developments like a tech startup’s Smart Mask and Microsoft Translator’s Auto Mode, innovation continues to thrive and adapt to the demands of a multilingual world.

Adding to the array of translation technology, AI company Timekettle Technologies has emphasized user experience with the creation of its True Wireless (TWS) M2 Translator Earbuds. When connected to a smartphone, a user can tap the M2 to activate the voice assistant, which records the user’s speech and provides a near real-time machine translation. The M2 builds upon Timekettle’s earlier earbud designs: currently supporting 40 languages and 93 accents, it mirrors the language capacity of Timekettle’s WT2 Plus, with the addition of offline language packs available for download.

Partnering with enterprises like Google, Microsoft, and iFLYTEK, Timekettle has optimized the speed and accuracy of the translations. Furthermore, with the numerous accents the voice assistant supports, the earbuds do not just translate language but localize the translations, tailoring the accent to the relevant locale to better capture nuances and idiosyncrasies across multiple languages, dialects, and accents.

Powered by a Qualcomm Bluetooth 5.0 chip and Qualcomm’s aptX audio codec compression, the earbuds also perform many standard earbud functions, like playing music and podcasts, making phone calls, or delivering commands to a phone’s voice assistant.

Like the WT2 Plus, the M2 includes three modes, depending on the user’s desired function. Working in conjunction with the Timekettle app, Lesson Mode lets the user capture speech from a speaker in the room or from external audio and translate it into the user’s native language. Alternatively, Speaker Mode translates the user’s speech and plays it through the phone’s speaker. Finally, in Touch Mode, two users each wear one earbud and tap the force sensor on the M2 for near-simultaneous translated playback.

The latter mode might pose risks during the pandemic, with two users sharing the same pair of earbuds and possibly contributing to the spread of contagious illness. However, Timekettle has also created the ZERO Portable Mini Translator, which plugs into the user’s phone and records speakers of up to four different languages with its four-microphone array and adaptive noise-cancelling technology.

Another pitfall of the M2 is its inability to translate in-call speech: it can currently only record speech in person and provide translated playback.

While translation technology is still developing to meet the rising demand for translation services, the push for innovation with a focus on a product’s UX design will continue to cultivate user interest, especially once people can move about more freely and take full advantage of the newest developments.


Speech Recognition Tech Startup Voiceitt Raises $10 Million

The latest round of funding will help the company continue to improve upon its speech recognition software and bring greater access to communication for those living with speech impairments.

Last week, Israel-based speech recognition technology startup Voiceitt closed a $10 million Series A funding round led by Viking Maccabee Ventures. The company initially set out to develop a mobile app that uses AI to recognize and translate unintelligible and atypical speech in real time.

With dozens of medical conditions that can affect speech afflicting the population at high rates annually — including cerebral palsy, Parkinson’s disease, and stroke — the latest funding will help Voiceitt develop its software to serve those with speech impairments by providing better access to communication alternatives.

“Voiceitt provides a new dimension of independence and quality of life for people with speech and motor disabilities, and a compelling tool for those who care for and about them,” Voiceitt co-founder and CEO Danny Weissberg said. “With the impact of the COVID-19 pandemic, our objectives are not only to support the individual’s in-person communication, but also to assist healthcare professionals and support the continuum of care for their patients.”

Part of Amazon’s Alexa Accelerator in 2018, Voiceitt earned an Alexa Fund investment, along with funding from venture capital groups like Cahn Capital, Microsoft’s M12, AMIT Technion, and Connecticut Innovations. The company also received support from several nonprofits, including AARP and The Disability Opportunity Fund. It has since partnered with the state of Tennessee, working with healthcare services, speech therapists, and organizations that serve speech-impaired individuals.

One of the issues that Voiceitt is focused on addressing in this new round of funding is the limitations on the technology due to each person’s specific vocal patterns. Since each impairment can come with a particular set of needs, new data to train the software will be key to move the technology forward.

“Everyone’s impairment is different, but there are certain similarities within a particular group of speech impairments,” said Stas Tiomkin, Voiceitt’s co-founder and CTO. “We work very hard to collect speech samples and build a generic acoustic model that then gets customized.”

Although Voiceitt looks to create an app that successfully integrates into the lives of those living with speech impairments, company leaders recognize the technology goes far beyond an app. “As we continue our growth, we are committed to our mission of making speech accessible to all,” Voiceitt co-founder and executive vice president Sara Smolley said. “Our long-term vision is to integrate Voiceitt’s customizable speech recognition with mainstream voice technologies to enable environmental control through a universally accessible voice system. Voiceitt’s versatile technology can be applied in a range of voice-enabled applications in diverse contexts and environments.”

Journalist at MultiLingual Magazine

Jonathan Pyner is a poet, freelance writer, and translator. He has worked as an educator for nearly a decade in the US and Taiwan, and he recently completed a master’s of fine arts in creative writing.


Translation App Now Features Auto Mode

Translation app companies are continually developing new features to better facilitate users’ experiences abroad. Microsoft Translator’s Auto mode is the latest addition to its UX design.

For travelers and expatriates planning their post-COVID journeys abroad, using the right translation app will play a major role in communicating for directions, housing arrangements, contracts, conversations with new friends, and just about anything involving language.

Choosing the translation app that will work best is a personal choice that depends on several variables and needs. Among considerations like the number of languages, the ability to translate both audio and text, and the user interface, convenience often takes precedence. After all, conversations rarely seem worth the effort when each party must wait for the app to process speech or text.

With convenience one of the keys to a successful translation app, many companies are seeking the best ways to update their apps with new features and machine translation software. Some apps, like Google Translate, have special features that let users snap pictures of text for the app to translate.

Microsoft Translator also recently announced a new feature called Auto mode that removes a step from the translation process. The app’s designers recognized that for the many people who rely on their phones to translate conversations, the phone can create as many problems as it solves.

For example, users often must speak into a translation app, tap translate, and await the results. The process leaves users focused on the functionality of the app rather than on the conversation itself.

The new Auto mode feature lets users choose the languages and tap the microphone only once. Auto mode then registers each party’s speech and turns green to signal it is ready for a response. By removing the need to tap translate with each new statement, the app may give conversational users a more convenient way to make friends or ask for directions while traveling abroad.

While the update is currently available only for iOS, Microsoft is working on a version compatible with Android as well.
