Translation Technology

Vuzix Smart Glasses Selected for Translation Solutions

Translation Technology

Expanding its e-Sense productivity solution, Rozetta Corporation has selected Vuzix Smart Glasses for a project to create a COVID-19 risk-mitigation solution for hospitals.

As the technology sector continues to develop new uses for machine translation, smart glasses have caught on as useful, hands-free devices that help users across a multitude of industries communicate and seek information. Looking to extend the functionality of smart glasses at construction and healthcare sites, Rozetta Corporation has selected Vuzix Smart Glasses for two translation-related solutions.

A Japanese supplier of translation products and services, Rozetta chose Vuzix to provide its M400 Smart Glasses for a project called e-Sense, which Rozetta began as a collaborative effort with Tobishima Corporation, a global construction firm. e-Sense will use the Smart Glasses to deliver automatic simultaneous interpretation, remote communications and support, and the recording of voice, text, images, and video.
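
Rozetta has not published technical details of how e-Sense works, but the hands-free interpretation described above can be pictured as a simple capture, translate, and display cycle running against the glasses. The sketch below is only an illustration of that idea; capture_speech, translate, and show_on_display are hypothetical placeholders, not part of any Vuzix or Rozetta API.

```python
# Illustrative sketch of a hands-free interpretation loop on smart glasses.
# Every function here is a hypothetical placeholder, not a Vuzix or Rozetta API.

import time
from dataclasses import dataclass


@dataclass
class Utterance:
    text: str        # recognized source-language text
    language: str    # detected source language, e.g. "ja"


def capture_speech() -> Utterance | None:
    """Placeholder: return the next recognized utterance from the microphone."""
    raise NotImplementedError


def translate(text: str, source: str, target: str) -> str:
    """Placeholder: call a machine translation backend."""
    raise NotImplementedError


def show_on_display(text: str) -> None:
    """Placeholder: render subtitles on the glasses' heads-up display."""
    raise NotImplementedError


def interpretation_loop(target_language: str = "en") -> None:
    """Continuously capture speech, translate it, and display the result."""
    while True:
        utterance = capture_speech()
        if utterance is None:
            time.sleep(0.1)  # nothing heard yet; poll again
            continue
        translated = translate(utterance.text, utterance.language, target_language)
        show_on_display(translated)
```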

Rozetta first formed e-Sense as a productivity solution for construction sites built around automatic interpretation, leveraging big data for a multifunctional hands-free system. However, Rozetta also plans to use e-Sense as a risk-mitigation tool at hospitals during the COVID-19 pandemic, making it possible to reduce crowding and limit close-contact interactions. Additionally, e-Sense will likely expand to other industries, such as manufacturing, security, and aviation, along with any other industry that could take advantage of its hands-free operation and data gathering.

Separately, Rozetta announced that it would begin a collaborative research effort with St. Luke’s International Hospital in Tokyo, which would deploy the Vuzix Blade Smart Glasses with its T-4PO Medicare product. With the rising need for translation and interpretation between foreign patients and medical practitioners in Japan, an automatic translation system may streamline hospital communications.

“Vuzix Smart Glasses are a natural solution for hands-free language translation, and we are delighted to have an established translation solution provider such as Rozetta using them for two different applications that could each ultimately represent broad usage opportunities for us within their respective market verticals,” said Paul Travers, President and CEO of Vuzix.

e-Sense initially supports Japanese, English, and Vietnamese, and Rozetta plans to extend support to other languages.


Gondi and Hindi Translations Streamlined with INMT App

Translation Technology

CGNet Swara, IIIT-Naya Raipur, and Microsoft Research Lab have made progress during lockdown on a new interactive neural machine translation app that they hope will enable the Gond Adivasi community to revive youth engagement with the Gondi language.

A joint effort among local Indian organizations and Microsoft Research Lab has produced an interactive neural machine translation (INMT) app that translates between Hindi and Gondi. Spoken by over two million people, Gondi is used primarily in several Indian states, including Madhya Pradesh, Gujarat, Telangana, Maharashtra, Chhattisgarh, Karnataka, and Andhra Pradesh.

Although the language is prevalent in many of India’s central regions, it remains a predominantly spoken language. Furthermore, the language varies based on the state, with many dialects passed down orally over centuries.

The absence of written literature in Gondi, along with a shortage of local teachers who can instruct in the language, has led the nonprofit CGNet Swara to develop ways for Gondi speakers to stay informed despite a lack of access to Hindi-language services.

An Indian voice-based online portal, CGNet Swara has worked with communities in central tribal India by providing them a platform to report on local news through phone calls. Seeking more streamlined communications, the organization partnered with the International Institute of Information Technology Naya Raipur and Microsoft Research Lab to create the app that translates between Gondi and Hindi.
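
Interactive neural machine translation generally works by suggesting target-language continuations while a human translator types, refreshing the suggestions as the draft grows. The partners have not published their interface in detail, so the sketch below is a generic illustration of that loop; suggest_continuations stands in for whatever Hindi-Gondi model the team uses.

```python
# Generic illustration of an interactive NMT loop: the model proposes
# continuations of the translator's partial target sentence and refreshes its
# suggestions as the human keeps typing. suggest_continuations is a
# hypothetical stand-in for the underlying Hindi-Gondi model.

def suggest_continuations(source: str, target_prefix: str, k: int = 3) -> list[str]:
    """Placeholder: return the model's top-k continuations of target_prefix."""
    raise NotImplementedError


def interactive_translate(source: str) -> str:
    """Build a translation word by word with model suggestions and human choices."""
    prefix = ""
    while True:
        options = suggest_continuations(source, prefix)
        print(f"Source: {source}")
        print(f"Draft:  {prefix}")
        for i, option in enumerate(options, start=1):
            print(f"  [{i}] {option}")
        choice = input("Pick a number, type your own word, or press Enter to finish: ")
        if choice == "":
            return prefix.strip()
        if choice.isdigit() and 1 <= int(choice) <= len(options):
            prefix += options[int(choice) - 1] + " "
        else:
            prefix += choice + " "
```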

“We did one workshop at the Microsoft Research Lab office in Bengaluru in 2019. But the app was developed during the lockdown and is likely to be released later this month,” said Shubhranshu Choudhary, a former journalist who co-founded CGNet Swara. Since the onset of the pandemic, the organization has worked with community members from several demographics to collect language data; to date, researchers have gathered more than 3,000 words and 35,000 sentences and phrases. The efforts come as organizations around the world work to deliver translated resources to communities in need.

“Gondi is a very good language to use as a case study as it has a substantial speaker base across six states. It is not endangered and yet zero resources are available for the same. Through CGNet Swara, we became aware of the various issues that the Gond Adivasis face, and how access to the language could help the cultural identity of the community,” says Kalika Bali, principal researcher, Microsoft Research Lab.

Along with the translation app, Choudhary also mentioned the Chhattisgarh government’s announcement that education in the state will shift to include instruction in tribal languages. To contribute to the transition, CGNet Swara has been working with Pratham Books on an ongoing project to translate 400 children’s books.


Automatic Translation Could Come to Twitter

Translation Technology

Twitter has announced that it will expand its translation feature by testing automatic translation with a small group of users in Brazil. The social media giant currently supports built-in translations that let users click or tap to translate a tweet written in a foreign language.

For users who set their primary language to English on Twitter, any tweet in a non-English language displays a translation button. As convenient as this method might appear, it also adds a step to the user experience, requiring manual action to opt into the translation. To simplify that process, Twitter hopes to experiment with automatic translation so that translations appear for the user automatically.
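
Twitter has not said how the experiment is implemented, but the difference between today's opt-in button and the automatic test comes down to when the translation call happens. Below is a minimal sketch of the automatic path, assuming a hypothetical translate backend and simplified data shapes; none of this reflects Twitter's actual internals.

```python
# Minimal sketch of auto-translating timeline tweets whose language differs
# from the user's primary language. The translate call and data shapes are
# hypothetical, not Twitter's internals.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Tweet:
    text: str
    language: str                    # language tag attached to the tweet, e.g. "pt"


@dataclass
class RenderedTweet:
    text: str
    translated: bool
    provider: Optional[str] = None   # shown in the attribution notice at the bottom


def translate(text: str, source: str, target: str) -> str:
    """Placeholder for a call to a provider such as Google or Microsoft."""
    raise NotImplementedError


def render(tweet: Tweet, user_language: str = "en",
           provider_name: str = "example MT provider") -> RenderedTweet:
    """Translate automatically when the tweet's language differs from the user's."""
    if tweet.language == user_language:
        return RenderedTweet(text=tweet.text, translated=False)
    translated_text = translate(tweet.text, tweet.language, user_language)
    return RenderedTweet(text=translated_text, translated=True, provider=provider_name)
```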

In a blog post, Twitter explains:

“To make it easier to understand the conversations you follow on Twitter, we’re experimenting with automatic translation for Tweets in other languages that appear on your homepage. We know that it can sometimes take a long time to translate Tweet by Tweet into different languages and stay on top of what is relevant to you.” (This text was translated from Portuguese by Google.)

The posts will provide an alert at the bottom to notify the user that the tweet has been translated and by whom. Although the company did not state whether it will partner with any machine translation providers, it does mention both Google and Microsoft as possible options. The company could also be weighing the possibility of combining the services to provide the most accurate translation available.

The new feature is not without potential issues: translation by default could frustrate some users. One obvious problem facing machine translation is the inaccuracy of some of its output. Furthermore, bilingual or multilingual users, who can choose only one primary language, might prefer a feature that lets them read tweets in both languages or keep the current opt-in structure.

It is not clear whether these issues will hold back the feature's rollout, but Twitter has not yet announced plans to expand automatic translation beyond Brazil.


Translation Technology Emphasizes UX with Timekettle’s Earbuds

Translation Technology

The offline speech translation capabilities of Timekettle’s newest M2 earbuds build on the company’s WT2 Plus translation technology while also providing many classic earbud functions, like listening to music, making calls, and asking Siri to tell a joke.

As translation technology becomes more and more relevant in global communication, the industry is witnessing creative strategies to optimize user experience with practical, user-friendly gadgets and apps. With developments like a tech startup’s Smart Mask and Microsoft Translator’s Auto mode, innovation continues to thrive and adapt to the demands of a multilingual world.

Adding to the array of translation technology, AI company Timekettle Technologies has put user experience at the center of its True Wireless Stereo (TWS) M2 Translator Earbuds. When connected to a smartphone, a user can tap the M2 to activate the voice assistant, which records the user’s speech and provides a near real-time machine translation. The M2 builds upon Timekettle’s earlier earbud designs. Currently supporting 40 languages and 93 accents, the M2 mirrors the language capacity of Timekettle’s WT2 Plus, with the addition of offline language packs available for download.

Partnering with enterprises like Google, Microsoft, and iFLYTEK, Timekettle has optimized the speed and accuracy of the translations. Furthermore, with the numerous accents the voice assistant possesses, the earbuds do not just translate language, but localize the translations to better capture nuances and idiosyncrasies across multiple languages, dialects, and accents. When prompted, the earbuds will record the user’s speech and produce a near real-time machine translation of the speech, tailoring the accent to the relevant locale. 
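
Timekettle has not documented its internal pipeline, but the behavior described above maps onto the familiar speech-to-speech chain of recognition, machine translation, and synthesis in the target locale. A minimal sketch of that chain follows, with every function a hypothetical placeholder rather than a Timekettle API.

```python
# Hypothetical sketch of a speech-to-speech translation chain like the one the
# M2 earbuds expose: recognize speech, translate it, then synthesize audio in
# an accent appropriate to the target locale. None of these functions are
# Timekettle APIs; they are placeholders for the underlying services.

def recognize(audio: bytes, source_locale: str) -> str:
    """Placeholder: automatic speech recognition."""
    raise NotImplementedError


def translate(text: str, source_locale: str, target_locale: str) -> str:
    """Placeholder: machine translation (online or an offline language pack)."""
    raise NotImplementedError


def synthesize(text: str, target_locale: str) -> bytes:
    """Placeholder: text-to-speech tuned to the target locale's accent."""
    raise NotImplementedError


def speech_to_speech(audio: bytes, source_locale: str, target_locale: str) -> bytes:
    """Run the full chain and return audio to play back through the earbud."""
    text = recognize(audio, source_locale)
    translated = translate(text, source_locale, target_locale)
    return synthesize(translated, target_locale)
```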

Powered by a Qualcomm Bluetooth 5.0 chip and Qualcomm’s aptX audio codec compression, the earbuds also perform many normal earbud functions, like playing music and podcasts, making phone calls, or delivering commands to a phone’s voice assistant.

Like the WT2 Plus, the M2 includes three different modes, depending on the user’s desired function. Working in conjunction with the Timekettle app, Lesson Mode lets the user capture speech from a speaker in the room or from external audio and translate it into the user’s native language. Alternatively, Speaker Mode translates the user’s speech and plays it through the phone’s speaker. Finally, in Touch Mode, two users each wear one earbud and tap the force sensor on the M2 for near-simultaneous translated playback.
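
The three modes differ mainly in where audio is captured and where the translated playback goes. A rough sketch of that routing, again with purely illustrative labels:

```python
# Rough sketch of how the three modes route audio; the capture source and
# playback target are the main differences. Labels are illustrative only.

from enum import Enum, auto


class Mode(Enum):
    LESSON = auto()   # capture room audio, translate into the wearer's language
    SPEAKER = auto()  # capture the wearer, play the translation on the phone speaker
    TOUCH = auto()    # two wearers, one earbud each; tap the force sensor for playback


def route(mode: Mode) -> tuple[str, str]:
    """Return the (capture source, playback target) pair for a mode."""
    if mode is Mode.LESSON:
        return ("phone microphone (room audio)", "wearer's earbud")
    if mode is Mode.SPEAKER:
        return ("wearer's earbud microphone", "phone speaker")
    return ("each wearer's earbud microphone", "the other wearer's earbud")


print(route(Mode.TOUCH))
```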

Touch Mode might pose risks during the pandemic, with two users sharing the same pair of earbuds and possibly contributing to the spread of contagious illness. However, Timekettle also has created the ZERO Portable Mini Translator, which plugs into the user’s phone and records speakers of up to four different languages with its four-microphone array and adaptive noise-cancelling technology.

Another pitfall with the M2 is its inability to translate in-call speech. It currently can only record speech in person and provide the translated playback.

While translation technology is still developing to meet the rising demand for translation services, the push for innovation with a focus on a product’s UX design will continue to cultivate user interest, especially once people can move about more freely and take full advantage of the newest developments.


Speech Recognition Tech Startup Voiceitt Raises $10 Million

Translation Technology

The latest round of funding will help the company continue to improve upon its speech recognition software and bring greater access to communication for those living with speech impairments.

Last week, Israel-based speech recognition technology startup Voiceitt closed a $10 million Series A funding round led by Viking Maccabee Ventures. The company initially set out to develop a mobile app that uses AI to recognize and translate unintelligible and atypical speech in real time.

Speech impairments can result from dozens of medical conditions that affect large numbers of people each year, including cerebral palsy, Parkinson’s disease, and stroke. The latest funding will help Voiceitt develop its software to serve those with speech impairments by providing better access to communication alternatives.

“Voiceitt provides a new dimension of independence and quality of life for people with speech and motor disabilities, and a compelling tool for those who care for and about them,” Voiceitt co-founder and CEO Danny Weissberg said. “With the impact of the COVID-19 pandemic, our objectives are not only to support the individual’s in-person communication, but also to assist healthcare professionals and support the continuum of care for their patients.”

Part of Amazon’s Alexa Accelerator in 2018, Voiceitt earned an Alexa Fund investment, along with funding from venture capital groups like Cahn Capital, Microsoft’s M12, AMIT Technion, and Connecticut Innovations. The company also received support from several nonprofits, including AARP and The Disability Opportunity Fund. It has since partnered with the state of Tennessee, working with healthcare services, speech therapists, and organizations that serve speech-impaired individuals.

One of the issues that Voiceitt is focused on addressing in this new round of funding is the limitations on the technology due to each person’s specific vocal patterns. Since each impairment can come with a particular set of needs, new data to train the software will be key to move the technology forward.

“Everyone’s impairment is different, but there are certain similarities within a particular group of speech impairments,” said Stas Tiomkin, Voiceitt’s co-founder and CTO. “We work very hard to collect speech samples and build a generic acoustic model that then gets customized.”
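
The approach Tiomkin describes, a group-level acoustic model that is then customized per speaker, amounts to fine-tuning a generic model on a small set of individual speech samples. The outline below illustrates only that idea; the classes and training calls are hypothetical, not Voiceitt's actual stack.

```python
# Outline of "generic model, then per-user customization": start from an
# acoustic model trained on a group of similar speech impairments, then
# fine-tune it on a small set of samples from one speaker. The classes and
# training calls are hypothetical placeholders, not Voiceitt's implementation.

from dataclasses import dataclass


@dataclass
class SpeechSample:
    audio: bytes
    transcript: str


class AcousticModel:
    """Placeholder for a speech recognition model."""

    def fine_tune(self, samples: list[SpeechSample]) -> "AcousticModel":
        raise NotImplementedError

    def transcribe(self, audio: bytes) -> str:
        raise NotImplementedError


def personalize(generic_model: AcousticModel,
                user_samples: list[SpeechSample]) -> AcousticModel:
    """Adapt the group-level model to one speaker's vocal patterns."""
    return generic_model.fine_tune(user_samples)
```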

Although Voiceitt looks to create an app that successfully integrates into the lives of those living with speech impairments, company leaders recognize the technology goes far beyond an app. “As we continue our growth, we are committed to our mission of making speech accessible to all,” Voiceitt co-founder and executive vice president Sara Smolley said. “Our long-term vision is to integrate Voiceitt’s customizable speech recognition with mainstream voice technologies to enable environmental control through a universally accessible voice system. Voiceitt’s versatile technology can be applied in a range of voice-enabled applications in diverse contexts and environments.”


Translation App Now Features Auto Mode

Translation Technology

Translation app companies are continually developing new features that will better facilitate experiences abroad for users. Microsoft Translator’s Auto mode is the latest addition to its UX design.

For travelers and expatriates planning their post-COVID journeys abroad, using the right translation app will play a major role in communicating about directions, housing arrangements, contracts, conversations with new friends, and just about anything else involving language.

Choosing which translation app will work best is a personal decision that depends on several variables and necessities. Among considerations like the number of supported languages, the ability to translate both audio and text, and the user interface, convenience often takes precedence. After all, conversations rarely seem worth the effort when each party has to wait for the app to process speech or text.

With convenience one of the keys to a successful translation app, many companies are seeking the best ways to build and update their apps with new features and machine translation software. Some apps, like Google Translate, let users snap pictures of text for the app to translate.

Microsoft Translator also recently announced a new functionality called Auto mode that will remove a step in the translation process. The app’s designers recognized that for many relying on their phones to translate conversations, the phone can create as many problems as it solves.

For example, users often must speak into a translation app, tap translate, and await the results. The process leaves users obsessing over the functionality of the app, rather than keeping the focus on the conversation itself.

The new Auto mode feature lets users choose the two languages and tap the microphone only once. Auto mode then registers each party’s speech and turns green to signal that it is ready for a response. By removing the need to tap translate with each new statement, the app may give conversational users a more convenient way to make friends or ask for directions while traveling abroad.
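
Microsoft has not described Auto mode's internals, but the behavior, a single tap followed by alternating listening, translating, and signaling readiness, amounts to a small loop. A hedged sketch, with hypothetical listen, detect, translate, and speak helpers:

```python
# Hedged sketch of an "auto mode" conversation loop: after a single tap the app
# keeps listening, translates whichever of the two chosen languages it hears,
# and signals (here, just prints) when it is ready for the next speaker. The
# helpers below are hypothetical, not Microsoft Translator APIs.

def listen() -> bytes:
    """Placeholder: record until the current speaker pauses."""
    raise NotImplementedError


def detect_language(audio: bytes, candidates: tuple[str, str]) -> str:
    """Placeholder: decide which of the two chosen languages was spoken."""
    raise NotImplementedError


def translate_speech(audio: bytes, source: str, target: str) -> bytes:
    """Placeholder: recognize, translate, and synthesize the reply audio."""
    raise NotImplementedError


def speak(audio: bytes) -> None:
    """Placeholder: play the translated audio."""
    raise NotImplementedError


def auto_mode(lang_a: str, lang_b: str, turns: int = 10) -> None:
    """One tap starts the loop; each turn is translated into the other language."""
    for _ in range(turns):
        print("listening...")  # stand-in for the green "ready" indicator
        audio = listen()
        source = detect_language(audio, (lang_a, lang_b))
        target = lang_b if source == lang_a else lang_a
        speak(translate_speech(audio, source, target))
```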

While the update is currently available only for iOS users, Microsoft is working on a version compatible with Android as well.


Automated Localization AI to Join Kaltura REACH

Translation Technology

Kaltura REACH will integrate SyncWords’ automated localization to improve Kaltura’s video AI captioning accuracy and speed with hopes of increasing viewership.

SyncWords announced recently that it has joined the Kaltura Video Technology Marketplace. SyncWords provides video captioning and subtitling in 100+ languages, and will now provide Kaltura customers with automated translation of both live and on-demand video.

The news comes at a time when multimedia localization is on the rise.

As automated localization becomes increasingly foundational to reaching a global audience and increasing viewership, the partnership will reduce significant barriers for content creators, such as high costs and long turnaround times. With machine-translated subtitles, organizations that caption videos can now order translations and see them appear automatically in the Kaltura player.

Throughout the translation process, SyncWords’ media localization AI analyzes content and metadata, assembling the translations into UX-optimized subtitles, reportedly assigning word-level speech pacing for timing accuracy. SyncWords’ automated subtitles are currently used in the corporate and academic worlds as well as by broadcasters and over-the-top (OTT) providers, which stream media directly to viewers via the internet.
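
SyncWords has not published its algorithm, but assembling a translation into timed subtitles generally means splitting the text into readable cues and spreading them across the duration of the original speech segment. The simplified sketch below illustrates that general technique with made-up timings; it is not SyncWords' implementation.

```python
# Simplified sketch of turning a translated segment plus the original segment's
# timing into subtitle cues. This illustrates the general technique only, not
# SyncWords' algorithm; the example text and timings are made up.

from dataclasses import dataclass


@dataclass
class Cue:
    start: float   # seconds
    end: float     # seconds
    text: str


def build_cues(translated_text: str,
               segment_start: float,
               segment_end: float,
               max_chars: int = 42) -> list[Cue]:
    """Split a translated segment into cues, allocating time to each cue in
    proportion to its share of the characters."""
    words = translated_text.split()
    lines: list[str] = []
    current = ""
    for word in words:
        candidate = f"{current} {word}".strip()
        if len(candidate) > max_chars and current:
            lines.append(current)
            current = word
        else:
            current = candidate
    if current:
        lines.append(current)

    total_chars = sum(len(line) for line in lines) or 1
    duration = segment_end - segment_start
    cues, cursor = [], segment_start
    for line in lines:
        share = duration * len(line) / total_chars
        cues.append(Cue(start=cursor, end=cursor + share, text=line))
        cursor += share
    return cues


# Example: one translated segment spoken between 12.0 s and 17.5 s.
for cue in build_cues("Bienvenue à la conférence annuelle sur la localisation des médias",
                      12.0, 17.5):
    print(f"{cue.start:5.2f} -> {cue.end:5.2f}  {cue.text}")
```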

Kaltura created the REACH video captioning and enrichment suite in order to combine automatic machine-generated transcription with professional human captioning and translations. Using automated speech recognition, an algorithm will determine language and provide machine-based captions for indexing and search.
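
That last step is what makes machine captions useful for search: each caption segment carries a timestamp, so even a simple inverted index can map words to the moments they are spoken. A toy illustration of that idea, not Kaltura's REACH engine:

```python
# Small illustration of why machine captions make video searchable: each caption
# segment carries a timestamp, so a simple inverted index maps words to the
# moments they are spoken. This is a generic sketch, not Kaltura's REACH engine.

from collections import defaultdict


def index_captions(captions: list[tuple[float, str]]) -> dict[str, list[float]]:
    """Map each word to the start times of the caption segments containing it."""
    index: dict[str, list[float]] = defaultdict(list)
    for start_time, text in captions:
        for word in text.lower().split():
            index[word.strip(".,!?")].append(start_time)
    return index


# Example with toy machine-generated captions (start time in seconds, text).
captions = [
    (0.0, "Welcome to the quarterly localization review."),
    (4.2, "First, the subtitle turnaround times."),
    (9.8, "Localization costs fell this quarter."),
]
index = index_captions(captions)
print(index["localization"])   # [0.0, 9.8] -- jump points for a search hit
```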

“Our main goal is to offer our customers best-of-breed technology that is served directly from their trusted video platform,” says Liad Eshkar, VP of business development at Kaltura. “We see growing demand for affordable solutions for localization and personalization of videos.”


SDL Partners with Reynen Court

Business News, Translation Technology

According to a report released by Business Wire, the Reynen Court LLC platform has partnered with SDL Machine Translation. Reynen Court provides law firms and legal departments with a secure platform for AI and other legal technology, helping them adopt and manage cloud-based software applications. Reynen Court maintains a product catalog of legal tech vendors in case management, analytics, legal research, and document creation. Speaking about the new partnership, the CEO of Reynen Court said, “Machine translation is among [our customers’] top investment priorities.”

While machine translation does not replace human services, it can expand language capacity when it comes to translating documents and files, most notably in the field of multilingual e-discovery. This is of particular use when lawyers are dealing with large cases, and they aren’t sure what’s going to be relevant in the discovery process. As Katie Botkin writes in a Best Lawyers article:

“E-discovery is the process by which a lawyer searches electronic documents and data for potential use in a case. They might be Word documents, PDFs, emails, spreadsheets, images, databases, or a host of other file types — potentially millions of words to sort through. Discovery can include looking at the metadata of electronically stored files, if it’s relevant. This would tell you, for example, when a particular file was created and what its origins are.

Wrangling all these documents can be tricky, and if any or all of the data is in a language other than English, the difficulty increases exponentially. It’s not just a matter of translation — it’s a matter of keeping the translations paired with the original files. Multilingual e-discovery is both a language challenge and a management challenge.”
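
The pairing problem Botkin describes, keeping each machine translation tied to its source file and that file's metadata, is at bottom a bookkeeping structure. A minimal, hypothetical sketch of such a record follows; it is not drawn from SDL's or Reynen Court's products.

```python
# Minimal sketch of keeping a machine translation paired with its source file
# and metadata during multilingual e-discovery. The structure is illustrative,
# not drawn from SDL's or Reynen Court's products.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class SourceDocument:
    path: str               # original file as collected
    language: str           # detected language, e.g. "de"
    created: datetime       # taken from the file's metadata
    custodian: str          # who the file was collected from


@dataclass
class TranslatedDocument:
    source: SourceDocument  # the pairing with the original is kept explicit
    target_language: str
    text: str
    engine: str             # which MT system produced the draft
    reviewed: bool = False  # flipped once a human has checked relevance


def needs_review(docs: list[TranslatedDocument]) -> list[TranslatedDocument]:
    """Return translations still awaiting human review, source links intact."""
    return [d for d in docs if not d.reviewed]
```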

Machine translation is only one of many tech advances affecting the legal field; courthouses are, of course, already employing technology in legal proceedings. To accommodate social distancing measures, a courthouse in Champaign County, Illinois, has begun adjusting its translation services to incorporate headsets and microphones that transmit to the client.


$6 Million in New Funding for KUDO Remote Interpreting Platform

Business News, Interpretation, Translation Technology

KUDO, a New York-headquartered, cloud-based virtual interpreting platform, has been flying high since the beginning of the year as travel bans and work-from-home policies have decimated the in-person conference space. Virtual interpreting is exactly what it sounds like: rather than in person or over the phone, interpreters work within an online platform of choice. Virtual interpreting includes remote simultaneous interpreting, which is what KUDO does. Simultaneous interpreting is necessary for fast-paced conference settings where participants may need to keep up with presenters in their native languages. Some previously popular remote interpreting solutions, such as over-the-phone healthcare interpretation, required waiting for the interpretation.

With the sudden worldwide switch to online conferences, KUDO found itself in an unexpected position. With demand outpacing supply, the company grew to 7,500 simultaneous interpreters in over 70 languages and needed cash to keep growing. In a round led by Felicis Ventures, investors including ID8 Investments, Global Founders Capital, Advancit Capital, and AirAngels injected $6 million into the company.

Fardad Zabetian, CEO and Founder of KUDO, Inc.

“We are going to use the investment to further develop the product by growing our engineering and customer success teams,” says Fardad Zabetian, founder and CEO of KUDO in an exclusive interview. “Our fully-equipped virtual multilingual conference room in the cloud makes you feel like you are at the United Nations.”

Ewandro Magalhães, Chief Language Officer at KUDO, highlights that “KUDO offers features like parliamentary voting and polling, document distribution, and sign language interpreting, which makes it one of the most inclusive web conferencing solutions on the market today.”

A year ago, pundits and analysts described virtual interpreting as a solution in search of a problem. In the time of COVID-19, it has become the solution to the problem, with constant improvements and new features. Just this week, virtual interpreting provider Boostlingo is releasing three new features, which will go live July 24. Boostlingo can now integrate with Zoom and also provides real-time quality-of-service monitoring. Another feature going live on Boostlingo this week is four-way video conferencing. While the company had these features in its pipeline before the onset of the pandemic, their development was prioritized in response to changing requirements in the virtual interpreting space due to COVID-19.
