San Francisco, CA—August 2, 2021— Unbabel, an AI-powered Language Operations platform that helps businesses deliver multilingual support at scale, today announced the launch of MT-Telescope – a new tool that enables developers and users of Machine Translation (MT) systems to deeply analyze and understand MT quality performance. Building on Unbabel’s automated quality measurement framework COMET, MT-Telescope is an open source tool that for the first time lifts the hood on MT quality analysis and provides unique granularity and quantitative insights into the quality performance of MT systems.
“At Unbabel, we constantly work on developing, training, maintaining, and deploying MT systems at a rapid pace and to high quality standards. This challenging need drives our research and development objectives, especially in the domain of quality analysis and evaluation,” said Alon Lavie, VP of Language Technologies at Unbabel. “MT-Telescope helps our LangOps specialists and development teams make smarter decisions for customers about which MT system better suits their needs, and enables the MT research community to easily use best practice analysis methods and tools to rigorously benchmark their advances.”
Typically, MT quality measurement metrics such as COMET, BLEU, or METEOR provide a single overall quality score for a data set. MT-Telescope takes this quality scoring a step further by exposing the underlying factors behind performance, zooming in with a fine-grained analysis of translation accuracy down to individual words, terminology, and sentences.
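To see why a single corpus-level score can fall short, consider a minimal sketch in Python. The scores below are purely illustrative stand-ins for segment-level quality scores, not real COMET output: two hypothetical systems share the same average, yet their per-segment behavior differs sharply.

```python
from statistics import mean, pstdev

# Hypothetical segment-level quality scores (0-1) for two MT systems;
# the numbers are illustrative, not real COMET output.
scores_a = [0.80, 0.80, 0.80, 0.80]  # consistently adequate
scores_b = [0.60, 1.00, 0.60, 1.00]  # alternates between weak and excellent

# A single corpus-level score cannot tell these systems apart:
same_overall = round(mean(scores_a), 2) == round(mean(scores_b), 2)

# ...but the segment-level distributions differ sharply:
spread_a, spread_b = pstdev(scores_a), pstdev(scores_b)
```

Here `same_overall` is true even though system B is far less predictable, which is exactly the kind of difference a finer-grained analysis surfaces.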
“Our research shows that one of the biggest needs in applying machine translation is insight into its usability, an area where current methods fall short,” comments Dr. Arle Lommel, senior analyst at CSA Research. “Guidance-focused evaluation that focuses on how well MT suits particular use cases will help extend the technology to new areas and increase acceptance of machine translation-based workflows.”
In addition to the greater degree of granularity, MT-Telescope has an intuitive visual browser interface that lets non-technical users compare two MT systems and assess which is the better fit for their objectives. MT-Telescope’s visualizations provide comparison across three key areas:
A comparison of quality scores for subsets of the data, such as named entities (e.g. product or brand names), terminology (i.e. domain-specific phrases), or segment length (i.e. the length of the translated sentence)
A side-by-side error analysis of each MT system, allowing for substantive contrastive comparisons
A visualization of the distribution of quality scores between the two systems
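The first of these comparisons, bucketing quality scores by properties of the data, can be sketched in a few lines of Python. Everything here is hypothetical: the segments, the scores, and the two-word length threshold are illustrative assumptions, not MT-Telescope's actual implementation.

```python
from statistics import mean

# Hypothetical evaluation data: (source_segment, score_system_x, score_system_y).
# Scores are illustrative stand-ins for segment-level quality scores.
segments = [
    ("Hi",                                0.95, 0.90),
    ("Thanks!",                           0.93, 0.88),
    ("Please reset my account password.", 0.70, 0.85),
    ("The refund was issued yesterday.",  0.68, 0.87),
]

def bucket(src: str) -> str:
    """Assign a segment to a length bucket (an assumed two-word threshold)."""
    return "short" if len(src.split()) <= 2 else "long"

# Group segment scores by bucket, then average each system per bucket.
buckets = {}
for src, x, y in segments:
    buckets.setdefault(bucket(src), []).append((x, y))

report = {
    b: (round(mean(x for x, _ in pairs), 2), round(mean(y for _, y in pairs), 2))
    for b, pairs in buckets.items()
}
```

In this toy data, system X wins on short segments while system Y wins on long ones, a trade-off an overall corpus score would average away.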
In addition to Unbabel’s COMET, MT-Telescope can ingest and compare MT systems based on a variety of quality scoring metrics such as Google’s BLEURT or Johns Hopkins’ Prism. To download MT-Telescope and read more about it, click here.
Unbabel eliminates language barriers so that businesses can thrive across cultures and geographies. The company’s language operations solution blends advanced artificial intelligence with human editors, for fast, efficient, high-quality translations that get smarter over time. Unbabel integrates seamlessly in any channel, so agents can deliver consistent multilingual support from within their existing workflows. This makes it easy for enterprises to grow into new markets and build customer trust in every corner of the world. Based in San Francisco, Calif., Unbabel works with leading customer support teams at brands such as Panasonic, Microsoft, Booking.com, and Udemy, to communicate effortlessly with customers around the world, no matter what language they speak. For more information, visit www.unbabel.com.