Over its 10-plus-year journey, memoQ, a leading translation management system (TMS), has seamlessly migrated to the cloud to meet customer demands and overcome geographic dispersion. We spoke with Balázs Kis, memoQ’s co-CEO, about the strategic alignment with Microsoft, the integration of generative AI, and the company’s commitment to user-centric innovation. Discover how memoQ addresses data and cloud security, how users’ ideas and suggestions are incorporated into the product, and how Balázs Kis sees the future of translation technology, AI, and the ways LLMs will fit into our lives and work.
Balázs, first of all, can you provide a brief overview of memoQ and its technology challenges? What motivated memoQ’s management to consider cloud migration?
memoQ cannot exist outside of the cloud. Our journey to the cloud has been underway for 10 years or so. But to answer the question, two main factors made it imperative to fully adopt cloud-based operations:
- Customers, even large ones, have long required us to “host” their systems. So, we rely on a cloud infrastructure to make memoQ TMS cloud systems available to clients.
- memoQ has always been a geographically dispersed organization; there is no “central” office where all co-workers gather. Most of the time, the internet is the only way to communicate and collaborate, so maintaining on-premises servers would be pointless.
How does cloud migration align with your overall business strategy?
In the classical sense, we have never been “migrating” to the cloud. Most of our systems have never existed outside the cloud, and memoQ simply cannot have a business strategy that does not rely on the cloud. We did start with a product that had the classic “client-server” architecture, but for many years now, memoQ has been transforming into a regular SaaS company — with success, if I might add.
How did the partnership between memoQ and Microsoft emerge?
It was organic and gradual. Previously, we were hosting our systems in various independent data centers where we rented physical machines. Years ago, we started to migrate to Azure, where we rented the first few resources without express assistance from Microsoft or its partners. Once our usage became noticeable (to Microsoft and its partners), they sought us out and the cooperation “officially” began.
How do you see the cloud enhancing operational efficiency within your organization and your clients?
Microsoft tells us that we belong to the “digital native” customer segment. Precisely because that is true, I cannot really answer this question. We transitioned not to the cloud, but to Microsoft Azure, at a time when both our organization and our customer base were growing rapidly (they still are, though not at the same pace), and luckily, we never had to experience what our operational efficiency would be without the cloud. Since memoQ is a geographically dispersed organization, it could not operate at all without keeping most or all of its information systems in the cloud.
When talking about cloud-based operations, we cannot escape the data security topic. How do you address security concerns related to cloud migration and operation?
Our Services and IT teams have been continuously working with Microsoft to tighten the security of our systems in the cloud. Since migration is not at the center of our attention, we focus on defending both our “internal” and customer-facing systems. We are continually learning about various security-related Microsoft technologies, and we are implementing them, too. These technologies encompass a wide variety of security operations such as monitoring, access control, and protection against malware and other attacks. This is important from the perspective of our customer base, some of which are quite apprehensive when it comes to hosting data in the cloud. But as it happens, we can protect our customers’ data very efficiently within our cloud infrastructure. In contrast, in an on-premises system, our customer is also exposed to the same range of attacks (if the system is connected to the internet), but it is entirely their responsibility — and cost — to protect the systems.
Building a technology ecosystem: What do you consider the most important milestones and challenges throughout the years?
I think the most important milestone came when we realized, more than a decade ago, that we would need to host memoQ systems for our customers. Before that, the memoQ translation management system had your garden-variety “client-server” architecture. At the time of memoQ’s inception, around 2006-2008, this was also revolutionary, at least at a price point that was affordable to translation companies. We decided that if we were going to host memoQ systems, we needed to create a proper multi-tenant cloud environment. From that point, we began to use remote servers at scale, which also brought about a transition of internal systems. As they say, the rest is history.
If we were to make the transition today, we wouldn’t need to develop the multi-tenant architecture ourselves… but that is also part of our history. As a result, memoQ is one of those rare translation management systems that can be used both in the cloud and in an on-premises deployment.
A significant new milestone arrived when we integrated generative AI into our product. Released in the first half of November 2023, memoQ 10.4 uses Azure OpenAI to implement a new generation of machine translation that adapts the users’ existing resources to new translations. This reduces menial post-editing work while leaving more time for the kind of translation that can only be done by humans (i.e., translation requiring either creativity or subject field research, or both).
User experience will remain a top priority for technology providers. How do you manage user insights within your company?
memoQ uses several sources of information when it comes to user insights and user experience. In-application telemetry provides automatic information collection about user habits and preferences; we conduct regular user interviews; we maintain a so-called memoQ Ideas Portal where ideas for new or updated functionality can be submitted and voted for; we run specific UX tests (A/B tests and the like); and we have recently started benchmarking ourselves against accessibility standards, too.
You wrote an insightful blog post about machine translation (MT) and LLM models evolution. What future do you see in the industry and for memoQ?
It is becoming increasingly clear that large language models don’t have the capacity, even when used in tandem with machine translation, to replace human input and oversight in translation and localization. This is especially the case with premium translation where human life or livelihood might depend on the accuracy and the nuances of the translated content (think of life sciences, large machinery, automotive fields, finance or legal translations – and the list goes on). For almost one year now, humanity has been learning how to relate to – and how to use – large language models. For example, we have learned that while we cannot rely on large language models for their information content, they are very good at producing and adapting text. If large language models receive reliable information from the outside (in the prompt – this is called in-context learning), they can fluently formulate text about it.
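The idea of in-context learning described above can be illustrated with a minimal sketch: instead of asking the model to translate from its own (unreliable) knowledge, the prompt supplies trusted material — reference translations and approved terminology — for the model to reuse. The helper name, language pair, and sample data below are purely illustrative assumptions, not memoQ’s actual implementation.

```python
def build_grounded_prompt(source_text, tm_matches, term_pairs):
    """Assemble a prompt that supplies reliable context in-line,
    so the model adapts trusted data instead of inventing content.
    (Hypothetical helper for illustration only.)"""
    lines = [
        "Translate the source text from English to German.",
        "Reuse the reference translations and terminology below.",
        "",
        "Reference translations:",
    ]
    # Translation-memory matches act as grounding examples.
    for src, tgt in tm_matches:
        lines.append(f"- EN: {src} -> DE: {tgt}")
    lines.append("Approved terminology:")
    # Term pairs pin down vocabulary the model must use.
    for src, tgt in term_pairs:
        lines.append(f"- {src} = {tgt}")
    lines += ["", f"Source text: {source_text}"]
    return "\n".join(lines)

prompt = build_grounded_prompt(
    "Click Save to store the project.",
    tm_matches=[("Click Open to load a project.",
                 "Klicken Sie auf Öffnen, um ein Projekt zu laden.")],
    term_pairs=[("project", "Projekt"), ("Save", "Speichern")],
)
print(prompt.splitlines()[0])
# → Translate the source text from English to German.
```

The resulting string would then be sent to a model endpoint (for example, an Azure OpenAI deployment); the point of the sketch is only that the reliable information travels inside the prompt.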
Regarding memoQ’s future, AI depends on high-quality, well-curated, authentic data for training. In translation, the place where language data is checked, double-checked, and approved is the translation management system — one like memoQ.
If you don’t have the training data from systems like memoQ, you risk losing the AI itself in a few years. memoQ is especially well positioned in this field because of its advanced data management features (LiveDocs, the corpus memory; TM+; and terminology management, all in one place). All this means that in the future, it might become even more important for an organization to have an enterprise-level TMS, as well as people who know how to translate and can exercise oversight over any AI systems that might be part of the localization process.
In the context of environmental sustainability, how might technology adapt to reduce its carbon footprint and contribute to more eco-friendly practices?
An organization like memoQ is extremely focused on optimizing its use of AI and computational resources in general. Users of AI can be eco-friendly if they understand when to use and when not to use a specific piece of technology, LLMs in particular. In the past, we talked about a concept called “minimum automation”: the principle of using the least resource-intensive means to achieve an automation goal. Here’s where education and information play a crucial role. An independent software vendor and an integrator must understand both when they need to throw an LLM at a problem and when the problem can be solved with much simpler means.
At the same time, a service provider like Microsoft has the power to work on the resource consumption of the technologies that have the highest environmental impact. Hardware makers can also contribute by reducing their dependence on rare materials and by creating devices that demand less energy.
If you would like to read more about the memoQ – Microsoft partnership, visit the Microsoft Customer Story page.