— Supported by LILT —

An AI Vision For Localization

Spence Green
Stanford University

My faith and my relationships with my family are the most important things in my life.

I’m an avid scuba diver but don’t speak German. Even so, I appeared in a feature about technical diving on ProSieben, a German TV channel comparable to Discovery.

The business world can’t get enough of artificial intelligence (AI) today. But over a decade ago, the technologies of tomorrow were the domain of researchers and scientists. One such scientist was Spence Green, then a PhD student, who met John DeNero while working on Google Translate. The two formed a vision of what the future might look like: a world unbound by language barriers.

That vision has yet to be realized. But since LILT’s founding in 2015, it’s closer to being within reach than ever before. Green took the time to share how LILT came to be, where it is today, and what the coming years could bring. After all, the world has no shortage of problems, and sometimes, the only ingredient that’s needed for a solution is the right tool.

Can you outline your vision for LILT, and how have you worked to achieve that vision since the company’s founding in 2015?

[LILT Chief Scientist John DeNero] and I met while working on Google Translate in 2011. I was his intern as a PhD student at the Stanford AI Lab. We’d both gone to grad school to work on AI-based approaches to language translation because we believed that this technology could have a profound impact on society. Even by 2011, Google Translate had revolutionized consumer access to information: No one was using phrase books when they traveled anymore. They used their phones instead.

We wondered why more products and services weren’t available in all languages. So we talked to the Google localization team and learned, to our great surprise, that they didn’t use Google Translate at all. They didn’t even use it in a post-editing capacity. We were shocked.

So the original vision for LILT was simple: use AI to make all products and services available in all the world’s languages.


The key problem that prevented mass adoption in the enterprise sector was that AI systems didn’t adapt to the business context. In those days, there were engine customization services, but they were expensive and required a significant amount of data and expertise to use. We realized that if the AI system could simply learn from a professional linguist while they worked, then the AI could both help the linguist and become much more accurate for fully automatic use cases like support pages.

So we started a project at Stanford on real-time, contextual adaptation of AI systems. Researchers had worked on this idea unsuccessfully since the late 1960s, but we got it to work in 2014.

LILT came to the market as a translator/LSP software product in 2015 and is now an enterprise LSP with an AI technology suite. Why did you pivot?

We are research scientists, and we wanted to build a technology company. We thought that if we gave state-of-the-art AI to LSPs, then they could transform the economics of their businesses. It was very easy to show that accelerating the translation process was a huge lever for gross margins, which is the key business metric for most LSPs.

Our first few LSP customers showed extraordinary results, including a very large project that we did with an early LSP partner for GetYourGuide, who presented the case study at LocWorld 2016. The case study showed translation throughput about five times the industry rule of thumb. The LSP chose LILT because the scale, budget constraints, and timeline were impossible to achieve with conventional localization processes. We exceeded all of the project objectives.

Then the reality of this industry set in. Technology is usually dictated to LSPs by their customers, so they struggle to standardize on a particular toolset. They don’t have loads of cash to invest in R&D and technology enablement. Many translators were refusing to use AI in those days, and the LSPs had neither the time nor the expertise to train the translators. The problems went on and on.

We tried selling the technology to enterprise customers and having them dictate it to their LSPs. We’d show the agility and efficiency gains to the customer, and they would order a proof of concept with their LSP. But there was a conflict of interest: the customer wanted the LSP to use the technology to reduce prices and turnaround times. The LSPs didn’t want that, so they’d ultimately sabotage the proofs of concept.

The nadir came in the summer of 2017 when SDL sued us, claiming they had invented real-time adaptation. They were poorly read and didn’t know the literature. Big companies do this to kill innovation.

We prevailed. But by late 2017, we were exhausted and short of cash. LILT was going to die. The only path forward was to become an end-to-end enterprise solution so we didn’t have to deal with the LSPs. So that’s what we did. In early 2018, our first few enterprise customers signed, and we started growing quickly.

Many localization buyers are skeptical of “end-to-end” or “all-in-one” localization vendors. Why do you think that is, and what would you say to them?

Historically, some LSPs used a TMS to lock customers into their services. Customers found that they lost out twice: TMS innovation tended to be slow, and it became very costly to switch service providers if quality degraded. This created an opening around 2010 for a new wave of TMS vendors who pitched themselves as “vendor neutral.”

I also think that localization leaders tend to want control over their programs. Owning their TMS, and making it vendor neutral, is a strategy to assert this control. However, I think that localization leaders may overlook the bind that this puts TMS providers in. On one side of the platform, they must build enterprise features, and on the other side, they must support LSPs. These are very different user personas requiring different features and different technical support. TMSs that start lean and focused eventually bloat, making room for the next TMS-du-jour, which will eventually bloat in turn. The cycle simply repeats itself.

I think that a better model in modern technology is Amazon Web Services (AWS). AWS offers a full portfolio of products and services designed with common principles, interfaces, and security infrastructure. The customer is free to choose from that portfolio and to incorporate technology from other providers, even competitors like Microsoft and Google. This is an extraordinarily empowering model for the customer that in fact gives them maximum control.

LILT’s model is to be a complete localization portfolio for the customer. We call it the Contextual AI Platform because it enables context-specific customization. If the customer wants to use a TMS or an AI model from another provider, then LILT enables them to do so. I’d argue that the localization leader has more control in the LILT model than in the TMS model because we’re building for one persona: the localization leader.

The localization industry is maturing. Part of this maturation is to move from idiosyncratic ways of working to models that work well in other areas of technology. I encourage localization leaders to take inspiration from the AWS model as they transform their programs.

How should localization leaders think about applying LILT’s broad portfolio?

Enterprise engineering and IT departments communicate in the language of use cases. A use case is a scenario for the application of a system. For example, our customer Canva has the use case of localizing visual design templates in all their supported languages. Michael Levot, who leads the team at Canva, thinks about applying tools to solve this use case.

I think that use cases should be the core of localization program design. Then, the leader can select from a portfolio of tools to solve that use case. This contrasts with the traditional localization approach of centering the program on the LSP+TMS choice.

LILT has a portfolio of tools for solving hundreds of use cases. And we also interoperate with other technologies to empower the leader to solve their use cases efficiently.

What is your vision for the future of the localization industry?

I think the goal of the next few years should be to make every product and service available in every language. The internet is this incredible distribution mechanism: Within a few weeks of formation, a business can serve customers in nearly every market on earth. That is a miracle.

About 20% of internet users are Chinese speakers, but just over 1% of the top 10 million websites are available in Chinese. Nearer to my heart, 8% of internet users are Arabic speakers, but only 0.5% of the top websites are available in that language.

How lucky are we to have AI systems and modern software to solve this problem! This accomplishment will be transformational for business and for society.

This is why we started LILT. The problem isn’t solved, but we now have the tools to solve it.

On the topic of the future, what are you most excited about on LILT’s roadmap?

In June, we launched LILT Create, which solves a use case like this: The Japanese marketing team is never satisfied with localized content. They always want changes and revisions, and that process is slow. We asked: what if we could use AI to generate new, contextual content in Japanese using historical localization data? We built it, and it cuts cycle time by over 90% for regional teams. The applications are endless, and we’re expanding to more use cases.

Before the end of the year, we’ll release V3 of our Contextual AI model. We just released V2 in March, so the pace of innovation is rapid.

We’ll expand Contextual AI to other modalities, namely some speech and video use cases.

I’m also really excited about building AI products based on the MQM standard. It seems like the industry is finally converging on MQM as the standard quality framework. As more leaders adopt MQM, we can build robust systems that enable more sophisticated workflow automation.

I’m so proud of our customers. The technology we developed in 2015 was fundamentally a generative AI system, and our first enterprise customers adopted it in 2018. Many of their peers thought they were nuts for taking a career risk like that.

Alessandra Binazzi set up her AI program at ASICS in the fall of 2018 to meet the tight seasonal shipping deadlines of the product catalog. Loïc Dufresne de Virel moved his entire program to generative AI in late 2019 to meet aggressive scalability targets. Sergey Parievsky deployed LILT at Juniper in early 2020 on highly technical content that didn’t seem like an obvious fit for AI. These are just a few of the pioneers who made generative AI the core of their programs long before ChatGPT. People at the major localization conferences have talked about AI for years, but very few were converting the talk into action. We owe the success of our company to the farsighted leadership of our early customers.

LILT has evolved dramatically over the years. How has your schedule changed, and what does an average day look like for you now? What are you focused on?

I want to emphasize that LILT was founded by and is still led by two AI researchers. I think that’s incredibly important for customers to consider when evaluating partners. John is still on the faculty at UC Berkeley, and he teaches the largest computer science course in the United States. We want to see everyone excited about AI and computer science, and customers have come to value the honest, objective opinions that they get from us.

John leads the research team, and I manage the business. That has always been our arrangement. My days start early as we have employees around the world, so I start meetings at 7 a.m. Right now, my main focus is on customers and supporting our customer-facing teams. That’s about a third of any given day. Another third is recruiting. I still interview every person who joins the company.

The final third is with our product and research team, defining the future of our platform. Product velocity is very high right now, and I’m ensuring that customer feedback is getting back to the team quickly and accurately. At around 6 p.m. every day, I run. I find that running is my best thinking time.

I’m also traveling a lot these days because we believe that localization is still a relationship business. Zoom is fine for maintaining relationships but harder for building new ones. I’m most inspired when I’m with our team, supporting customers in their workplaces.

What would you want everyone to know about LILT?

First, every single word that we’ve produced since our founding in 2015 has gone through an AI system. No one else has that much operational expertise with AI. Customers can rely on us for the full set of technology and services to solve most modern localization use cases.

Second, we have an amazing team. Within our company we have everyone from doctoral-level researchers to localization industry veterans to tech-company product designers. I think that customers may not always appreciate how cross-functional localization operations are. I’m incredibly inspired by the level of innovation that our teams deliver for our customers.




MultiLingual Media LLC