NeuralSpace completes seed round with $2.8 million

NeuralSpace, a natural language processing (NLP) startup based in London, announced earlier this month that it’s completed its seed round of funding with $2.8 million.

The startup specializes in the development of voice technology for low-resource (or locally spoken) languages. Earlier this year, the company launched a self-serve toolkit capable of language understanding in more than 90 languages, automatic language detection, and automatic data set conversion, among other features. With its first round of investments finished, the startup notes that it hopes to develop improved speech-to-text and text-to-speech models for low-resource languages.

“The latest funding will be used to double down on the voice artificial intelligence (AI) development at NeuralSpace, which will include building models for mixing languages (such as Arabic-English, Chinese-English, Spanish-English or Hindi-English), and significantly increasing the accuracy of automatic speech recognition (ASR) models in locally spoken languages, compared to current market leaders,” reads a Sept. 21 blog post from the company.

In January 2022, the company launched the aforementioned self-serve toolkit, the NeuralSpace Platform. The platform features a no-code workspace that allows users to develop and improve custom models for machine translation, transliteration, speech-to-text, and more.

“Using the NeuralSpace Platform does not require any machine learning expertise, and all that is needed is a handful of data to train and continuously improve each user’s custom models,” the blog post reads.

NeuralSpace says its text AI models have garnered attention in the Middle East and South Asia, and it hopes to focus on developing equally robust voice technology over the course of the next year or so. One of the company’s primary goals is to improve the quality of this type of technology for low-resource languages, where language technology often lags behind that of more widely spoken, high-resource languages like English and Mandarin Chinese.

“Training latest deep learning models accurately on very small data sets, which any local, low-resource language usually suffers from, is a very challenging task because models are usually designed to perform well in English with terabytes of available text and speech data but not in languages with a few hundred megabytes of data,” said NeuralSpace co-founder and CEO Felix Laumann.

Andrew Warner
Andrew Warner is a writer from Sacramento. He received his B.A. in linguistics and English from UCLA and is currently working toward an M.A. in applied linguistics at Columbia University. His writing has been published in Language Magazine, Sactown Magazine, and The Takeout.

MultiLingual Media LLC