A study of a health care risk-prediction algorithm used on more than 200 million people in U.S. hospitals found significant racial bias favoring white patients over Black patients. This is just one example of the faulty use of artificial intelligence (AI). According to the IBM Global AI Adoption Index, 35% of companies are using AI, and another 42% are exploring it.
Companies use AI for everything from process automation to facial recognition and language translation. It predicts who to hire, who to lend to, how much to charge for insurance, what content to show, and which defendants are likely to be repeat offenders, among many other applications.
Importance of Ethical AI
Experts have expressed concern about the ethical use of AI, and the U.S. government has proposed an AI Bill of Rights. An IBM survey revealed that 85% of consumers want companies to consider ethics in their AI projects. That’s why Forrester predicts the market for responsible AI will double in 2022.
Ethical or responsible AI is a set of principles, guidelines, and techniques for implementing AI that upholds fundamental values, such as human dignity, fairness, and positive impact on all stakeholders. The purpose is to mitigate biases, ensure appropriate representation, and protect data privacy and security.
Organizations planning and deploying AI projects benefit from practicing ethical AI in several ways:
- Protecting brand reputation. Beyond mere embarrassment over wrong AI predictions, algorithms that result in racial, gender, or income discrimination can create a public relations nightmare. In AI-enabled language translation, offensive words or imagery can tarnish your brand reputation.
- Preventing legal action. Organizations can avoid legal issues and regulatory fines that could arise from problems with AI implementation, including issues of safety and effectiveness, data protection and privacy, and algorithmic fairness and bias.
- Building trust. Practicing ethical AI is not just a defensive measure. It shows customers and other stakeholders your organization cares about their safety and privacy and wants to provide fair and responsible access to your products or services.
Causes of Ethical Problems in AI
Ethical problems, particularly biases, in AI arise because of two things: the algorithm and the data.
AI Model Bias
Human biases can be hard-coded into AI algorithms. AI developers have their own cognitive biases, and the profession remains predominantly white and male. Algorithmic bias appears as repeatable, systematic errors in a machine learning system’s output. How the system is designed, how problems or scenarios are framed, and how the training data is labeled and used can all contribute to the problem.
Data Bias
AI models behind use cases such as language translation, prediction, recommendation, and conversational AI all depend on training data. That data can skew toward specific groups, leading to biases based on age, race, gender, sexual orientation, or other characteristics, and the resulting output can be discriminatory, offensive, and harmful.
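To make these two causes concrete, here is a minimal audit sketch in Python showing the kind of check that surfaces both problems: how well each group is represented in a dataset, and whether a model’s error rate differs across groups. The column names and toy data are hypothetical, purely for illustration.

```python
# A minimal, hypothetical bias-audit sketch: check group representation
# in a dataset and compare a model's error rate across groups.
# Column names ("group", "label", "prediction") are illustrative only.
import pandas as pd

# Toy stand-in for a real table of model predictions.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B"],
    "label":      [1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 1, 0, 0, 1],
})

# 1. Representation: a heavily skewed split is an early warning sign.
print(df["group"].value_counts(normalize=True))

# 2. Per-group error rate: a large gap suggests the model (or its
#    training data) treats one group systematically worse.
errors = df["label"] != df["prediction"]
print(errors.groupby(df["group"]).mean())
```

In this toy example, group B makes up only a third of the data and receives a 100% error rate, exactly the pattern a representation and fairness audit is designed to catch early.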
AI Ethics in Multilingual Content
AI is increasingly being used to generate content for multilingual audiences. Neural machine translation (NMT), natural language processing (NLP), and machine learning (ML) allow organizations to rapidly translate content and enable real-time conversations through AI-powered chatbots. The downside is that certain words, once translated into another language, can be culturally inappropriate, offensive, or discriminatory.
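As an illustration of the translation step itself, here is a minimal NMT sketch using the Hugging Face transformers library with one of the publicly available Helsinki-NLP Opus-MT models. The specific model and example sentence are assumptions for illustration, not something the article prescribes.

```python
# A minimal NMT sketch, assuming the Hugging Face transformers library
# and the public Helsinki-NLP/opus-mt-en-de English-to-German model.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a short batch of source segments.
text = ["Welcome to our support portal."]
batch = tokenizer(text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

Speed is exactly why ethics matters here: a pipeline like this can translate thousands of segments per minute, so an inappropriate rendering can reach an audience before any human reviews it.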
As such, AI ethics plays a critical role in translating multilingual content. Here are some of the ways it can be applied to language services.
Identifying Social Bias
Microsoft learned the hard way about the importance of ensuring ethical AI. In 2016, it released its experimental AI Twitter bot named Tay, which quickly began tweeting offensive, racist, and misogynistic remarks.
While there is no such thing as zero risk, practicing ethical AI can diminish potential social biases when developing multilingual content and conversational AI applications. For example, you can detect inappropriate content, such as hate speech and offensive language, in text across different languages and receive immediate alerts.
MT engines can be trained to handle gender-related translation issues and errors. AI components can detect non-inclusive language and offensive terms in language-specific engine training data, recommend replacements, and provide synonym definitions for more accurate understanding.
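A simplified sketch of the detection-and-replacement idea: scan training segments against a per-language list of flagged terms and suggest inclusive alternatives. The term lists here are tiny, hypothetical examples; production systems rely on far richer linguistic resources and context-aware models rather than simple lookups.

```python
# A minimal sketch of flagging non-inclusive terms in training data
# and suggesting replacements. Term lists are hypothetical examples.
import re

REPLACEMENTS = {
    "en": {"chairman": "chairperson", "manpower": "workforce"},
    "es": {"los usuarios": "las personas usuarias"},
}

def flag_segment(text: str, lang: str):
    """Return (term, suggestion) pairs found in a training segment."""
    hits = []
    for term, suggestion in REPLACEMENTS.get(lang, {}).items():
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            hits.append((term, suggestion))
    return hits

segments = [("The chairman approved extra manpower.", "en")]
for text, lang in segments:
    for term, suggestion in flag_segment(text, lang):
        print(f"[{lang}] flagged '{term}' -> consider '{suggestion}'")
```

The same flagging loop can feed an alerting system, so reviewers are notified the moment problematic language enters the training data rather than after an engine has been retrained on it.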
Monitoring Issues in User-Generated Social Media Content
Use AI to monitor user-generated content (UGC), such as customer reviews, social media posts and comments, and forum discussion threads. AI technology can automatically scan phrases for misinformation, offensive language, and social bias. It can also check for unintentional bias in corporate or customer-facing content, such as advertisements, brochures, and websites.
You can then incorporate the results into your training data for AI applications to identify discriminatory behavior in your target markets, conduct foreign language sentiment analysis, and adapt future marketing messaging and content for better engagement.
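For the sentiment-analysis step, here is a minimal sketch using the Hugging Face transformers pipeline with a multilingual sentiment model. The model name is one publicly available option chosen for illustration, not an endorsement of a specific tool.

```python
# A minimal sketch of multilingual sentiment analysis on UGC,
# assuming the transformers library. The model name is one publicly
# available multilingual sentiment model, used here for illustration.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

reviews = [
    "The product arrived late and support was unhelpful.",
    "Servicio excelente, lo recomiendo.",  # Spanish: "Excellent service"
]
for review, result in zip(reviews, classifier(reviews)):
    print(result["label"], round(result["score"], 3), "-", review)
```

Because the model is multilingual, the same pipeline scores reviews in different languages without a separate translation pass, which makes it practical for monitoring sentiment across all of your target markets at once.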
About AI at Welocalize
Ethical AI is critical for global brands that require multilingual content and chatbots. But AI is only as good, fair, and responsible as the model and the training data used to build it. That’s why working with a language services partner experienced in AI deployment is essential.
Welocalize combines natural language processing with machine learning to enhance your AI applications. For more information about Welocalize AI solutions, connect with us here.