Timnit Gebru and The Problem with Large Language Models

In December of 2020, Google fired Timnit Gebru, a co-leader of its Ethical Artificial Intelligence team (Google continues to maintain that Gebru resigned, though even Google’s version of events leaves open the question of constructive discharge). In 2021, Google further signaled its stance on ethics research by firing Margaret Mitchell, a researcher studying whether unbiased artificial intelligence is possible. These departures sparked a significant backlash within the company, and at least two engineers quit their jobs in protest.

Instances like these prominently display Silicon Valley’s persistent difficulty in taking ethics seriously when developing AI and machine learning. These particular departures are of special interest in the language sphere, as Gebru was fired over her work on a paper entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”.
Gebru et al.’s paper originated as a study of whether a coherent, reliable language model can ever exist, given that every attempt so far has produced large and consequential mistakes.


For example, Microsoft was forced to take down its chatbot Tay in 2016 after it began sending misogynist and racist tweets within 16 hours of going online. Chatbots like Tay are trained on input data from the internet in the hope that they will produce content similar to what they receive. The problem is that this input data is riddled with biased and prejudiced content, so as the chatbot tries to mimic and fit in with other tweets, it begins sending out problematic content of its own.
These types of incidents led Gebru and other researchers to create a comprehensive paper outlining the risks of large AI models.

Their four proposed risks of current AI modeling are the following:

  1. The environmental and financial costs of training AI systems are very high
  2. The input data these models receive is itself problematic
  3. Studying large datasets thoroughly is very costly, yet researchers are reluctant to work with smaller ones
  4. AI can misinterpret the meanings of words and phrases (what Gebru et al. call “illusions of meaning”), and the fear is that the models will create and spread misinformation

These risks seem quite plausible, and the large-scale language models currently in use face all of these problems, so why fire an employee for contributing to this paper?

Google is currently a major user and developer of AI technology, and acting on a paper like this would require dramatic changes to its current processes. These issues don’t have as quick a fix as some may believe. For example, some argue that if chatbots are being offensive, then one should censor the input data the machines receive. In practice, this could mean removing the n-word from a chatbot’s input data so the model never learns to reproduce it.

However, this kind of solution falls short in actual practice (a sketch of why follows below): the chatbot would also lose instances where people use the n-word in reclaimed, colloquial contexts.
Clearly, the risks of large language models can’t be resolved easily, and perhaps Google tried to ignore the ethical problem instead of solving it.
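
To make this limitation concrete, here is a minimal, hypothetical sketch of the kind of blocklist filtering described above. It is an illustration written for this article, not anything Google or Gebru et al. actually describe, and the function and placeholder word list are invented for the example. The filter drops every training example that contains a listed word, so reclaimed, colloquial uses are discarded right along with abusive ones.

    # Hypothetical illustration only: a naive blocklist filter over chatbot training data.
    # Any example containing a listed word is dropped, regardless of context, so
    # reclaimed or in-group uses are discarded right along with abusive ones.

    BLOCKLIST = {"slur_a", "slur_b"}  # placeholder tokens standing in for real slurs

    def filter_training_data(examples):
        """Keep only the examples that contain no blocklisted word."""
        kept = []
        for text in examples:
            tokens = {token.lower().strip(".,!?") for token in text.split()}
            if tokens.isdisjoint(BLOCKLIST):
                kept.append(text)
        return kept

    # The filter cannot distinguish an abusive use from a reclaimed, colloquial one:
    # both contain the same token, so both are thrown away.
    sample = ["what a lovely morning",
              "you are a slur_a",
              "my friends and I call each other slur_a"]
    print(filter_training_data(sample))  # only the first example survives

Because the filter operates on words rather than meaning, it cannot tell hostility apart from reclamation; context-blind censorship of training data erases entire communities’ ways of speaking along with the abuse it targets.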

It is also possible that Google suppressed Gebru et al.’s paper because it brings up flaws in language modeling that are having immediate and drastic real-world effects. One of the starker examples the paper gives is of Israeli police arresting a Palestinian man after a language model mistranslated his Facebook post: the model rendered his Arabic “good morning” as “attack them” in Hebrew and “hurt them” in English. Mistranslations like these have dramatic, harmful effects on people, and Gebru et al.’s paper shines a spotlight on them. By trying to halt the publication of this paper, technological institutions insinuate that examples like these should not be brought to wider public knowledge.

Timnit Gebru’s treatment stands as an example of a larger problem: technology corporations’ deprioritization of (or sometimes outright disregard for) ethical considerations. It would seem to prove that companies would rather advance technology for its own sake than for the benefit of the people using it.

The final draft of Gebru et al.’s paper was eventually published at the same conference to which it was submitted.

Editor’s note: this article has been updated from an earlier version that incorrectly stated that Gebru et al.’s final draft remained unpublished.

Jasleen Pelia-Lutzker
Jasleen Pelia-Lutzker is an undergraduate student at the University of Edinburgh pursuing a degree in philosophy and linguistics with honors. She has a particular interest in bilingual language acquisition and the evolution of languages in migrant communities. A California native, she is fluent in French, English, and Hindi.
