OpenAI Creates New Language-Generating AI

In its latest development of language-generating AI, OpenAI has created GPT-3, its largest language model to date. The system reflects some of the best and worst aspects of human language.

Language AI is breaking new ground. OpenAI has developed a new AI system called Generative Pre-trained Transformer 3, more commonly known as GPT-3. The new language-generating AI represents the San Francisco-based AI laboratory's latest development in its mission to steer the creation of intelligent machines.

The algorithm relies on a statistical model: the more text it is exposed to, the more human-like the language patterns it can produce. To train a large enough statistical language model, OpenAI drew on one of the biggest sets of text ever amassed, a mixture of books, Wikipedia articles, and billions of pages of text from across the internet.
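To give a concrete sense of how such a statistical language model turns training text into new text, here is a minimal, illustrative sketch (not from the original article). GPT-3 itself is only reachable through OpenAI's hosted API, so the sketch assumes the openly released GPT-2 model and the Hugging Face transformers library as stand-ins.

```python
# Illustrative sketch only: GPT-3 is not openly downloadable, so this uses the
# freely released GPT-2 model via the Hugging Face "transformers" library.
from transformers import pipeline

# Load a pretrained GPT-style language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt one token at a time, sampling each token from
# a probability distribution it learned from large amounts of training text.
output = generator("Language technology is changing because",
                   max_length=40, num_return_sequences=1)
print(output[0]["generated_text"])
```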

GPT-3's size is one of the factors that differentiate it from its predecessors. OpenAI gave the model 175 billion parameters (the weights of the connections between the network's nodes, and a rough proxy for the model's complexity), compared with GPT-2's 1.5 billion parameters and GPT-1's 117 million.
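For readers unfamiliar with the term, a "parameter" is simply one trainable weight in the network. The short sketch below is an illustration rather than part of the article: it counts the parameters of the openly released small GPT-2 checkpoint, since GPT-3's weights have not been published.

```python
# Illustration: counting a model's parameters (its trainable weights).
# GPT-3's weights are not public, so this counts the small GPT-2 checkpoint
# (~124 million parameters; the full GPT-2 described above has 1.5 billion,
# and GPT-3 has 175 billion).
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {total:,}")
```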

According to a graph published in The Economist, GPT-3's massive number of parameters makes distinguishing its AI-generated news articles from human-written ones nearly equivalent to a random guess. Furthermore, GPT-3 can even write poetry, such as this verse about Elon Musk:

“The SEC said, ‘Musk,/your tweets are a blight./They really could cost you your job,/if you don’t stop/all this tweeting at night.’/…Then Musk cried, ‘Why?/The tweets I wrote are not mean,/I don’t use all-caps/and I’m sure that my tweets are clean.’/’But your tweets can move markets/and that’s why we’re sore./You may be a genius/and a billionaire,/but that doesn’t give you the right to be a bore!’”

This development fulfills several tech predictions from recent years and signals promising advances in AI language modeling, but it is not without the flaws that have perpetually marred both AI image generation and AI text generation. Despite GPT-3's grammatically fluent output, its statistical word-matching does not reflect any understanding of the world.

Melanie Mitchell, a computer scientist at the Santa Fe Institute, said that the text generated by GPT-3 “doesn’t have any internal model of the world — or any world — and so it can’t do reasoning that requires such a model.”

The result has led GPT-3 into the same pitfalls discovered in GPT-1 and GPT-2, namely the model's inability to filter out language promoting racism, anti-Semitism, misogyny, homophobia, or any other oppressive language that it finds in its sources. OpenAI even added a filter to GPT-2 to disguise the problem of mimicked bigotry by limiting the model's ability to talk about sensitive subjects. However, the issue still poses a risk to GPT-3, which has already reproduced prejudiced text.

To work around the problem, OpenAI has added a filter to a newer version of GPT-3, but the fix may just be a band-aid at this point. Still, given the rapid pace at which new language models are being developed, GPT-3 will likely soon be replaced by a version with an even larger scale and, perhaps, some power of discernment.

Jonathan Pyner
Jonathan Pyner is a poet, freelance writer, and translator. He has worked as an educator for nearly a decade in the US and Taiwan, and he recently completed a master of fine arts in creative writing.
