The GPT-3 AI Language Model Might Be on Your Newsfeed

OpenAI released GPT-3 privately for beta testing in July. On one beta-test blog, the AI language model has already drawn tens of thousands of views and dozens of subscribers from unwitting readers.

When OpenAI released its GPT-3 AI language model for beta testing last month, the San Francisco-based AI research and deployment company knew that problems might arise. After all, GPT-2 had generated racist, homophobic, and misogynistic language, and GPT-3 could easily fall into similar patterns.

To prepare for such outcomes, OpenAI launched the language model in a limited beta to curb its capacity to stray into problematic territory. Although the company released the beta primarily to university and industry researchers, one computer science major at the University of California, Berkeley reached out to a PhD candidate to request access to GPT-3.

Once the graduate student agreed to collaborate, the undergraduate, Liam Porr, wrote a script for him to run. The script gave GPT-3 a headline and introduction for a blog post and prompted it to generate several complete versions. Porr then created Adolos, a blog he would use to test his hypothesis that the AI could convince an audience its posts were written by a human.

Porr's own contribution was minimal: he wrote a title and introduction, chose a photo, and copy-pasted one of the outputs with little to no editing. After two weeks, the blog had over 26,000 visitors and 60 subscribers, and one post even reached the number-one spot on Hacker News. While a few readers suspected the posts had been written by GPT-3, many of those comments were down-voted by other community members.

One trick Porr discovered for making the algorithm read more convincingly human was choosing the right subject matter. Although GPT-3 is a far larger language model than GPT-2, it still struggles to produce rational, logical arguments. Indeed, even OpenAI's first use of the model was to write a poem about former board member Elon Musk.

Looking for subject matter that relies on emotional, creative language rather than strict logic, Porr settled on productivity and self-help. He then combed through Medium and Hacker News articles to emulate titles on those subjects and let the AI loose.

After concluding the two-week experiment, Porr wrote a post on his blog, this time without GPT-3's help, discussing his findings and the implications of OpenAI's newest language model. Given GPT-3's promising efficiency, Porr argues, the model could have a major impact on the future of online media. The experiment arrives amid a global discussion of the ethics of AI.

MultiLingual Staff
MultiLingual creates go-to news and resources for language industry professionals.
