Here’s what people are saying about OpenAI’s ChatGPT

OpenAI, the San Francisco-based artificial intelligence laboratory behind the large language model (LLM) GPT-3 and the image generator DALL-E, launched a new tool on Wednesday: ChatGPT.

ChatGPT has already taken the internet by storm, reaching a total of one million users just a couple of days after its launch. Freely available on OpenAI’s website, the tool is a chatbot that can understand and respond naturally to user prompts. While ChatGPT is, admittedly, far from perfect, it is capable of generating human-like, thoughtful (though not always accurate) texts in response to user input, and has even been dubbed the “most incredible tech to emerge in the last decade.”

ChatGPT is based on the LLM GPT-3.5, a sort of intermediate between GPT-3 and OpenAI’s GPT-4, which is rumored to be coming out in the next couple of months. Users simply sign up with their email address and can then submit whatever prompt they can think of; ChatGPT is also capable of identifying whether or not a user’s prompt is offensive, and will flag certain answers or questions it deems inappropriate.

As OpenAI CEO Sam Altman wrote on Twitter, ChatGPT is still in its early demo stages. It’s likely that ChatGPT will be monetized sometime in the future, as Altman noted that “the compute costs are eye-watering.” 

Some users have posited that ChatGPT poses a major threat to Google’s search function — users can ask the chatbot any question they like, and ChatGPT will provide a coherent answer that’s a bit more digestible than some of the search results Google might direct you to. However, it’s important to note that just because ChatGPT’s responses are human-like doesn’t necessarily mean they’re factually correct.

For example, when a Mashable reporter asked ChatGPT what the largest country in Central America other than Mexico is, the chatbot incorrectly responded with Guatemala. In fact, the answer is Honduras. Similarly, when MultiLingual asked ChatGPT what language was most closely related to Hmong, ChatGPT incorrectly referred to Hmong as a language isolate.

Despite the factual inaccuracies, ChatGPT is certainly capable of producing human-like text. This has led some people to fear its potential for misuse, particularly in an educational setting. Some have proposed ways in which ChatGPT could change the way we teach students, while others have noted that university and high school instructors will likely not be able to tell some students’ writing from that produced by an LLM.

Of course, it remains to be seen whether or not this prediction will pan out. And while some users note that ChatGPT and other LLMs like it have a fairly elementary writing style, they still demonstrate a high level of linguistic proficiency — as Altman put it himself, “the field has a long way to go, and big ideas yet to discover. we will stumble along the way, and learn a lot from contact with reality. it will sometimes be messy; we will sometimes make really bad decisions. we will sometimes have moments of transcendent progress and value.”

Andrew Warner
Andrew Warner is a writer from Sacramento. He received his B.A. in linguistics and English from UCLA and is currently working toward an M.A. in applied linguistics at Columbia University. His writing has been published in Language Magazine, Sactown Magazine, and The Takeout.

