Can Neural Networks Really Understand Human Language?

Artificial intelligence (AI) researchers have long been preoccupied with resolving syntactic ambiguity in human language. In their recently released book Linguistics for the Age of AI, researchers Marjorie McShane and Sergei Nirenburg analyze some of the most pressing issues in the fields of AI and natural language processing (NLP). Earlier this month, TechTalks published a feature on the book and the ideas presented in it, highlighting some of the shortcomings of neural networks in NLP and natural language understanding (NLU).

“Conceptually and methodologically, the program of work is well advanced,” McShane told TechTalks. “The main barrier is the lack of resources being allotted to knowledge-based work in the current climate.”

In their book, McShane and Nirenburg discuss the distinction between knowledge-based and knowledge-lean systems extensively. The two represent different approaches to resolving syntactic and semantic ambiguity in NLP and NLU. Knowledge-based systems can be quite thorough and successful at explaining linguistic structures, but they require a significant amount of human input, since humans must engineer all of the lexical structures and software that make these systems work coherently. Knowledge-lean systems, on the other hand, are somewhat less thorough, but they can be developed far more efficiently because they are trained on large corpora of data.
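As a rough, hypothetical illustration of that contrast (the lexicon, rule, and tiny corpus below are invented for this sketch and are not drawn from McShane and Nirenburg's work), a knowledge-based disambiguator relies on a hand-written rule, while a knowledge-lean one relies only on counts gathered from text:

    # Illustrative sketch only: toy contrast between a hand-engineered
    # (knowledge-based) rule and a corpus-driven (knowledge-lean) tally.

    # Knowledge-based: a human-written rule says that "bank" near
    # money-related words means the financial institution.
    FINANCE_CUES = {"deposit", "loan", "account", "money"}

    def knowledge_based_sense(sentence: str) -> str:
        words = set(sentence.lower().split())
        return "bank (finance)" if words & FINANCE_CUES else "bank (river)"

    # Knowledge-lean: no rule at all, just co-occurrence counts
    # collected from a (made-up) corpus.
    corpus = [
        "she opened an account at the bank",
        "the bank approved the loan",
        "they fished from the bank of the river",
    ]

    def cooccurrence_counts(target: str) -> dict:
        counts = {}
        for sent in corpus:
            tokens = sent.split()
            if target in tokens:
                for tok in tokens:
                    if tok != target:
                        counts[tok] = counts.get(tok, 0) + 1
        return counts

    print(knowledge_based_sense("The bank approved the loan"))  # bank (finance)
    print(cooccurrence_counts("bank"))  # raw statistics, with no notion of meaning

The first function works only because a person encoded what "finance context" means; the second scales to any amount of text but never represents meaning at all, which is the trade-off the authors describe.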

According to TechTalks, machine learning algorithms, which fall into the category of knowledge-lean systems because they handle contextual problems and syntactic ambiguities by analyzing statistical relations, have been at the forefront of NLP and NLU research in recent years. However, they fail to produce truly human-like results because they rest on the statistical relations among the words of a given corpus rather than the actual meaning of those words.
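To make that limitation concrete, here is a minimal, invented illustration (mine, not the authors' or TechTalks'): a bag-of-words representation, which captures only word statistics, cannot tell apart two sentences that contain exactly the same words but mean very different things.

    # Illustrative sketch only: the sentences are invented for this example.
    from collections import Counter

    a = "the dog bit the man"
    b = "the man bit the dog"

    vec_a = Counter(a.split())
    vec_b = Counter(b.split())

    # Identical word counts, so a purely word-statistical comparison
    # sees no difference between the two meanings.
    print(vec_a == vec_b)  # True

Real systems use far richer statistics than raw word counts, but the underlying criticism is the same: the statistics are computed over word forms, not over what the words mean.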

“The statistical/machine learning approach does not attempt to compute meaning,” McShane said in the interview with TechTalks. “Instead, practitioners proceed as if words were a sufficient proxy for their meanings, which they are not. In fact, the words of a sentence are only the tip of the iceberg when it comes to the full, contextual meaning of sentences. Confusing words for meanings is as fraught an approach to AI as is sailing a ship toward an iceberg.”


Andrew Warner
Andrew Warner is a writer from Sacramento. He received his B.A. in linguistics and English from UCLA and is currently working toward an M.A. in applied linguistics at Columbia University. His writing has been published in Language Magazine, Sactown Magazine, and The Takeout.
