In his book Through the Looking-Glass, Humpty Dumpty scornfully declares, “When I use a word, it means just what I choose it to mean, neither more nor less.” Alice replies, “The question is whether you can make words mean so many different things.”
Working out what words really mean is anything but simple. The human mind must parse a web of detailed, flexible information and apply sophisticated common sense to grasp their meaning.
Now, a new question about the meaning of words has emerged: Can artificial intelligence understand words the way people do? A new study by researchers at UCLA, the Massachusetts Institute of Technology and the National Institutes of Health addresses this question.
The paper, published in the journal Nature Human Behaviour, reports that AI systems can indeed learn the meanings of very complex words, and that the scientists discovered a simple trick for extracting that complex knowledge. They found that the AI system they studied represents word meanings in a way that correlates closely with human judgment.
The artificial intelligence system the authors investigated has been widely used over the past decade to study word meaning. It learns the meanings of words by “reading” huge amounts of text on the internet, encompassing tens of billions of words.
When words occur together frequently, “table” and “chair,” for example, the system learns that their meanings are related. And when pairs of words occur together only rarely, like “table” and “planet,” it learns that their meanings are very different.
That approach may seem like a logical starting point, but consider how poorly humans would understand the world if the only way to learn meaning were to count how often words appear near one another, with no ability to interact with other people or with our environment.
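The counting idea itself is easy to make concrete. The toy sketch below (all sentences and counts are made up for illustration; the real system trains on tens of billions of words and learns dense vectors rather than raw counts) tallies how often word pairs appear in the same sentence:

```python
from collections import Counter
from itertools import combinations

# A tiny toy corpus (invented for illustration only).
sentences = [
    "the table stood next to the chair",
    "she pulled a chair up to the table",
    "the table and chair were wooden",
    "astronomers observed the distant planet",
    "the planet orbits a dim star",
]

# Count how often each pair of distinct words shares a sentence.
pair_counts = Counter()
for s in sentences:
    words = set(s.split())
    for a, b in combinations(sorted(words), 2):
        pair_counts[(a, b)] += 1

def cooccurrence(w1, w2):
    """Return how many sentences contain both words."""
    return pair_counts[tuple(sorted((w1, w2)))]

print(cooccurrence("table", "chair"))   # co-occur often -> related meanings
print(cooccurrence("table", "planet"))  # never co-occur -> unrelated
```

Systems like the one in the study start from statistics of this kind but compress them into numerical vectors, so that related words end up close together in a high-dimensional space.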
Idan Blank, a UCLA assistant professor of psychology and linguistics and co-author of the study, said the researchers set out to learn what the system knows about the words it learns, and what kind of “common sense” it has.
Before the research began, Blank said, the system appeared to have one major limitation: “As far as the system is concerned, any two words have only a single numerical value that represents how similar they are.”
In contrast, human knowledge is more detailed and complex.
“Consider our knowledge of dolphins and crocodiles,” Blank said. “When we compare the two on a scale of size, from ‘small’ to ‘large,’ they are relatively similar. In terms of their intelligence, they are somewhat different. In terms of the danger they pose to us, on a scale from ‘safe’ to ‘dangerous,’ they differ greatly. So the meaning of a word depends on context.
“We wanted to ask whether this system really knows these subtle differences – whether its idea of similarity is flexible in the same way for humans.”
To find out, the authors developed a technique they called “semantic projection.” One could draw a line between the model’s representations of the words “large” and “small,” for example, and see where the representations of the various animals fall on that line.
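A minimal sketch of that projection idea, using tiny hand-made vectors rather than real learned embeddings (the values below are invented for illustration; actual embeddings have hundreds of dimensions), might look like this:

```python
import numpy as np

# Toy 4-dimensional "word vectors" (hand-made; real embeddings are learned).
vectors = {
    "small":    np.array([ 1.0, 0.0, 0.2, 0.1]),
    "large":    np.array([-1.0, 0.0, 0.3, 0.1]),
    "mouse":    np.array([ 0.8, 0.5, 0.1, 0.3]),
    "dolphin":  np.array([-0.2, 0.6, 0.4, 0.2]),
    "elephant": np.array([-0.9, 0.4, 0.2, 0.3]),
}

def semantic_projection(word, low, high):
    """Project `word` onto the line running from `low` to `high`.

    Returns a scalar: larger values mean the word falls nearer the
    `high` end of the scale (e.g. nearer "large").
    """
    axis = vectors[high] - vectors[low]  # direction of the mental scale
    return float(np.dot(vectors[word], axis) / np.linalg.norm(axis))

animals = ["dolphin", "elephant", "mouse"]
by_size = sorted(animals, key=lambda w: semantic_projection(w, "small", "large"))
print(by_size)  # prints ['mouse', 'dolphin', 'elephant']
```

Swapping in a different pair of endpoint words, say “safe” and “dangerous,” would rank the same animals along an entirely different dimension, which is what makes the method context-sensitive.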
Using this method, the scientists studied 52 groups of words to see if the system could learn to sort out meanings — such as judging animals by their size or how dangerous they are to humans, or ranking US states by weather or overall wealth.
Among other word groups were terms related to clothing, occupations, sports, mythical creatures, and first names. Each category was assigned multiple contexts or dimensions – size, danger, intelligence, age, and speed, for example.
The researchers found that across those many objects and contexts, their method proved very similar to human intuition. (To make that comparison, the researchers also asked groups of 25 people each to make similar ratings for each of the 52 word groups.)
Remarkably, the system learned to recognize that the names “Betty” and “George” are similar in that they are relatively “old”, but represent different genders. And that “weightlifting” and “fencing” are similar in that both usually happen indoors, but they differ in terms of the amount of intelligence required.
“It’s simple, beautiful, and totally intuitive,” Blank said. “The line between ‘big’ and ‘small’ is like a mental scale, and we place animals along that scale.”
Blank said he actually didn’t expect the technique to work, and was delighted when it did.
“This machine learning system turns out to be smarter than we thought; it contains very complex forms of knowledge, and that knowledge is organized in a very intuitive structure,” he said. “Just by keeping track of which words co-occur with one another in language, you can learn a lot about the world.”
Reference: Grand G, Blank IA, Pereira F, Fedorenko E. Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nat Hum Behav. 2022:1-13. doi: 10.1038/s41562-022-01316-8
This article has been republished from the following materials. Note: the article may have been edited for length and content. For further information, please contact the cited source.