Why Robots Are More Skilled At Detecting Real Pain Than Humans
Computer systems can detect tiny nuances in facial expressions that people miss.
The study, titled “Automatic Decoding of Deceptive Pain Expressions,” was published in Current Biology. Marian Bartlett, research professor at UC San Diego’s Institute for Neural Computation, was the lead author of the study, and Kang Lee, professor at the Dr. Eric Jackman Institute of Child Study at the University of Toronto, was the senior author.
According to Bartlett, computers are able to detect distinct features of facial expressions that people usually miss, making them better at differentiating between fake and real pain.
During the study, the computer-vision system achieved an 85% accuracy rate, compared with the 55% accuracy rate of human participants.
In highly social species such as humans, faces have evolved to convey rich information, including expressions of emotion and pain. And, because of the way our brains are built, people can simulate emotions they are not actually experiencing, so successfully that they fool other people. The computer is much better at spotting the subtle differences between involuntary and voluntary facial movements.
The study also found that the mouth was the main predictive feature of fake expressions. Participants who faked pain tended to open their mouths too regularly, with less variation in timing. Further research would help determine whether over-regularity is a general feature of faked expressions.
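To make the regularity idea concrete, here is a minimal illustrative sketch, not the study's actual pipeline: it scores a sequence of mouth-opening durations by its coefficient of variation, so a lower score means more uniform openings, the pattern the study associated with faked pain. The function name, the data, and the threshold-free comparison are all hypothetical.

```python
# Illustrative sketch (hypothetical, not the published method): measure how
# regular a sequence of mouth-opening durations is. The study's finding is
# that faked pain expressions show less variation in mouth-opening timing.
from statistics import mean, stdev

def regularity_score(durations):
    """Coefficient of variation of mouth-opening durations (seconds).
    Lower values indicate more regular (more uniform) openings."""
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    return stdev(durations) / mean(durations)

# Hypothetical data: genuine pain varies; faked pain is suspiciously uniform.
genuine = [0.42, 0.91, 0.35, 1.20, 0.55]
faked = [0.60, 0.62, 0.58, 0.61, 0.59]

print(regularity_score(genuine) > regularity_score(faked))  # genuine varies more
```

A real system would extract these durations from tracked facial action units over video frames; the sketch only shows the statistical intuition behind the finding.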
Bartlett adds that, aside from detecting fake pain, the computer-vision system could also be designed to detect other types of deception and may be useful in areas such as homeland security, psychopathology, job screening, medicine, and law. It could also be applied to assessing a person’s health and physiological or emotional state.