Facial perception involves more than just the features common to all human faces, like the placement of the mouth, nose, and eyes. Our brains might be evolutionarily attuned to those universal patterns, but reading social information requires determining whether someone is happy, angry, or sad, or whether they are paying attention to us. Alais’ group designed a sensory adaptation experiment and found that we do indeed process facial pareidolia in much the same way we process real faces, according to a paper published last year in the journal Psychological Science.

This latest study admittedly has a small sample size: 17 university students, all of whom completed practice trials with eight real faces and eight pareidolia images before the experiments. (The practice data were not recorded.) The actual experiments used 40 real faces and 40 pareidolia images, selected to span expressions from angry to happy across four categories: high angry, low angry, low happy, and high happy. During the experiments, subjects were briefly shown each image and then asked to rate its emotional expression on an angry/happy rating scale.

The first experiment was designed to test for serial effects. Subjects completed a sequence of 320 trials, with each of the images shown eight times in randomized order. Half of the subjects rated the real faces first and the pareidolia images second; the other half did the opposite. The second experiment was similar, except that real faces and pareidolia images were randomly intermixed within the same sequence of trials. Each participant rated a given image eight times, and those ratings were averaged into a mean estimate of the image’s expression.

“What we found was that actually these pareidolia images are processed by the same mechanism that would normally process emotion in a real face,” Alais told The Guardian. “You are somehow unable to totally turn off that face response and emotion response and see it as an object. It remains simultaneously an object and a face.”

Specifically, the results showed that subjects could reliably rate the pareidolia images for facial expression. The subjects also showed the same serial dependence bias as Tinder users or art gallery patrons: a happy or angry illusory face in an object is perceived as closer in expression to the face seen just before it. And when real faces and pareidolia images were mixed, as in the second experiment, the serial dependence was more pronounced when subjects viewed the pareidolia images before the human faces. Alais and his colleagues concluded that this points to a shared underlying mechanism: “expression processing is not tightly bound to human facial features,” they wrote.
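The paper’s statistics are more involved than this, but the logic of a serial dependence analysis is easy to simulate. The Python sketch below is purely illustrative; the rating scale, trial count, and size of the bias are assumptions made for the example, not values from the study. It generates ratings that are slightly pulled toward the previous stimulus, then recovers that pull by regressing each trial’s rating error on the expression of the preceding stimulus.

```python
import numpy as np

# Illustrative simulation of serial dependence in expression ratings.
# The scale, trial count, and bias strength are assumptions for this
# sketch, not values taken from Alais et al.
rng = np.random.default_rng(0)
n_trials = 320
expression = rng.uniform(-1, 1, n_trials)  # -1 = very angry, +1 = very happy
noise = rng.normal(0, 0.2, n_trials)

# Each rating is pulled a little toward the previous stimulus's expression.
pull = 0.15
ratings = np.empty(n_trials)
ratings[0] = expression[0] + noise[0]
ratings[1:] = (1 - pull) * expression[1:] + pull * expression[:-1] + noise[1:]

# Detect the bias: regress each trial's rating error on the previous
# stimulus. A positive slope means ratings are attracted toward the
# expression seen just before, which is the serial dependence effect.
errors = ratings - expression
slope, _ = np.polyfit(expression[:-1], errors[1:], 1)
print(f"serial-dependence slope: {slope:.3f}")  # ~ pull in this simulation
```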

“This ‘cross-over’ condition is important, as it shows that the same underlying facial expression process is involved, regardless of image type,” said Alais. “This means that seeing faces in clouds is more than a child’s fantasy. When objects look compellingly facelike, it is more than an interpretation: They really are driving your brain’s face-detection network. And that scowl or smile—that’s your brain’s facial expression system at work. For the brain, fake or real, faces are all processed the same way.”

This story originally appeared on Ars Technica.

