Recognition of posed and spontaneous facial expressions

Human facial expressions differ depending on whether they are acted out or spontaneous. Most people can spot a posed expression when it is exaggerated, and with some training it is possible to distinguish the two quite accurately. So, would it be possible to write a computer program that could do the same, a kind of lie detector?
I proposed this idea as the topic for the final project of my bachelor's to my professor, who was doing a lot of research in facial recognition. She was very enthusiastic about the idea, so I spent the following six months working on it.

First, we had to collect data, so we filmed a few fellow students. They were first instructed to pose expressions, and afterwards they watched several short clips on a TV, which we had selected to elicit emotions, while we filmed them. Next, we used a facial tracking system developed at the university to track facial features in the videos. From all the tracked points on the face we could deduce which parts of the face were activating and how quickly.
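I no longer have the original code, but the idea can be illustrated with a short sketch: given the (x, y) trajectory of every tracked point over the frames of a clip, simple per-point motion statistics tell you which regions of the face moved and how fast. The NumPy snippet below is a minimal reconstruction under my own assumptions; the array shape, function name, and the particular statistics are illustrative, not the actual output format of the tracker we used.

```python
import numpy as np

def motion_features(landmarks):
    """Summarise how strongly and how quickly each tracked point moves.

    landmarks: array of shape (n_frames, n_points, 2) holding the (x, y)
    position of every tracked facial point in every video frame.
    Returns one feature vector per clip: peak speed and total path
    length for each point.
    """
    # Frame-to-frame displacement of every point.
    deltas = np.diff(landmarks, axis=0)       # (n_frames - 1, n_points, 2)
    speeds = np.linalg.norm(deltas, axis=2)   # (n_frames - 1, n_points)

    peak_speed = speeds.max(axis=0)           # fastest motion per point
    total_path = speeds.sum(axis=0)           # overall amount of motion

    return np.concatenate([peak_speed, total_path])
```

Statistics like these capture the timing differences that are known to separate posed from spontaneous expressions, which is why speed, and not just displacement, matters here.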

After all this data was collected, we wrote a program to classify the expressions as either posed or genuine. The classifier was trained on the data for days, but in the end the classification rate, although better than chance, was not very high. Perhaps the model could have been tuned further, or a different classification method tried, but my project time was running short, so I had to wrap it up.
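For illustration, a minimal baseline for this kind of posed-versus-genuine classification might look like the scikit-learn pipeline below. This is a sketch under my own assumptions (an SVM with cross-validation), not the method actually used in the project.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate(features, labels):
    """Cross-validated posed-vs-spontaneous classification accuracy.

    features: (n_clips, n_features) motion features, one row per clip.
    labels:   1 for posed, 0 for spontaneous.
    """
    # Scaling matters for SVMs, since the motion features span
    # very different ranges across facial regions.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(model, features, labels, cv=5)
    return scores.mean()
```

With only a couple of filmed students, cross-validation like this is about the best one can do to estimate accuracy, and a small dataset is itself a plausible reason the classification rate stayed modest.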

The project made me realize just how difficult it can be to transfer knowledge to a computer, even when the task seems straightforward to humans. It also made me appreciate all the technology being developed to solve such problems. Finally, it is interesting to note how much we still need to learn about how we ourselves function before we can transfer that knowledge to computers.
