A Ryan Gosling meme with a caption that reads 'Hey girl, I want to spend the entire day decoding your face.'


Let's try an experiment. Stand in front of a mirror, and look closely at your face. Smile. Physically, how has that face in the mirror changed?

Now frown. How has the face in the mirror changed now? What looks different from when you were smiling? How is it different from when you were simply standing with a neutral expression?

To us as people, the expressions we see on a face are intuitive. You can automatically tell whether someone is smiling or frowning (or showing many other emotions) simply by looking at their face. But once again, I ask you: how?

Our face is the most complex and individualized part of our body. To quote Wikipedia, "the face is a central organ of sense and is also central in the expression of emotion among humans". It goes on to add, "the face is crucial for human identity, and damage such as scarring or developmental deformities have effects stretching beyond those of solely physical inconvenience".

Every face is different in some way - even identical twins have some distinguishing features.

Every child yet to be born will have a face that differs from that of every person who has ever lived. That is a mind-boggling number of different faces!

There is one part of our brain, known as the fusiform face area (FFA), that enables us to recognize faces. People with damage to their FFA can struggle to recognize even close family members.

Of course, as faces move they look different. The face you see in the mirror smiling back at you looks different from the face frowning at you, and from the face gurning to make a point. Assuming that your FFA is fully functioning, you will still recognize those faces as belonging to the same person - you, if you're still staring in the mirror!


What is FACS?


Scientists have tried to look at the concept of facial recognition, facial movement and indeed emotion-depiction, from a clinical, logical perspective. They have tried to map our faces in some clear-cut, defined way.

One system that analyses the complex movements of the face is the Facial Action Coding System (FACS). FACS grew out of work by Swedish anatomist Carl-Herman Hjortsjö in the late 1960s and was developed and popularised by Paul Ekman and Wallace Friesen in the late 1970s as part of their research into emotion. Joseph C. Hager joined them for the most recent revision of the system, published in 2002.

FACS is a system for codifying movements of the facial muscles. Each movement of a particular muscle corresponds to a change in the FACS code and, in turn, to a visible change in the face's appearance.

Of course, our face movements are complex. They are constantly changing, often at high speed. We even make microexpressions, where our faces change for a fraction of a second, usually as a reaction to something.

Measuring these changes is challenging - well beyond the abilities of a human observer working in real time. Increasingly, FACS measurement is carried out by computer-automated systems.


How is FACS used in the real world?


The FACS system breaks every part of the face down into its possible movements. Each movement is given an action unit (AU) number. For instance, an inner brow raise is AU1, a cheek raiser is AU6, and a lip corner puller is AU12. There are around 46 basic AUs and roughly 100 codes in total.
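To make the numbering concrete, here is a minimal sketch in Python of an AU lookup table. The AU names are the ones mentioned in this article; the table is illustrative, not the full set of ~46 basic AUs, and the function name is my own.

```python
# A small, illustrative subset of FACS action units (AUs).
# The full system defines roughly 46 basic AUs; only a few are shown here.
ACTION_UNITS = {
    1: "Inner Brow Raiser",
    4: "Brow Lowerer",
    6: "Cheek Raiser",
    12: "Lip Corner Puller",
    15: "Lip Corner Depressor",
}

def describe_au(au_number: int) -> str:
    """Return a human-readable name for an AU number."""
    return ACTION_UNITS.get(au_number, f"AU{au_number} (not in this subset)")

print(describe_au(12))  # Lip Corner Puller
```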

Facial expressions are often made up of combinations of these AUs. The combination of AU6 and AU12 together represents a sincere, involuntary smile (the so-called Duchenne smile).

The system records differing levels of intensity with a letter (A-E). These range from a trace of the action occurring (A) up to an extreme or maximum movement of the muscle (E). An extreme inner brow raise, for instance, is coded 1E.
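Combining the AU number with an intensity letter gives compact codes like 1E or 12B. A hypothetical parser for that notation, following the description above (the function name and numeric scale are my own), might look like this:

```python
# Map FACS intensity letters to a rough numeric scale (A = trace ... E = maximum).
INTENSITY = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

def parse_facs_code(code: str) -> tuple[int, int]:
    """Parse a code such as '1E' or 'AU12B' into (AU number, intensity 1-5).

    A code with no trailing letter is treated as unspecified intensity (0).
    """
    code = code.strip().upper().removeprefix("AU")
    if code and code[-1] in INTENSITY:
        return int(code[:-1]), INTENSITY[code[-1]]
    return int(code), 0

print(parse_facs_code("1E"))   # (1, 5): extreme inner brow raise
print(parse_facs_code("12B"))  # (12, 2): slight lip corner pull
```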

Together, this provides an automatable model of the mechanics of a face. It has made life much easier for animators and creators of artificial intelligence, because it offers a standardized, programmable way to replicate a face's movements.

Animators simply need to create a script to depict the movement of each AU. Once they have this series of movements, they can join them together to create fluid, realistic facial animations of their characters.
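As a sketch of what scripting an AU over time might look like, here is a toy keyframe model of my own invention (real animation rigs are far richer): an AU's intensity is interpolated between (time, intensity) keyframes.

```python
def au_intensity_at(keyframes, t):
    """Linearly interpolate an AU's intensity (0.0-1.0) at time t,
    given (time, intensity) keyframes sorted by time."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# AU12 (lip corner puller) easing into a full smile over one second.
smile = [(0.0, 0.0), (1.0, 1.0)]
print(au_intensity_at(smile, 0.5))  # 0.5
```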



FACS and Emotion


Although Paul Ekman may not be the founder of FACS, he is its champion. He adopted it as the foundation of his study into universally recognised facial expressions of emotion.

Ekman believed that there were six emotions that occurred consistently enough between people to be considered universal facial expressions of emotion: happiness, sadness, surprise, fear, anger and disgust. He later added contempt to the list.

FACS represents each of these emotions with a combination of different action units. Sadness, for instance, is a combination of AU1 (inner brow raiser), AU4 (brow lowerer), and AU15 (lip corner depressor).
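These AU combinations can be sketched as a simple lookup: if a face's detected AUs contain all the AUs in an expression's prototype, that expression is a candidate match. This is a toy illustration of the idea, not the Kairos API or Ekman's full coding rules; the names and function are my own.

```python
# Prototype AU combinations for the two expressions mentioned in this article.
EXPRESSIONS = {
    "happiness": {6, 12},     # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},    # inner brow raiser + brow lowerer + lip corner depressor
}

def candidate_expressions(detected_aus: set[int]) -> list[str]:
    """Return expressions whose prototype AUs are all present in detected_aus."""
    return [name for name, aus in EXPRESSIONS.items() if aus <= detected_aus]

print(candidate_expressions({1, 4, 15}))   # ['sadness']
print(candidate_expressions({6, 12, 25}))  # ['happiness']
```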


How Closely Should I Follow the FACS Model?


There is merit in understanding FACS as a model of the movement of the muscles in a face. A number of studies support its use as a descriptive system.

Yet, FACS is not universally followed. Even the developers of the Kairos Emotions Detection API were wary of following the model too closely. For instance, they felt it was debatable whether the stereotypical depictions for anger and disgust are that different from each other in practice.

The model does give us a good picture of how our face works. There is little dispute that people make these facial movements; the question is whether everybody makes the same movements in response to particular stimuli.

There is a complex 500-page manual available which explains what the facial action codes mean, as well as Ekman's theories on what combinations of AUs depict particular emotions.

In reality, it takes considerable experience to become a skilled FACS coder. You may know which muscle movements particular AU codes represent, but it is often difficult to recognize the changes in practice: we don't see the muscles themselves move, because they are covered by skin, and a number of the codes represent similar-looking movements.

The Facial Action Coding System does seem to provide many benefits for animators. They can save themselves many hours of work, with a common reference point for movements that they can use to put together particular expressions.

I am not so certain that there is as much benefit for those trying to determine emotion from people's faces. People do not appear to be quite as universal as Ekman would have hoped. FACS is useful, but it is only one tool in the emotion detection tool bag.


