
Emotion Recognition

Humans express emotions in many ways: gestures, body language, intonation, etc. The main method, however, is of course facial expressions: a frown, a wrinkled nose, a bright smile – these can tell us a lot about their owner's state of mind and constitute one of the pillars of nonverbal communication. From the first time we opened our eyes, each of us has been learning to recognize these patterns (essentially intuitive behavior analysis), and it eventually became second nature to us.
Now that we know that AI (neural networks in particular) can also learn, an inevitable question arises: can it learn to interpret human emotions? And can it learn to do so faster than we did? There is no doubt that the answer is yes, although fully accurate results will remain out of reach for a while: as mentioned above, facial expressions are not the only way to express emotions, so analyzing the face alone cannot achieve 100% accuracy.

The system includes the module Face Detector (Emotions) that works in conjunction with Object Detector. It checks the picture for faces and attempts to gauge the overall state of mind of each of them using 7 measures:

  • happy

  • surprised

  • angry

  • disgust

  • afraid

  • sad

  • neutral

Each of these is present in the detected face to a different degree, and a percentage (0-100) is allocated accordingly. The highest of the seven scores is considered the dominant emotion. All seven values are displayed on the screen, however, so the lower percentages are visible as well. Here are some examples:

emotion_recognition_example1.jpg

The mood has been identified as Surprised. Notice that the second highest score comes from Afraid (23%) – this is quite common, as the facial expressions for these two emotional states have plenty in common. The detector can tell the difference, however.
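To make the scoring concrete, here is a minimal Python sketch of how a dominant emotion can be derived from the seven percentages (the values below are made up to resemble this example):

    # Hypothetical scores for one detected face, in percent (0-100).
    scores = {"happy": 2, "surprised": 64, "angry": 1, "disgust": 3,
              "afraid": 23, "sad": 4, "neutral": 3}

    # The dominant emotion is simply the highest of the seven scores;
    # the other percentages remain on screen alongside it.
    dominant = max(scores, key=scores.get)
    print(dominant, scores[dominant])  # -> surprised 64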

emotion_recognition_example3.jpg

You may have noticed that among the parameters emotion recognition shows under a face there is also a Name – a number assigned to each face currently in the frame (starting with 0). Here there are two faces, so they are numbered 0 and 1. A fully visible face is also not always necessary for an accurate reading. The mood here is obviously Happy – no real challenge.
Here is a somewhat more complicated case:

emotion_recognition_example4.jpg

Hair partly covers the oval of the face, there is facial hair, and the frame contains skin-colored objects that are not part of the face (a hand) – all of which complicates the recognition. Still, the detector manages.

Use the archive stream (of high resolution) for detection – allows the detector to analyze the archive stream instead of the preview one.
Postrecord – indicates how long the archive should keep recording after the detector stops reacting.
Range of face recognition – sets how far a face can be from the camera; basically, the smaller the face appears, the higher this slider should be set.

All the React-boxes are where the module's true functionality starts. Each emotion has a separate box, but they are counted together. If you are familiar with computer logic, you can view the checked boxes as connected with a logical AND (conjunction). If not, here is how it works:

  • If only one emotion is selected – the detector will react only to that emotion and ignore all others;

  • If 2 emotions are selected – the detector will react only to faces that exhibit BOTH these emotions;

  • If 2 emotions are selected – the detector will not react to a face that expresses only one of the selected emotions.

Each of these boxes, when checked, displays two sliders that set the minimum and maximum percentage to react to. This is best illustrated by examples:

emotion_recognition_exhibit1.jpg

Here emotion recognition checks only for neutrality and sadness. The face’s neutrality is 57% (fits between 40% and 100%); sadness is 19% (fits between 15% and 100%) – thus the detector is triggered: both conditions are met.

emotion_recognition_exhibit2.jpg

Here the detector looks for 3 emotions: anger, disgust and sadness. The face’s anger is 61% (fits between 50% and 100%); disgust is 4% (doesn’t fit between 15% and 100%); sadness is 23% (fits between 20% and 100%) – thus the detector is not triggered, since only 2 of the 3 conditions are met (anger and sadness).
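Expressed in code, the triggering rule is just a conjunction of per-emotion range checks. Here is a minimal Python sketch (the emotion names and scores simply restate the second example above):

    # Checked React-boxes with their (min, max) slider ranges, in percent.
    ranges = {"angry": (50, 100), "disgust": (15, 100), "sad": (20, 100)}

    def detector_reacts(scores, ranges):
        # Every selected emotion must fall inside its slider range (logical AND).
        return all(low <= scores.get(emotion, 0) <= high
                   for emotion, (low, high) in ranges.items())

    # Scores from the second example: anger 61%, disgust 4%, sadness 23%.
    face = {"angry": 61, "disgust": 4, "sad": 23}
    print(detector_reacts(face, ranges))  # False – disgust misses its range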

This logic applies to any combination of emotions. However, some setups are unlikely to produce good results:

  • Looking for opposite emotions at the same time – for example, happiness and anger will almost never appear on the same face, so if both of these boxes are checked, the module will hardly ever react to anything.

  • Looking for all emotions at the same time – expressive as the human face is, its range at any given moment is limited. No one's face can show all 7 emotions at once, so checking all the boxes will likewise prevent the module from reacting to anything.

React only to the specified amount of faces in camera image – indicates how many faces (minimum and maximum) need to be in the frame for the detector to react.
React only to the specified percentage of the selected emotions from the total number of recognized faces – this makes the module more statistics-oriented: it will check all faces in the frame (up to 200) for the selected emotions and calculate the percentage of those that fit the requirements; if that percentage fits within the sliders you set up – the detector reacts. In short, it can show how many of the people currently in the frame are happy, sad and so on. Such data is paramount for behavior analysis.
Full path to an external program for processing of detection results – this exists for integration purposes: you can make a script or database that will store and manage the detector’s findings and put the path to the program that opens the script/database into this box (e.g. Python, PHP, etc.).
Parameters for launch of the external program – works in conjunction with the previous box; here is where you put the path to the actual script/database (e.g. my_script.py, emotion_database.php, etc.) and tell the system what parameters to pass on to it using macros. You can check the whole list of these parameters in Information about launch of the external program.
Interval between launches of the external program – sets how often the script/database can be used; since a single face may linger in the frame for some time, you may not need to send the info on it several times, so this interval can be set higher.
Save data in CSV report – creates a file in the specified directory that will log the detector’s every reaction for future (or immediate) analysis.
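As an illustration of how the external program settings fit together, here is a minimal Python sketch of a script that could sit on the receiving end and append each detection to its own CSV file. The parameter layout (dominant emotion followed by its percentage) is purely an assumption for this sketch – the real set of macros is listed in Information about launch of the external program.

    # my_script.py – hypothetical receiver for the detector's external program launch.
    # Assumed launch parameters: dominant emotion and its percentage; check the
    # actual macro list in Information about launch of the external program.
    import csv
    import sys
    from datetime import datetime

    def main():
        # sys.argv carries whatever macros the system substituted at launch;
        # here we assume two: the dominant emotion and its percentage.
        emotion, percent = sys.argv[1], sys.argv[2]
        with open("detections.csv", "a", newline="") as log:
            csv.writer(log).writerow([datetime.now().isoformat(), emotion, percent])

    if __name__ == "__main__":
        main()

With Interval between launches of the external program set to a few seconds, a face lingering in the frame would produce one row here rather than dozens.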

The possible applications of emotion recognition are very diverse: crowd control, road safety, statistical data collection and, of course, retail marketing. In crowd control, readings of anger or fear on a large number of faces at once can serve as an indicator of a fight or commotion. In cars, a system that continuously monitors the driver's emotional state can act as a calming factor against road rage and help detect drowsiness at the wheel. Statistics and marketing go hand in hand: knowing how long customers express positive emotions while looking at certain products can show marketers whether a product's placement is well chosen (attractive) and whether the product appeals to its intended target audience. This is especially valuable for new products.

It goes without saying that the combination of neural networks and behavior analysis is a cutting-edge technology, one that is expected to grow and refine itself at an incredible pace. We are excited to follow this trend and bring you the fruits of this research.

DISCLAIMER: No faces or emotions were harmed in the making of this article. All faces are properties of their respective owners.

How to install the camera to use Emotion Recognition

emotion_recognition_camera_position_en_e

Here are some ways to increase the rate of successful recognition:

• Place the camera as close as possible to the area where you need to detect emotions
• Place the camera at a right angle to the face; the face should occupy a large part of the frame
• Lighting should not be too dim or contain a lot of flashes (you can use special HLC (High Light Compensation) cameras, often marked ‘For LPR/ANPR’)
• Use a long-focus lens for a better view


Emotion Additional License

Per camera

"Emotion Recognition" module for 1 camera, 1-year updates, perpetual license. Requires a Pro license in use on the system.

from HKD$5500
