Objective: Emotion recognition impairments are a common feature of schizophrenia. This pilot study investigates the effectiveness of the 'micro-expressions training tool' (METT) to help improve this skill.
Conclusions: Patients with schizophrenia make significant improvements in emotion recognition following training with this tool, suggesting that brief remediation therapy may be a valuable adjunct to existing treatment programmes.
Micro Expression Training Tool (METT)
We found that micro-expressions were better recognized when the emotional valences of context and target were inconsistent; that is, anger was easier to recognize with a positive context, whereas happiness was easier to recognize with a negative context. However, previous research found that happy faces were recognized more accurately when primed by a happy face than by an angry face, whereas sad expressions were recognized more accurately when primed by an angry face than by a happy face [20]. Similar results were also observed when the facial expressions were primed by affective scenes [21]. These seemingly contradictory findings might be due to the different presentation durations of prime and target. The prime was displayed for a shorter duration than the target faces in previous studies [20], whereas the context expression was displayed much longer than the target micro-expression in our study. Hence, it is likely that the briefly flashed prime facilitated recognition of a similar target facial expression, whereas the longer presentation of the context facial expression impaired recognition of a similar target because of the smaller changes between the context and target expressions.
Moreover, the lower accuracy rates for inconsistent-valence than for consistent-valence trials might have been owing to differences between the stimuli. Consistent context expressions differed from target expressions only in the mouth region (closed vs. opened), whereas inconsistent context and target expressions differed both in the mouth region and in other parts of the face. That is, the stimulus differences between the context and target micro-expressions might have led to the context effect. Previous studies have shown that a target is more easily recognized when the differences between targets and non-targets are obvious [36]. However, it is important to note that when the target was a micro-expression with a morph ratio of 50% happiness plus 50% anger, a negative context led to more happiness responses and fewer anger responses than did a neutral context, whereas a positive context led to more anger responses than did a neutral context. These results reveal that the valence differences between contexts also contributed to the effect of emotional context. Therefore, the context effect on micro-expression recognition might be owing not only to the stimulus differences between the context and target micro-expressions but also to the valence differences between contexts.
Previous research has shown that facial expression recognition is not simple classification but a cognitive process comprising sequential and cumulative stimulus evaluations that take context information into account [37]. However, it remains unclear exactly how emotional context influences micro-expression recognition. The current study provides behavioral evidence for the role of emotional context information in micro-expression recognition. Further studies should use neuroimaging techniques to reveal which stages of micro-expression processing are influenced by emotional context.
Drawing on 50 years of research and innovative study, Dr. Paul Ekman developed this online training in reading micro facial expressions and subtle facial expressions. The training has been scientifically validated and field tested, and is the basis of the FOX/SKY television show LIE TO ME, for which Dr. Paul Ekman is the scientific consultant. This training, designed and endorsed by Dr. Paul Ekman, will improve your ability to "see" and "relate" to the world around you.
When people deliberately try to conceal their emotions, or unconsciously repress them, a very brief facial expression often occurs. Because it typically lasts only 1/15 to 1/25 of a second, it is invisible to nearly everyone who has not trained with METT. Training with METT enables you to better spot truth and lies, put people at ease, understand others more deeply, and be more successful in many contexts, including sales, leadership, management, coaching, and customer service. While most of us miss the valuable signs of concealed emotions, the Micro Expression Training Tool (METT) will enable you to spot most of them. The facial expressions of anger, fear, sadness, disgust, contempt, surprise, and happiness are universal, the same for all people. METT includes a Pre-Test to establish how many micro expressions you can spot without training, followed by Training, Practice, and a Post-Test so you can see how much you have improved.
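As a rough illustration (not part of the METT materials), the 1/15 to 1/25 of a second durations above imply that micro-expressions occupy only a handful of frames in ordinary video. A minimal sketch of that arithmetic:

```python
# Hypothetical illustration: how many video frames a micro-expression
# spans at common capture rates, given durations of 1/25 to 1/15 s.

def frames_captured(duration_s: float, fps: float) -> float:
    """Number of frames during which the expression is visible."""
    return duration_s * fps

shortest = 1 / 25  # 0.04 s
longest = 1 / 15   # ~0.067 s

for fps in (24, 30, 60, 120):
    lo = frames_captured(shortest, fps)
    hi = frames_captured(longest, fps)
    print(f"{fps:>3} fps: {lo:.1f}-{hi:.1f} frames")
```

At standard 24 to 30 fps a micro-expression may appear in only one or two frames, which is why micro-expression research often relies on high-frame-rate recordings.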
Subtle expressions appear in very small movements, typically in just one region of the face: the brows, eyelids, cheeks, nose, lips, or chin. Subtle expressions occur when an emotion is first beginning. The person showing a subtle expression might not yet know he or she is feeling an emotion, but you will know it. These are the first signs of an emotion that later develops into a larger, more obvious expression. Subtle expressions also occur when the emotion felt is very slight. The subtle expressions for each of the seven universal emotions are presented in this online tool created by Dr Paul Ekman: anger, fear, sadness, disgust, happiness, surprise, and contempt. (The only one of those words that may be unfamiliar is contempt, which is a feeling of moral superiority.) This tool has also been scientifically validated by Dr Ekman.
Experiment design. (A) Experimental procedure. Participants made two visits. In the first visit, they underwent anodal or sham stimulation and then completed the Chinese version of the Micro-Expression Training Tool (METT); in the second visit, 2 weeks later, they completed only the Chinese version of the METT. The Chinese version of the METT comprised five sections: pre-test, training, practice, review, and post-test. In the pre-test and post-test sections, participants were asked to choose one of eight emotion labels after seeing each stimulus. The pre-test stimuli included static expressions and artificial and spontaneous micro-expressions; the post-test stimuli included artificial and spontaneous micro-expressions. (B) Placement of the anodal electrode for the right temporoparietal junction (rTPJ) between the P6 and CP6 regions (top row) and the normalized electric field (NormE) derived from electric field modeling calculations using SimNIBS (bottom row). (C) The time series of an artificial and a spontaneous micro-expression of disgust. Source: L.F. Chen and Y.S. Yen, Taiwanese facial expression image database, Brain Mapping Laboratory, Institute of Brain Science, National Yang-Ming University, 2007.
The next section of the training included several Chinese-language commentary videos in which the narrator emphasized critical facial features and explained how to recognize and distinguish easily confused emotions accurately. In the practice section, participants practiced with feedback to ensure they understood and internalized the knowledge and skills learned in the earlier section. The review section repeated the training section. The final post-test followed the same form as the pre-test, but alternative materials were used to assess the ability to recognize artificial and spontaneous micro-expressions after training.
A repeated-measures ANOVA indicated a training effect on artificial micro-expression recognition. The mean accuracy of artificial micro-expression recognition for each condition is displayed in Figure 2A. The main effect of testing stage was significant for the first visit [F(1,56) = 146.91, p_corrected
Similar training effects were found for spontaneous micro-expression recognition (see Figure 2B). The main effect of testing stage was significant (first visit: F(1,56) = 23.78, p_corrected
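For a single within-subject factor with two levels (pre-test vs. post-test), the repeated-measures ANOVA F statistic reported above equals the squared paired t statistic, with df = (1, n - 1). A minimal sketch of that computation on made-up accuracy data (not the authors' analysis code or data):

```python
import math
from statistics import mean, stdev

def paired_f(pre: list[float], post: list[float]) -> tuple[float, int, int]:
    """Return (F, df1, df2) for a two-level within-subject factor.

    With two levels, F is the squared paired t statistic:
    t = mean(diff) / (sd(diff) / sqrt(n)), F = t**2, df = (1, n - 1).
    """
    assert len(pre) == len(post)
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t * t, 1, n - 1

# Illustrative (fabricated) pre/post accuracies for 5 participants:
pre = [0.40, 0.45, 0.50, 0.35, 0.42]
post = [0.60, 0.62, 0.70, 0.55, 0.58]
f, df1, df2 = paired_f(pre, post)
print(f"F({df1},{df2}) = {f:.2f}")
```

The reported df of (1, 56) corresponds to 57 participants under this formulation.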