dc.description.abstract | According to the shared signal hypothesis, the perception of faces is enhanced when emotion, gaze, and motivation are congruent. When a threatening emotion is congruent with gaze and motivation, the threat becomes relevant to the observer. For example, an angry face looking at you poses a direct threat, whereas a fearful face looking away signals a threat in the environment relevant to you. Visual search studies using faces have yielded mixed results, and most
have been compromised by low ecological validity. This study aims to tackle this limitation
by using real faces. Thirty-two non-clinical participants completed a visual search task with multiple
trials for each combination of emotion (angry, fearful), gaze (direct, averted), and set size (4, 8, 16). Gaze data were recorded with an eye-tracker, and trait anxiety was measured with a questionnaire afterwards. Multilevel models were fitted to
assess differences in response times between all possible combinations. The results supported the shared signal hypothesis only for anger. Furthermore, fearful faces were found faster than angry faces and, once found, were also identified more quickly as the emotional
target. Lastly, trait anxiety did not moderate reaction times for self-relevant threat, but it did bias individuals towards direct gazes compared to averted gazes. Limitations include the lack
of an emotional-intensity measurement of the stimuli and the small number of models used to create the stimuli. The current study provides a stepping stone for future research investigating the
ecological validity of attentional biases to self-relevant facial threat. | |