How Do We Perceive the Intensity of Facial Expressions? The PIFE Dataset for Analysis of Perceived Intensity
Niewiadomski R.
2024-01-01
Abstract
Decoding the intensity of facial expressions is of primary importance for humans. Modeling this computationally, however, is not an easy task. Here, we propose a new dataset composed of circa 400 videos and 1,000 images automatically extracted from several movies, and rated by humans on intensity. Each stimulus presents facial expressions of one person only, but overall, the stimuli represent a large variety of expressions in individuals of different age, gender, and ethnicity, in fictional yet natural movie settings. Each video was rated by 5 people in terms of perceived intensity and variability using a 7-point Likert scale; each image was rated by 5 people only for intensity. In total, 90 people participated in the ratings, and the average interrater ICC agreement is 0.63 for videos and 0.66 for images. For each video and image we also extracted action units using the OpenFace software. We report results for both human and computer-assisted intensity ratings, and propose a baseline regression model capable of estimating the perceived intensity in videos with a mean squared error of 0.74. We conclude our paper by discussing potential applications of a general computational model of perceived intensity.
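The baseline described above regresses perceived intensity from OpenFace action-unit (AU) features. A minimal sketch of such a pipeline is shown below; the data, the 80/20 split, and the choice of ordinary least squares are illustrative assumptions, not the authors' exact pipeline, and the AU values here are random placeholders rather than real OpenFace output.

```python
import numpy as np

# Placeholder data: ~400 videos, 17 AU intensity features per video
# (OpenFace reports 17 AU intensities on a 0-5 scale). The ratings y
# are synthetic stand-ins for the 7-point human intensity ratings.
rng = np.random.default_rng(0)
n_videos, n_aus = 400, 17
X = rng.uniform(0.0, 5.0, (n_videos, n_aus))
y = X.mean(axis=1) + rng.normal(0.0, 0.3, n_videos)

# Simple 80/20 train/test split
split = int(0.8 * n_videos)
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Ordinary least squares with an intercept column
A_tr = np.hstack([X_tr, np.ones((len(X_tr), 1))])
w, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)

# Evaluate mean squared error on the held-out videos
pred = np.hstack([X_te, np.ones((len(X_te), 1))]) @ w
mse = float(np.mean((pred - y_te) ** 2))
print(f"test MSE: {mse:.2f}")
```

On real data, the reported baseline reaches an MSE of 0.74 on the 7-point scale; the synthetic run above only demonstrates the shape of the computation.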
| File | Description | Size | Format |
|---|---|---|---|
| ACII24_tiulenevaetal.pdf | Post-print (closed access) | 3.41 MB | Adobe PDF |
| How_Do_We_Perceive_the_Intensity_of_Facial_Expressions_The_PIFE_Dataset_for_Analysis_of_Perceived_Intensity.pdf | Published version (closed access) | 2.73 MB | Adobe PDF |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.