TY - GEN
T1 - CapsuleNet for micro-expression recognition
AU - Van Quang, Nguyen
AU - Chun, Jinhee
AU - Tokuyama, Takeshi
N1 - Funding Information:
This work was supported by JSPS KAKENHI Grant Numbers 15H02665 and 17K00002.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/5
Y1 - 2019/5
N2 - Facial micro-expression recognition has attracted researchers because of its objectiveness in revealing a person's true emotions. However, the limited number of publicly available micro-expression datasets and the low intensity of the facial movements involved pose a great challenge to training robust data-driven models for the recognition task. In 2019, the Facial Micro-Expression Grand Challenge combined three popular datasets, i.e. SMIC, CASME II, and SAMM, into a single cross-database benchmark, which requires proposed methods to generalize across a wider range of subject characteristics. In this paper, we propose a simple yet effective CapsuleNet for micro-expression recognition. The effectiveness of our proposed method was evaluated on the cross-database micro-expression benchmark using Leave-One-Subject-Out cross-validation. The experiments show that our method achieves substantially higher results than the provided baseline method (LBP-TOP) and other state-of-the-art CNN models.
AB - Facial micro-expression recognition has attracted researchers because of its objectiveness in revealing a person's true emotions. However, the limited number of publicly available micro-expression datasets and the low intensity of the facial movements involved pose a great challenge to training robust data-driven models for the recognition task. In 2019, the Facial Micro-Expression Grand Challenge combined three popular datasets, i.e. SMIC, CASME II, and SAMM, into a single cross-database benchmark, which requires proposed methods to generalize across a wider range of subject characteristics. In this paper, we propose a simple yet effective CapsuleNet for micro-expression recognition. The effectiveness of our proposed method was evaluated on the cross-database micro-expression benchmark using Leave-One-Subject-Out cross-validation. The experiments show that our method achieves substantially higher results than the provided baseline method (LBP-TOP) and other state-of-the-art CNN models.
UR - http://www.scopus.com/inward/record.url?scp=85070449521&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85070449521&partnerID=8YFLogxK
U2 - 10.1109/FG.2019.8756544
DO - 10.1109/FG.2019.8756544
M3 - Conference contribution
AN - SCOPUS:85070449521
T3 - Proceedings - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
BT - Proceedings - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
Y2 - 14 May 2019 through 18 May 2019
ER -