CapsuleNet for micro-expression recognition

Nguyen Van Quang, Jinhee Chun, Takeshi Tokuyama

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

89 Citations (Scopus)

Abstract

Facial micro-expression recognition has attracted researchers because of its objectiveness in revealing a person's true emotion. However, the limited number of publicly available micro-expression datasets and the low intensity of the facial movements involved pose a great challenge to training robust data-driven models for the recognition task. In 2019, the Facial Micro-Expression Grand Challenge combined three popular datasets, i.e. SMIC, CASME II, and SAMM, into a single cross-database benchmark, requiring proposed methods to generalize over a wider range of subject characteristics. In this paper, we propose a simple yet effective CapsuleNet for micro-expression recognition. The effectiveness of our proposed method was evaluated on the cross-database micro-expression benchmark using Leave-One-Subject-Out cross-validation. The experiments show that our method achieves substantially higher results than the provided baseline method (LBP-TOP) and other state-of-the-art CNN models.
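
The abstract names two technical components: a capsule network classifier and Leave-One-Subject-Out evaluation on the composite benchmark. Since no code accompanies this record, the following is a minimal sketch of a capsule layer with dynamic routing in PyTorch, in the spirit of Sabour et al.'s original CapsNet. The layer sizes, number of routing iterations, and three-class output (matching the benchmark's negative/positive/surprise labels) are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal capsule layer with dynamic routing (sketch; all sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Squashing non-linearity: keeps vector orientation, maps the norm into (0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class RoutingCapsules(nn.Module):
    def __init__(self, in_caps, in_dim, out_caps, out_dim, num_routing=3):
        super().__init__()
        self.num_routing = num_routing
        # One learned transformation per (input capsule, output capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(1, out_caps, in_caps, out_dim, in_dim))

    def forward(self, u):
        # u: (B, in_caps, in_dim) -> prediction vectors u_hat: (B, out_caps, in_caps, out_dim)
        u_hat = torch.matmul(self.W, u[:, None, :, :, None]).squeeze(-1)
        b = torch.zeros(*u_hat.shape[:3], 1, device=u.device)  # routing logits
        for _ in range(self.num_routing):
            c = F.softmax(b, dim=1)             # coupling coefficients over output capsules
            v = squash((c * u_hat).sum(dim=2))  # agreement-weighted sum: (B, out_caps, out_dim)
            b = b + (u_hat * v[:, :, None, :]).sum(dim=-1, keepdim=True)
        return v

# Hypothetical usage: capsule lengths act as class scores.
caps = RoutingCapsules(in_caps=1152, in_dim=8, out_caps=3, out_dim=16)
primary = torch.randn(4, 1152, 8)           # stand-in for primary-capsule features
class_scores = caps(primary).norm(dim=-1)   # shape (4, 3)
```

For the evaluation protocol, a Leave-One-Subject-Out split holds out all samples of one subject per fold, so a model is never tested on a subject it saw during training. Below is a minimal sketch using scikit-learn; `X`, `y`, and `subjects` are hypothetical placeholders, the logistic-regression classifier stands in for any recognition model, and the macro-averaged F1 over pooled predictions stands in for the benchmark's unweighted F1 (UF1) metric.

```python
# Leave-One-Subject-Out cross-validation (sketch with synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))              # stand-in feature vectors
y = rng.integers(0, 3, size=200)            # three emotion classes
subjects = rng.integers(0, 20, size=200)    # subject ID for each sample

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

print("Macro F1 over pooled folds:", f1_score(y, preds, average="macro"))
```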

Original language: English
Title of host publication: Proceedings - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728100890
DOIs
Publication status: Published - 2019 May
Event: 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019 - Lille, France
Duration: 2019 May 14 – 2019 May 18

Publication series

Name: Proceedings - 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019

Conference

Conference: 14th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2019
Country/Territory: France
City: Lille
Period: 19/5/14 – 19/5/18
