TY - GEN
T1 - Data collection and analysis for automatically generating record of human behaviors by environmental sound recognition
AU - Furuya, Takahiro
AU - Chiba, Yuya
AU - Nose, Takashi
AU - Ito, Akinori
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Nowadays, the “life-log,” in which our daily activities are recorded using a camera, microphone, or other sensors and the recorded data are later retrieved, is becoming more and more realistic. One application that utilizes life-log data is the automatic generation of a summary of the user’s activities. The present work focuses on using sound data to make such an activity summary. Several previous works have classified recorded sound based on the user’s activity. The focus of those studies was how to classify the collected data into a pre-defined set of activity classes; however, there has been no consideration of what kinds of activity classes are appropriate for this purpose. Moreover, a basic investigation is needed to optimize the parameters of sound recognition, such as the window size for feature calculation. Therefore, we first investigated the optimum parameters for feature extraction, and then analyzed the acoustic similarities of sound features observed in various activities. We used twenty-two hours of environmental sound from a test subject’s ordinary life as the training and test data. Using these data, we analyzed the acoustic similarities of the activity sounds using hierarchical clustering. As a consequence, we observed that the target classes could be divided into three groups (“speaking,” “silent,” and “noisy”). Misrecognitions between those groups were rare, whereas a large number of misrecognitions occurred within the “speaking” group.
KW - Environmental sound recognition
KW - Hierarchical clustering
KW - Neural network
UR - http://www.scopus.com/inward/record.url?scp=85057116708&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057116708&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-03748-2_18
DO - 10.1007/978-3-030-03748-2_18
M3 - Conference contribution
AN - SCOPUS:85057116708
SN - 9783030037475
T3 - Smart Innovation, Systems and Technologies
SP - 149
EP - 156
BT - Recent Advances in Intelligent Information Hiding and Multimedia Signal Processing - Proceeding of the Fourteenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing
A2 - Jain, Lakhmi C.
A2 - Tsai, Pei-Wei
A2 - Ito, Akinori
A2 - Pan, Jeng-Shyang
PB - Springer Science and Business Media Deutschland GmbH
T2 - 14th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2018
Y2 - 26 November 2018 through 28 November 2018
ER -