TY - JOUR
T1 - Partial distortion entropy maximization for online data clustering
AU - Takizawa, Hiroyuki
AU - Kobayashi, Hiroaki
PY - 2007/9
Y1 - 2007/9
AB - Competitive learning neural networks are a powerful tool for online data clustering, representing a non-stationary probability distribution with a fixed number of weight vectors. One difficulty in their practical application is that most of them require heuristically predetermined threshold parameters to balance a trade-off between convergence accuracy, i.e., error minimization performance, and speed of adaptation to changes in the source statistics. Although adaptation can be accelerated by relocating a "useless" node so that it becomes useful, excessive relocation often disturbs error minimization. Hence, both the adaptation speed and the error minimization performance depend sensitively on the threshold parameters that determine whether a node should be relocated. In general, adequate threshold parameters are difficult to know a priori. This paper proposes a novel criterion for deciding on node relocation without heuristically predetermined thresholds. Under the proposed criterion, a node is relocated only if the relocation improves the partial distortion entropy, an online optimality metric that is reliable from the viewpoint of error minimization. Hence, node relocation is carried out without disturbing error minimization, and both quick adaptation and error minimization are accomplished simultaneously without any carefully predefined parameters. Experimental results confirm the validity of the proposed criterion: competitive learning with the criterion clearly outperforms other representative algorithms in terms of both quick adaptation and error minimization performance.
KW - Competitive learning
KW - Non-stationary probability distributions
KW - Partial distortion entropy maximization
KW - Partial distortion theorem
UR - http://www.scopus.com/inward/record.url?scp=34548451977&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=34548451977&partnerID=8YFLogxK
DO - 10.1016/j.neunet.2007.04.029
M3 - Article
C2 - 17683903
AN - SCOPUS:34548451977
SN - 0893-6080
VL - 20
SP - 819
EP - 831
JO - Neural Networks
JF - Neural Networks
IS - 7
ER -