TY - JOUR
T1 - Thinning-out
T2 - A method to reduce trials in skill discovery of a robot
AU - Kobayashi, Hayato
AU - Hatano, Kohei
AU - Ishino, Akira
AU - Shinohara, Ayumi
PY - 2009
Y1 - 2009
N2 - In skill discovery of a robot, the number of trials (i.e., evaluations of a score function) is highly limited, since each trial takes considerable time and cost. In this setting, memory-based learning, which retains and utilizes the history of trials, is efficient. There are two main approaches in studies of memory-based learning. One is to estimate scores by using an approximation model of the original score function instead of evaluating the score function itself. The other is to estimate proper scores under a noisy score function. In this paper, we take another approach: we find unpromising search points and skip their evaluations by characterizing a function class to which the score function belongs. We call this approach thinning-out of search points, in contrast to pruning of search trees. The main advantage of thinning-out is that its judgments are always correct, i.e., thinning-out skips over only unpromising search points, as long as the defined function class is proper. We show the properties of thinning-out by addressing maximization problems of several test functions. In addition, we apply thinning-out to the problem of discovering physical motions of virtual legged robots and show that the virtual robots can discover sophisticated motions that differ greatly from the initial motion in a reasonable number of trials.
AB - In skill discovery of a robot, the number of trials (i.e., evaluations of a score function) is highly limited, since each trial takes considerable time and cost. In this setting, memory-based learning, which retains and utilizes the history of trials, is efficient. There are two main approaches in studies of memory-based learning. One is to estimate scores by using an approximation model of the original score function instead of evaluating the score function itself. The other is to estimate proper scores under a noisy score function. In this paper, we take another approach: we find unpromising search points and skip their evaluations by characterizing a function class to which the score function belongs. We call this approach thinning-out of search points, in contrast to pruning of search trees. The main advantage of thinning-out is that its judgments are always correct, i.e., thinning-out skips over only unpromising search points, as long as the defined function class is proper. We show the properties of thinning-out by addressing maximization problems of several test functions. In addition, we apply thinning-out to the problem of discovering physical motions of virtual legged robots and show that the virtual robots can discover sophisticated motions that differ greatly from the initial motion in a reasonable number of trials.
KW - Four-legged robot
KW - Memory-based learning
KW - RoboCup
KW - Skill discovery
UR - http://www.scopus.com/inward/record.url?scp=59349093565&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=59349093565&partnerID=8YFLogxK
U2 - 10.1527/tjsai.24.191
DO - 10.1527/tjsai.24.191
M3 - Article
AN - SCOPUS:59349093565
SN - 1346-0714
VL - 24
SP - 191
EP - 202
JO - Transactions of the Japanese Society for Artificial Intelligence
JF - Transactions of the Japanese Society for Artificial Intelligence
IS - 1
ER -