TY - GEN

T1 - Refutably probably approximately correct learning

AU - Matsumoto, Satoshi

AU - Shinohara, Ayumi

N1 - Publisher Copyright:
© 1994, Springer Verlag. All Rights Reserved.

PY - 1994

Y1 - 1994

N2 - We propose a notion of refutably PAC learning, which formalizes the refutability of hypothesis spaces in the PAC learning model. Intuitively, refutably PAC learning of a concept class F requires that the learning algorithm should refute F with high probability if the target concept cannot be approximated by any concept in F with respect to the underlying probability distribution. We give a general upper bound of O((1/ε + 1/ε′) ln(|F[n]|/δ)) on the number of examples required for refutably PAC learning of F. Here, ε and δ are the standard accuracy and confidence parameters, and ε′ is the refutation accuracy. Furthermore, we also define strongly refutably PAC learning by introducing a refutation threshold. We prove a general upper bound of O((1/ε² + 1/ε′²) ln(|F[n]|/δ)) for strongly refutably PAC learning of F. These upper bounds reveal that both refutable learnability and strong refutable learnability are equivalent to standard learnability within the polynomial-size restriction. We also define polynomial-time refutable learnability of a concept class, and characterize it.

AB - We propose a notion of refutably PAC learning, which formalizes the refutability of hypothesis spaces in the PAC learning model. Intuitively, refutably PAC learning of a concept class F requires that the learning algorithm should refute F with high probability if the target concept cannot be approximated by any concept in F with respect to the underlying probability distribution. We give a general upper bound of O((1/ε + 1/ε′) ln(|F[n]|/δ)) on the number of examples required for refutably PAC learning of F. Here, ε and δ are the standard accuracy and confidence parameters, and ε′ is the refutation accuracy. Furthermore, we also define strongly refutably PAC learning by introducing a refutation threshold. We prove a general upper bound of O((1/ε² + 1/ε′²) ln(|F[n]|/δ)) for strongly refutably PAC learning of F. These upper bounds reveal that both refutable learnability and strong refutable learnability are equivalent to standard learnability within the polynomial-size restriction. We also define polynomial-time refutable learnability of a concept class, and characterize it.

UR - http://www.scopus.com/inward/record.url?scp=43049160085&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=43049160085&partnerID=8YFLogxK

U2 - 10.1007/3-540-58520-6_84

DO - 10.1007/3-540-58520-6_84

M3 - Conference contribution

AN - SCOPUS:43049160085

SN - 9783540585206

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 469

EP - 483

BT - Algorithmic Learning Theory - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994, Proceedings

A2 - Arikawa, Setsuo

A2 - Jantke, Klaus P.

PB - Springer Verlag

T2 - 4th International Workshop on Analogical and Inductive Inference, AII 1994 and 5th International Workshop on Algorithmic Learning Theory, ALT 1994

Y2 - 10 October 1994 through 15 October 1994

ER -