Abstract
A key component of Clark and Yoshinaka's distributional learning algorithms is the extraction of the substructures and contexts contained in the input data. This problem often becomes intractable for nonlinear grammar formalisms, because each object may contain more than polynomially many substructures and/or contexts. Previous work on distributional learning of nonlinear grammars avoided this difficulty by restricting the substructures or contexts made available to the learner. In this paper, we identify two classes of nonlinear tree grammars for which the extraction of substructures and contexts can be performed in polynomial time and which, consequently, admit successful distributional learning in its original, unmodified form.
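To make the abstract's central notion concrete, the following is a hypothetical sketch (not taken from the paper) of substructure/context extraction in the simplest, linear case of plain strings, where every object of length n yields only O(n^2) substring–context pairs — the polynomial bound that nonlinear formalisms generally lose:

```python
def substrings_and_contexts(w):
    """Enumerate all pairs (s, (l, r)) with l + s + r == w.

    For a string of length n there are at most (n+1)(n+2)/2 such pairs,
    so extraction is trivially polynomial; for nonlinear tree grammars
    the analogous set can blow up, which is the problem the paper studies.
    """
    pairs = set()
    n = len(w)
    for i in range(n + 1):
        for j in range(i, n + 1):
            s = w[i:j]               # the substructure
            context = (w[:i], w[j:])  # the context around it
            pairs.add((s, context))
    return pairs
```

For example, `substrings_and_contexts("abc")` contains the pair `("b", ("a", "c"))`, since `"a" + "b" + "c" == "abc"`.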
- Original language: English
- Pages (from-to): 339-377
- Number of pages: 39
- Journal: Fundamenta Informaticae
- Volume: 146
- Issue number: 4
- DOIs:
- Publication status: Published - 2016
- Externally published: Yes
Keywords
- Distributional learning
- IO context-free tree grammar
- generalized context-free grammar
- parallel regular tree grammar
- tree language
- tree pattern
ASJC Scopus subject areas
- Theoretical Computer Science
- Algebra and Number Theory
- Information Systems
- Computational Theory and Mathematics