Distributional learning of some nonlinear tree grammars

Alexander Clark, Makoto Kanazawa, Gregory M. Kobele, Ryo Yoshinaka

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

A key component of Clark and Yoshinaka's distributional learning algorithms is the extraction of the substructures and contexts contained in the input data. This problem often becomes intractable with nonlinear grammar formalisms, because each object may contain more than polynomially many substructures and/or contexts. Previous work on distributional learning of nonlinear grammars avoided this difficulty by restricting the substructures or contexts made available to the learner. In this paper, we identify two classes of nonlinear tree grammars for which the extraction of substructures and contexts can be performed in polynomial time, and which, consequently, admit successful distributional learning in its unmodified, original form.
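As a rough illustration of the extraction step the abstract refers to (a sketch, not the paper's algorithm), the following Python snippet enumerates the (context, subtree) decompositions of an ordered tree. In the linear case each node contributes exactly one such decomposition, so the count stays polynomial; nonlinear (copying) formalisms allow a substructure to fill several positions of a context at once, which is where the more-than-polynomial blow-up mentioned above can arise. The `Tree` type and `decompositions` function are illustrative names introduced here, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

HOLE = "□"  # placeholder marking the hole of a one-hole context


@dataclass(frozen=True)
class Tree:
    label: str
    children: Tuple["Tree", ...] = ()

    def __str__(self) -> str:
        if not self.children:
            return self.label
        return f"{self.label}({','.join(map(str, self.children))})"


def decompositions(t: Tree) -> List[Tuple[Tree, Tree]]:
    """Return all (context, subtree) pairs obtained by cutting t at one node."""
    # Cutting at the root gives the empty context and the whole tree.
    results = [(Tree(HOLE), t)]
    # Cutting inside the i-th child wraps that child's context in t's root.
    for i, child in enumerate(t.children):
        for ctx, sub in decompositions(child):
            new_children = t.children[:i] + (ctx,) + t.children[i + 1:]
            results.append((Tree(t.label, new_children), sub))
    return results


if __name__ == "__main__":
    example = Tree("f", (Tree("a"), Tree("g", (Tree("b"),))))
    for ctx, sub in decompositions(example):
        print(f"context: {ctx}   subtree: {sub}")
```

For the example tree `f(a,g(b))` this prints four decompositions, one per node; under a copying (nonlinear) formalism a learner would also have to consider contexts with several holes filled by the same substructure, and the number of such decompositions need not stay polynomial.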

Original language: English
Pages (from-to): 339-377
Number of pages: 39
Journal: Fundamenta Informaticae
Volume: 146
Issue number: 4
DOIs
Publication status: Published - 2016
Externally published: Yes

Keywords

  • Distributional learning
  • IO context-free tree grammar
  • generalized context-free grammar
  • parallel regular tree grammar
  • tree language
  • tree pattern

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Algebra and Number Theory
  • Information Systems
  • Computational Theory and Mathematics
