The mechanism of additive composition

Ran Tian, Naoaki Okazaki, Kentaro Inui

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)


Additive composition (Foltz et al. in Discourse Process 15:285–307, 1998; Landauer and Dumais in Psychol Rev 104(2):211, 1997; Mitchell and Lapata in Cognit Sci 34(8):1388–1429, 2010) is a widely used method for computing the meanings of phrases, which takes the average of the vector representations of the constituent words. In this article, we prove an upper bound for the bias of additive composition, which is the first theoretical analysis of compositional frameworks from a machine learning point of view. The bound is written in terms of collocation strength; we prove that the more exclusively two successive words tend to occur together, the more accurately their additive composition can be guaranteed to approximate the natural phrase vector. Our proof relies on properties of natural language data that are empirically verified, and that can be theoretically derived from an assumption that the data is generated by a Hierarchical Pitman–Yor Process. The theory endorses additive composition as a reasonable operation for calculating the meanings of phrases, and suggests ways to improve additive compositionality, including: transforming the entries of distributional word vectors by a function that meets a specific condition, constructing a novel type of vector representation to make additive composition sensitive to word order, and utilizing singular value decomposition to train word vectors.
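The core operation the abstract describes, averaging the constituent words' vectors to obtain a phrase vector, can be sketched as follows. This is a minimal illustration with toy 4-dimensional vectors, not the trained distributional vectors used in the article; the dictionary and function names are hypothetical.

```python
import numpy as np

# Toy word vectors (assumption: in practice these would be trained
# distributional representations, not hand-picked values).
word_vectors = {
    "machine": np.array([0.2, 0.5, 0.1, 0.7]),
    "learning": np.array([0.4, 0.3, 0.6, 0.2]),
}

def additive_composition(words, vectors):
    """Compose a phrase vector as the average of its constituent
    word vectors, the operation analyzed in the article."""
    return np.mean([vectors[w] for w in words], axis=0)

phrase_vec = additive_composition(["machine", "learning"], word_vectors)
```

The article's bound then concerns how far such an averaged vector can deviate from the "natural" phrase vector one would estimate directly from occurrences of the two-word phrase in a corpus.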

Original language: English
Pages (from-to): 1083–1130
Number of pages: 48
Journal: Machine Learning
Issue number: 7
Publication status: Published - 1 Jul 2017


  • Approximation error bounds
  • Bias and variance
  • Compositional distributional semantics
  • Hierarchical Pitman–Yor process
  • Natural language data


