Right-truncatable neural word embeddings

Jun Suzuki, Masaaki Nagata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

This paper proposes an incremental learning strategy for neural word embedding methods such as SkipGram and Global Vectors. Since our method iteratively generates embedding vectors one dimension at a time, the obtained vectors possess a unique property: any right-truncated vector matches the solution of the corresponding lower-dimensional embedding. Therefore, a single embedding vector can satisfy a wide range of dimensionality requirements imposed by different uses and applications.
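The truncation property can be illustrated with a minimal sketch. The embedding matrix, vocabulary, and helper functions below are hypothetical stand-ins (this is not the authors' training procedure); the sketch only shows how a single right-truncatable matrix would serve multiple dimensionality requirements, assuming the first d columns of the full matrix already form a valid d-dimensional embedding.

```python
# Hypothetical illustration of the right-truncatable property.
# `E`, `vocab`, and the dimensions are made-up demonstration values;
# only the truncation-based usage pattern is being sketched.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["king", "queen", "apple"]
full_dim = 8

# Pretend E was produced by an incremental, dimension-by-dimension trainer,
# so E[:, :d] is itself a valid d-dimensional embedding for any d <= full_dim.
E = rng.normal(size=(len(vocab), full_dim))

def embed(word: str, d: int) -> np.ndarray:
    """Right-truncate the full vector to its first d dimensions."""
    return E[vocab.index(word), :d]

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A single stored matrix serves every dimensionality requirement:
for d in (2, 4, 8):
    sim = cosine(embed("king", d), embed("queen", d))
    print(f"d={d}: cos(king, queen) = {sim:.3f}")
```

The point of the design is storage and deployment flexibility: one full-dimensional vector per word is kept, and an application needing a smaller model simply reads a prefix of each vector rather than retraining a separate embedding.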

Original language: English
Title of host publication: 2016 Conference of the North American Chapter of the Association for Computational Linguistics
Subtitle of host publication: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 1145-1151
Number of pages: 7
ISBN (Electronic): 9781941643914
DOIs
Publication status: Published - 2016
Event: 15th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - San Diego, United States
Duration: 2016 Jun 12 – 2016 Jun 17

Publication series

Name: 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference

Conference

Conference: 15th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016
Country/Territory: United States
City: San Diego
Period: 16/6/12 – 16/6/17
