Learning compact neural word embeddings by parameter space sharing

Jun Suzuki, Masaaki Nagata

Research output: Contribution to journal › Conference article › peer-review

8 Citations (Scopus)

Abstract

Word embedding vectors obtained from neural word embedding methods, such as the vLBL models and SkipGram, have become a fundamental resource for tackling a wide variety of tasks in artificial intelligence. This paper focuses on the fact that high-quality embedding models are relatively large, often more than 1 GB. We propose a learning framework that provides a set of 'compact' embedding vectors in order to improve their usability in real applications. The proposed method incorporates parameter sharing constraints into the optimization problem; these additional constraints force the embedding vectors to share parameter values, which substantially shrinks the model size. We investigate the trade-off between quality and model size of embedding vectors on several linguistic benchmark datasets, and show that our method can significantly reduce the model size while maintaining the task performance of conventional methods.
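The abstract describes parameter sharing only at a high level, so the following is a minimal, hypothetical sketch (Python with NumPy) of the general idea, not the paper's actual training-time formulation: every entry of a trained embedding matrix is tied to one of a small set of shared scalar values, so each entry can be stored as a short index into a tiny codebook. The function names, matrix sizes, and the 1-D k-means step are illustrative assumptions; the paper instead enforces the sharing constraint during optimization.

```python
# Hypothetical post-hoc illustration of parameter sharing (assumed setup,
# not the authors' method): tie every embedding entry to one of K shared
# values, so an entry needs only an index into a small codebook instead of
# a 32-bit float.
import numpy as np

def fit_shared_values(embeddings: np.ndarray, num_shared: int = 16,
                      iters: int = 20, seed: int = 0) -> np.ndarray:
    """1-D k-means over all embedding entries to choose the shared values."""
    rng = np.random.default_rng(seed)
    flat = embeddings.ravel()
    codebook = rng.choice(flat, size=num_shared, replace=False)
    for _ in range(iters):
        # Assign each entry to its nearest shared value.
        assign = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
        # Move each shared value to the mean of its assigned entries.
        for k in range(num_shared):
            members = flat[assign == k]
            if members.size:
                codebook[k] = members.mean()
    return np.sort(codebook)

def share_parameters(embeddings: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Replace each entry by the index of its nearest shared value."""
    idx = np.abs(embeddings[..., None] - codebook).argmin(axis=-1)
    return idx.astype(np.uint8)  # up to 256 shared values fit in one byte

if __name__ == "__main__":
    vocab, dim = 5_000, 100                # toy sizes for illustration only
    emb = np.random.randn(vocab, dim).astype(np.float32)

    codebook = fit_shared_values(emb, num_shared=16)
    indices = share_parameters(emb, codebook)
    compact = codebook[indices]            # embeddings rebuilt from shared values

    print(f"dense:  {emb.nbytes / 1e6:.2f} MB")
    print(f"shared: {(indices.nbytes + codebook.nbytes) / 1e6:.2f} MB "
          f"(mean abs. error {np.abs(emb - compact).mean():.4f})")
```

With 16 shared values, each entry can in principle be packed into 4 bits, so the storage cost becomes the index matrix plus a negligible codebook rather than one float per parameter; this is the sense in which tying parameter values shrinks the model size.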

Original language: English
Pages (from-to): 2046-2052
Number of pages: 7
Journal: IJCAI International Joint Conference on Artificial Intelligence
Volume: 2016-January
Publication status: Published - 2016 Jan 1
Externally published: Yes
Event: 25th International Joint Conference on Artificial Intelligence, IJCAI 2016 - New York, United States
Duration: 2016 Jul 9 - 2016 Jul 15

ASJC Scopus subject areas

  • Artificial Intelligence
