Transfer Learning for Unseen Slots in End-to-End Dialogue State Tracking

Kenji Iwata, Takami Yoshida, Hiroshi Fujimura, Masami Akamine

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


This paper proposes a transfer learning algorithm for end-to-end dialogue state tracking (DST) that handles new slots with only a small set of training data, a setting not yet addressed by conventional approaches. The goal of transfer learning is to improve DST performance for new slots by leveraging slot-independent parameters extracted from DST models for existing slots. An end-to-end DST model is composed of a spoken language understanding module and an update module. We assume that the parameters of the update module can be slot-independent. To make them slot-independent, a DST model for each existing slot is trained while sharing the parameters of the update module across all existing slots. The slot-independent parameters are then transferred to a DST model for the new slot. Experimental results show that the proposed algorithm achieves 82.5% accuracy on the DSTC2 dataset, outperforming a baseline algorithm by 1.8% when trained on a small set of training data. We also show its robustness to the network architecture of the update module.
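The sharing-then-transfer scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the dict-based "parameters", the slot names, and the training step are all hypothetical stand-ins for real neural-network weights, chosen only to show how one shared update module serves all existing slots and then seeds the model for an unseen slot.

```python
def make_slot_model(slot, update_params):
    # Each slot gets its own SLU parameters; the update-module
    # parameters are passed in so they can be shared across slots.
    return {
        "slot": slot,
        "slu_params": {"w": 0.0},       # slot-specific, trained per slot
        "update_params": update_params,  # slot-independent by design
    }

# DST models for existing slots all reference the SAME update-module
# parameters, so training any slot updates the shared module.
shared_update = {"w": 0.0}
existing = [make_slot_model(s, shared_update)
            for s in ("food", "area", "pricerange")]

# Stand-in for joint training: the shared update parameters end up
# at some learned, slot-independent value.
shared_update["w"] = 1.23

# Transfer: a model for a new, unseen slot is initialized from a copy
# of the learned slot-independent parameters; only its SLU module
# still needs training on the small new-slot dataset.
new_slot_model = make_slot_model("name", dict(shared_update))
```

Because the existing models hold references to one parameter object while the new slot receives a copy, later fine-tuning of the new slot leaves the existing slots' shared module untouched.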

Original language: English
Title of host publication: Lecture Notes in Electrical Engineering
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 13
Publication status: Published - 2021

Publication series

Name: Lecture Notes in Electrical Engineering
ISSN (Print): 1876-1100
ISSN (Electronic): 1876-1119


