Article Details

Journal of Computers (indexed in EI, MEDLINE, Scopus)

Title: Dirichlet Variational Autoencoder for Joint Slot Filling and Intent Detection
Volume/Issue: 32:2
Authors: Wang Gao, Yu-Wei Wang, Fan Zhang, Yuan Fang
Pages: 61-73
Keywords: Dirichlet variational autoencoder, spoken language understanding, semi-supervised learning, data augmentation
Publication Date: April 2021
DOI: 10.3966/199115992021043202006

English Abstract

Spoken Language Understanding (SLU) is an important part of spoken dialogue systems and involves two subtasks: slot filling and intent detection. In the SLU task, joint learning has proven effective because intent classes and slot labels can share semantic information with each other. However, because of the high cost of building manually labeled datasets, data scarcity has become a major bottleneck for domain adaptation in SLU. Recent studies on text generation models, such as the Dirichlet variational autoencoder (DVAE), have shown excellent results in generating natural sentences and in semi-supervised learning. Inspired by this, we first propose a new generative model, DVAE-SLU, which exploits the DVAE's generative ability to produce complete labeled utterances. Furthermore, building on DVAE-SLU, we propose a semi-supervised learning model, SDVAE-SLU, for joint slot filling and intent detection. Unlike previous methods, this is the first work to generate SLU datasets using a DVAE. Experimental results on two classic datasets demonstrate that, compared with baseline methods, existing SLU models achieve better performance when trained on synthetic utterances generated by DVAE-SLU, and that SDVAE-SLU is effective.
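
The abstract does not describe the implementation, so the Python/PyTorch sketch below is only a rough illustration of how a Dirichlet latent variable can be combined with a shared encoder for joint slot filling and intent detection. The class name DVAESLUSketch, the BiLSTM encoder, the layer sizes, and the uniform Dirichlet prior are all assumptions for illustration, not the authors' DVAE-SLU architecture.

# Minimal sketch (not the authors' implementation) of a DVAE-style joint model:
# a BiLSTM encoder infers a Dirichlet latent variable per utterance, which is
# combined with token states to predict slot labels and an intent label.
import torch
import torch.nn as nn
from torch.distributions import Dirichlet, kl_divergence


class DVAESLUSketch(nn.Module):
    def __init__(self, vocab_size, num_slots, num_intents,
                 emb_dim=128, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Encoder output -> Dirichlet concentration parameters (must be > 0).
        self.to_alpha = nn.Sequential(nn.Linear(2 * hidden_dim, latent_dim),
                                      nn.Softplus())
        self.slot_head = nn.Linear(2 * hidden_dim + latent_dim, num_slots)
        self.intent_head = nn.Linear(2 * hidden_dim + latent_dim, num_intents)
        # Uniform Dirichlet prior over the latent simplex (an assumption here).
        self.register_buffer("prior_alpha", torch.ones(latent_dim))

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))         # (B, T, 2H)
        pooled = states.mean(dim=1)                          # (B, 2H)
        alpha = self.to_alpha(pooled) + 1e-4                 # (B, K), positive
        posterior = Dirichlet(alpha)
        z = posterior.rsample()                              # reparameterized sample
        kl = kl_divergence(posterior, Dirichlet(self.prior_alpha)).mean()
        z_tok = z.unsqueeze(1).expand(-1, states.size(1), -1)
        slot_logits = self.slot_head(torch.cat([states, z_tok], dim=-1))
        intent_logits = self.intent_head(torch.cat([pooled, z], dim=-1))
        return slot_logits, intent_logits, kl


# Usage: on labeled data, add the slot/intent cross-entropy losses to the KL term;
# in a semi-supervised setup, unlabeled utterances contribute only the VAE terms.
model = DVAESLUSketch(vocab_size=5000, num_slots=20, num_intents=7)
tokens = torch.randint(0, 5000, (4, 12))                     # toy batch of token ids
slot_logits, intent_logits, kl = model(tokens)
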
