Title | A BERT Based Single Document Extractive Summarization Model |
---|---|
Volume/Issue | 31:2 |
Authors | Wei Liu, Pei-Ran Song, Rui-Li Jiao |
Pages | 241-249 |
Keywords | BERT, extractive summarization, sentence position embedding, single document |
Publication Date | 2020-04 |
DOI | 10.3966/199115992020043102020 |
BERT, a pre-trained Transformer model, has become one of the most common models for multiple natural language processing (NLP) tasks. It has been adapted to extractive summarization through the fine-tuned BERTSUM model. Unlike other NLP tasks, extractive summarization relies heavily on sentence position information at the document level. However, this crucial feature has not been fully exploited by existing models, neither BERT nor BERTSUM. In this paper, we propose a novel single document extractive summarization model, which incorporates sentence positions through an extra documental position embedding module. The proposed model has been tested on the well-known CNN/DailyMail dataset. Results show that our model is competitive with state-of-the-art models on this task. Ablation experiments show that the quality of the extracted summary can be improved by adding the documental sentence position embedding module.
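The abstract describes the architecture only at a high level. As a minimal sketch, assuming a BERTSUM-style pipeline in PyTorch where each sentence is represented by its [CLS] vector, the documental sentence position embedding could be realized as a learned embedding added to each sentence vector before the extractive scoring layer. The module and parameter names below (`DocPositionEmbedding`, `ExtractiveScorer`, `max_sentences`) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class DocPositionEmbedding(nn.Module):
    """Learned embedding for each sentence's position within the document.

    Hypothetical sketch: the paper's exact design is not given in the abstract.
    """
    def __init__(self, max_sentences: int = 128, hidden_size: int = 768):
        super().__init__()
        self.pos_emb = nn.Embedding(max_sentences, hidden_size)

    def forward(self, sent_vecs: torch.Tensor) -> torch.Tensor:
        # sent_vecs: (batch, num_sentences, hidden_size), e.g. the per-sentence
        # [CLS] vectors a BERTSUM-style encoder produces.
        batch, num_sents, _ = sent_vecs.shape
        positions = torch.arange(num_sents, device=sent_vecs.device)
        positions = positions.unsqueeze(0).expand(batch, num_sents)
        # Add the document-level position signal to each sentence vector.
        return sent_vecs + self.pos_emb(positions)

class ExtractiveScorer(nn.Module):
    """Scores each position-augmented sentence for inclusion in the summary."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.doc_pos = DocPositionEmbedding(hidden_size=hidden_size)
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, sent_vecs: torch.Tensor) -> torch.Tensor:
        h = self.doc_pos(sent_vecs)
        # One selection probability per sentence: (batch, num_sentences).
        return torch.sigmoid(self.classifier(h)).squeeze(-1)
```

Under this reading, a learned embedding (rather than a fixed sinusoidal one) mirrors BERT's own treatment of token positions, lifted here from the token level to the sentence level of the document.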