
Abstractive Document Summarization via Neural Model with Joint Attention

Due to the difficulty of abstractive summarization, the great majority of past work on document summarization has been extractive. However, the recent success of the sequence-to-sequence framework has made abstractive summarization viable, and recurrent neural network models based on the attentional encoder-decoder architecture have achieved promising performance on short-text summarization tasks. Unfortunately, these attentional encoder-decoder models often suffer from two undesirable shortcomings: generating repeated words or phrases, and failing to handle out-of-vocabulary words appropriately. To address these issues, we propose to add an attention mechanism over the output sequence to avoid repetitive content, and to use a subword method to handle rare and unknown words. We applied our model to the public dataset provided by NLPCC 2017 Shared Task 3. The evaluation results show that our system achieved the best ROUGE performance among all participating teams and is also competitive with some state-of-the-art methods.
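The subword idea mentioned above can be illustrated with a minimal BPE-style segmenter. This is only a toy sketch under assumed inputs: the `merges` list here is hand-picked for illustration and is not the paper's actual learned vocabulary. The point is that a word absent from the vocabulary decomposes into known subword units instead of collapsing to a single unknown token.

```python
# Minimal sketch of subword (BPE-style) segmentation for rare/OOV words.
# The merge list below is a hypothetical toy example, not the paper's model.

def bpe_segment(word, merges):
    """Greedily apply learned merge operations, in priority order,
    to the character sequence of a word."""
    symbols = list(word)
    for a, b in merges:  # each merge fuses an adjacent symbol pair
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]  # replace the pair with its merge
            else:
                i += 1
    return symbols

# Toy merges "learned" from frequent pairs; the unseen word "lowest"
# decomposes into the subwords "low" + "est" rather than <unk>.
merges = [("l", "o"), ("lo", "w"), ("e", "s"), ("es", "t")]
print(bpe_segment("lowest", merges))  # ['low', 'est']
```

Because the decoder's output vocabulary then consists of subword units, rare and unknown words can still be generated compositionally at summarization time.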

Abstractive summarization · Attentional mechanism · Encoder-decoder framework · Neural network

Liwei Hou, Po Hu, Chao Bei


School of Computer Science, Central China Normal University, Wuhan 430079, China

Global Tone Communication Technology Co., Ltd., Beijing 100043, China

CCF International Conference on Natural Language Processing and Chinese Computing

Dalian (CN)

Natural Language Understanding and Intelligent Applications

329-338

2017