Semantic role labeling (SRL), also known as shallow semantic parsing, is an
important yet challenging task in NLP. Motivated by the close correlation
between syntactic and semantic structures, traditional discrete-feature-based
SRL approaches make heavy use of syntactic features. In contrast,
deep-neural-network-based approaches usually encode the input sentence as a
word sequence without considering the syntactic structures. In this work, we
investigate several previous approaches for encoding syntactic trees and conduct a
thorough study of whether extra syntax-aware representations are beneficial
for neural SRL models. Experiments on the benchmark CoNLL-2005 dataset show
that syntax-aware SRL approaches can effectively improve performance over a
strong baseline with external word representations from ELMo. With the extra
syntax-aware representations, our approaches achieve new state-of-the-art results of
85.6 F1 (single model) and 86.6 F1 (ensemble) on the test data, outperforming the
corresponding strong ELMo baselines by 0.8 and 1.0 F1, respectively. A detailed
error analysis is conducted to gain more insights into the investigated
approaches.
Luo Si, Min Zhang, Rui Wang, Qingrong Xia, Zhenghua Li, Guohong Fu, Meishan Zhang
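To make the general idea concrete, the sketch below (not the paper's actual implementation) shows one common way of injecting an extra syntax-aware representation into a neural SRL encoder: concatenating it with word embeddings and precomputed contextual (ELMo-style) vectors before a BiLSTM. All module names, dimensions, and the random tensors standing in for real ELMo and tree-encoder outputs are illustrative assumptions.

```python
# Minimal illustrative sketch: concatenate word, contextual, and syntax-aware
# vectors before a BiLSTM encoder. Dimensions and names are assumptions, not
# taken from the paper.
import torch
import torch.nn as nn

class SyntaxAwareSRLEncoder(nn.Module):
    def __init__(self, vocab_size=10000, word_dim=100,
                 elmo_dim=1024, syntax_dim=100, hidden_dim=300):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # BiLSTM over the concatenation of word, contextual, and syntax vectors.
        self.encoder = nn.LSTM(word_dim + elmo_dim + syntax_dim,
                               hidden_dim, num_layers=2,
                               batch_first=True, bidirectional=True)

    def forward(self, word_ids, elmo_repr, syntax_repr):
        # word_ids:    (batch, seq_len)           token indices
        # elmo_repr:   (batch, seq_len, elmo_dim) precomputed contextual vectors
        # syntax_repr: (batch, seq_len, syntax_dim) e.g., from a tree/dependency encoder
        x = torch.cat([self.word_emb(word_ids), elmo_repr, syntax_repr], dim=-1)
        hidden, _ = self.encoder(x)
        return hidden  # (batch, seq_len, 2 * hidden_dim), fed to an SRL scorer

# Toy usage with random tensors in place of real ELMo and syntax encodings.
batch, seq_len = 2, 5
enc = SyntaxAwareSRLEncoder()
out = enc(torch.randint(0, 10000, (batch, seq_len)),
          torch.randn(batch, seq_len, 1024),
          torch.randn(batch, seq_len, 100))
print(out.shape)  # torch.Size([2, 5, 600])
```

Setting syntax_dim to zero (i.e., dropping the third input) recovers the syntax-agnostic baseline that encodes the sentence as a plain word sequence.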