FAST: Improving Controllability for Text Generation with Feedback Aware
Self-Training
Original link: arXiv
Controllable text generation systems often leverage control codes to direct
various properties of the output like style and length. Inspired by recent work
on causal inference for NLP, this paper reveals a previously overlooked flaw in
these control code-based conditional text generation algorithms. Spurious
correlations in the training data can lead models to incorrectly rely on parts
of the input other than the control code for attribute selection, significantly
undermining downstream generation quality and controllability. We demonstrate
the severity of this issue with a series of case studies and then propose two
simple techniques to reduce these correlations in training sets. The first
technique is based on resampling the data according to an example's propensity
towards each linguistic attribute (IPS). The second produces multiple
counterfactual versions of each example and then uses an additional feedback
mechanism to remove noisy examples (feedback aware self-training, FAST). We
evaluate on 3 tasks -- news headline, meta review, and search ads generation --
and demonstrate that FAST can significantly improve the controllability and
language quality of generated outputs when compared to state-of-the-art
controllable text generation approaches.
Konstantin Golobokov、Reid Pryzant、Yi Liu、Chenguang Zhu、Victor Ye Dong、Junyi Chai
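As a rough illustration of the two debiasing ideas described in the abstract, the sketch below renders IPS-based resampling and FAST-style counterfactual augmentation with feedback filtering in minimal Python. The helpers `propensity`, `generator`, and `attribute_of` are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the two techniques named in the abstract.
# All callables passed in (propensity, generator, attribute_of) are assumed
# interfaces for illustration only.

import random
from typing import Callable, List, Tuple

Example = Tuple[str, str, str]  # (input_text, control_code, target_text)


def ips_resample(data: List[Example],
                 propensity: Callable[[str, str], float],
                 seed: int = 0) -> List[Example]:
    """Resample the training set with weights proportional to 1 / propensity.

    `propensity(input_text, control_code)` is assumed to estimate how likely
    the attribute is given the input alone; examples whose attribute is highly
    predictable from the input are down-weighted, weakening the spurious
    input-attribute correlation.
    """
    rng = random.Random(seed)
    weights = [1.0 / max(propensity(x, c), 1e-6) for x, c, _ in data]
    return rng.choices(data, weights=weights, k=len(data))


def fast_augment(data: List[Example],
                 generator: Callable[[str, str], str],
                 attribute_of: Callable[[str], str],
                 control_codes: List[str]) -> List[Example]:
    """Feedback-aware self-training (FAST), as sketched from the abstract.

    For each example, generate a counterfactual output under every control
    code, then keep only the generations whose output actually exhibits the
    requested attribute according to a feedback classifier (`attribute_of`).
    Surviving counterfactuals are added back to the training data.
    """
    augmented: List[Example] = list(data)
    for x, _, _ in data:
        for code in control_codes:
            y_hat = generator(x, code)       # counterfactual generation
            if attribute_of(y_hat) == code:  # feedback filter removes noisy pairs
                augmented.append((x, code, y_hat))
    return augmented
```

The feedback filter is what separates FAST from plain counterfactual augmentation: generated outputs that fail the attribute check are discarded rather than kept as noisy supervision.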