Research Output
Improving the Naturalness and Diversity of Referring Expression Generation models using Minimum Risk Training
  In this paper, we consider the problem of optimizing neural Referring Expression Generation (REG) models with sequence-level objectives. Recently, reinforcement learning (RL) techniques have been adopted to train deep end-to-end systems to directly optimize sequence-level objectives. However, there are two issues associated with RL training: (1) effectively applying RL is challenging, and (2) the generated sentences lack diversity and naturalness due to deficiencies in the generated word distribution, a smaller vocabulary size, and repetition of frequent words or phrases. To alleviate these issues, we propose a novel strategy for training REG models that combines minimum risk training (MRT) with maximum likelihood estimation (MLE), and we show that our approach outperforms RL with respect to the naturalness and diversity of the output. Specifically, our approach achieves an increase in CIDEr scores of between 23% and 57% on two datasets. We further demonstrate the robustness of the proposed method through a detailed comparison with different REG models.
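
  As a rough illustration of the training objective described above, the sketch below computes a minimum-risk loss over a set of sampled referring expressions in PyTorch. The names mrt_loss, sample_log_probs, and alpha, the smoothing exponent, and the use of 1 - CIDEr as the per-sample risk are illustrative assumptions rather than details taken from the paper, which combines an MRT objective of this kind with an MLE term.

    import torch
    import torch.nn.functional as F

    def mrt_loss(sample_log_probs, risks, alpha=5e-3):
        # sample_log_probs: (num_samples,) summed log-probability of each
        #   sampled referring expression under the model.
        # risks: (num_samples,) task risk per sample, e.g. 1 - CIDEr
        #   against the reference expression (an assumed choice of risk).
        # alpha: smoothing exponent sharpening or flattening the
        #   renormalized distribution (a common MRT hyperparameter).
        #
        # Renormalize the scaled log-probabilities over the sampled
        # candidates to approximate the model distribution on this set.
        q = F.softmax(alpha * sample_log_probs, dim=0)
        # Expected risk under q; minimizing it moves probability mass
        # toward low-risk (high-CIDEr) expressions.
        return torch.sum(q * risks)

    # Hypothetical usage with five sampled expressions:
    log_probs = torch.tensor([-12.3, -9.8, -15.1, -11.0, -10.2])
    risks = torch.tensor([0.6, 0.2, 0.9, 0.4, 0.3])
    loss = mrt_loss(log_probs, risks)  # scalar expected risk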

  • Date:

    31 December 2020

  • Publication Status:

    Published

  • Funders:

    Economic and Social Research Council

Citation

Panagiaris, N., Hart, E., & Gkatzia, D. (2020). Improving the Naturalness and Diversity of Referring Expression Generation models using Minimum Risk Training. In Proceedings of the 13th International Conference on Natural Language Generation (pp. 41-51).
