LG-Eval: A Toolkit for Creating Online Language Evaluation Experiments
In this paper we describe the LG-Eval toolkit for creating online language evaluation experiments. LG-Eval is the direct result of our work setting up and carrying out the human evaluation experiments in several of the Generation Challenges shared tasks. It provides tools for creating experiments with different kinds of rating tools, allocating items to evaluators, and collecting the evaluation scores.
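The allocation step mentioned in the abstract can be pictured with a small, purely illustrative sketch; this is not LG-Eval's actual API, and the function and names below are hypothetical. It rotates a shared item set so that each evaluator sees every item but in a different starting position, in the spirit of the balanced (Latin-square-style) designs commonly used in human evaluation experiments.

```python
from typing import Dict, List

def allocate_items(items: List[str], evaluators: List[str]) -> Dict[str, List[str]]:
    """Assign every item to every evaluator, rotating the presentation
    order per evaluator so that order effects are balanced across the panel.
    Hypothetical helper for illustration only, not part of LG-Eval."""
    allocation: Dict[str, List[str]] = {}
    n = len(items)
    for offset, evaluator in enumerate(evaluators):
        # Each evaluator sees the full item set, starting at a different offset.
        allocation[evaluator] = [items[(offset + i) % n] for i in range(n)]
    return allocation

if __name__ == "__main__":
    items = ["item-01", "item-02", "item-03", "item-04"]
    evaluators = ["eval-A", "eval-B", "eval-C", "eval-D"]
    for evaluator, ordered in allocate_items(items, evaluators).items():
        print(evaluator, ordered)
```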
Natural Language Generation; Evaluation Methods; Evaluation Resources
Eric Kow, Anja Belz
School of Computing, Engineering and Mathematics, University of Brighton, Brighton BN2 4GJ, UK
8th International Conference on Language Resources and Evaluation (LREC 2012)
Istanbul, Turkey