Research on the Construction of an Interpretable Automatic Evaluation System for Interpreting Teaching from the Perspective of the New Liberal Arts
With the continuous advancement of globalization, interpreting teaching plays an increasingly important role in the field of the new liberal arts. To improve the efficiency and quality of interpreting teaching, automatic evaluation systems have been introduced into the classroom. However, most current automated evaluation systems suffer from black-box modeling, opaque evaluation results, and unclear scoring criteria in both their working principles and the interpretation of their results. From the perspective of new liberal arts translation studies, this study analyzes the elements of interpreting tasks and designs an interpretable automatic evaluation system for interpreting teaching. Drawing on the theory of explainable artificial intelligence, corresponding optimization strategies are proposed to improve the transparency, credibility, and accuracy of the system.
explainable artificial intelligence; new liberal arts translation studies; interpreting teaching; automatic evaluation system
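As a minimal sketch of the kind of transparency the study argues for, the example below shows a rubric-weighted scoring function whose criteria, weights, and per-dimension contributions are fully visible to teachers and students. The dimension names, weights, and function names are illustrative assumptions only, not the scoring criteria of the system described in this paper.

```python
# Minimal sketch of a transparent, rubric-weighted interpreting score.
# All dimension names, weights, and example values are illustrative
# assumptions, not the actual criteria of the system in this paper.
from dataclasses import dataclass


@dataclass
class ScoreReport:
    total: float      # weighted overall score on a 0-100 scale
    breakdown: dict   # per-dimension contribution, shown to the learner


# Hypothetical rubric: each dimension is scored 0-100 by upstream modules,
# then combined here with weights that are published to teachers and
# students in advance.
RUBRIC_WEIGHTS = {
    "fidelity": 0.4,      # faithfulness of the rendition to the source speech
    "fluency": 0.3,       # pauses, fillers, speech rate
    "terminology": 0.2,   # correct use of domain terms
    "delivery": 0.1,      # audibility, intonation, self-corrections
}


def score_interpretation(dimension_scores: dict) -> ScoreReport:
    """Combine per-dimension scores into one grade with a visible breakdown."""
    breakdown = {
        dim: round(dimension_scores.get(dim, 0.0) * weight, 2)
        for dim, weight in RUBRIC_WEIGHTS.items()
    }
    return ScoreReport(total=round(sum(breakdown.values()), 2), breakdown=breakdown)


if __name__ == "__main__":
    report = score_interpretation(
        {"fidelity": 82, "fluency": 75, "terminology": 90, "delivery": 70}
    )
    # The per-dimension breakdown is what makes the result explainable:
    # a student can see exactly which criterion lowered the grade.
    print(report.total, report.breakdown)
```

Because every weight and per-dimension contribution is exposed, such a scoring step can be audited against a published rubric, in contrast to a black-box grader that returns only a single opaque number.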