Knowledge tracing via reinforcement of concept representation
Knowledge tracing models mainly follow the supervised learning paradigm to model the probability distribution of answers given question information, which cannot adjust the model immediately based on new question information and ultimately limits prediction performance. To address this issue, this paper proposes a knowledge tracing model that enhances knowledge representation by integrating the reinforcement learning paradigm. The model consists of three parts: a basic network, a value network, and a policy network. The basic network models the representations of questions and knowledge points, the value network calculates the value of questions and the temporal-difference error, and the policy network optimizes the prediction results. Experiments against five baseline models on three datasets demonstrate that the proposed model excels in terms of AUC and ACC, especially on the ASSISTments2009 dataset, where AUC improves by 6.83% to 14.34% and ACC by 11.39% to 19.74%. Furthermore, the quality of the model's representations improves by 2.59% compared with the baseline models, and ablation experiments confirm the effectiveness of the reinforcement learning framework. Finally, applying the proposed model to learning behavior data from three real courses shows its practical usability relative to the baseline models.
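To make the three-part design concrete, the following is a minimal PyTorch-style sketch of the architecture named in the abstract: a basic network producing question and knowledge-point representations, a value network whose outputs form a temporal-difference error, and a policy network producing the correctness prediction. All class names, dimensions, the GRU encoder, and the use of answer correctness as the reward signal are illustrative assumptions; the abstract does not specify these details.

```python
# Hypothetical sketch of the three components described in the abstract.
# Names, dimensions, and wiring are assumptions for illustration only.
import torch
import torch.nn as nn


class BasicNetwork(nn.Module):
    """Models representations of questions and knowledge points (assumed: embeddings + GRU)."""

    def __init__(self, num_questions, num_concepts, dim=64):
        super().__init__()
        self.q_emb = nn.Embedding(num_questions, dim)   # question embeddings
        self.c_emb = nn.Embedding(num_concepts, dim)    # knowledge-point embeddings
        self.rnn = nn.GRU(2 * dim, dim, batch_first=True)

    def forward(self, q_ids, c_ids):
        x = torch.cat([self.q_emb(q_ids), self.c_emb(c_ids)], dim=-1)
        h, _ = self.rnn(x)                              # knowledge state per interaction
        return h


class ValueNetwork(nn.Module):
    """Estimates the value of the current state; used to form the temporal-difference error."""

    def __init__(self, dim=64):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, state):
        return self.head(state).squeeze(-1)


class PolicyNetwork(nn.Module):
    """Outputs the probability that the learner answers the question correctly."""

    def __init__(self, dim=64):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, state):
        return torch.sigmoid(self.head(state)).squeeze(-1)


def td_error(values, rewards, gamma=0.9):
    """One-step TD error: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    return rewards[:, :-1] + gamma * values[:, 1:] - values[:, :-1]


# Minimal usage example on random data (2 learners, 10 interactions each).
basic, value_net, policy = BasicNetwork(100, 20), ValueNetwork(), PolicyNetwork()
q = torch.randint(0, 100, (2, 10))
c = torch.randint(0, 20, (2, 10))
answers = torch.randint(0, 2, (2, 10)).float()          # 1 = correct; assumed reward signal

state = basic(q, c)
pred = policy(state)                                    # predicted correctness probabilities
delta = td_error(value_net(state), answers)             # TD error guiding representation updates
loss = nn.functional.binary_cross_entropy(pred, answers) + delta.pow(2).mean()
loss.backward()
```

In this sketch the prediction loss and the squared TD error are simply summed; how the paper actually combines the value and policy objectives to refine the representations is not stated in the abstract.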