Tracking learners' mastery of knowledge is an important research direction in smart education. Traditional deep knowledge tracing methods focus mainly on recurrent neural networks, which suffer from a lack of interpretability and difficulty with long-sequence dependencies. Moreover, many methods ignore the influence of learner characteristics and exercise features on experimental results. To address these problems, a cross-attention knowledge tracing model integrating exercise feature information is proposed. The model combines knowledge-point and exercise feature information into an exercise feature embedding module, and improves the attention mechanism according to learners' responses to obtain a dual-attention module. Considering learners' actual answering behavior, a guess-slip module based on the attention mechanism is introduced. First, exercise feature information is fed into the model, and the exercise feature embedding module produces learner responses fused with exercise feature information; then, the guess-slip module derives the learners' true responses; finally, the prediction module outputs the probability that a learner answers the next exercise correctly. Experimental results show that, compared with the traditional deep knowledge tracing (DKT) model, the proposed model improves the area under the ROC curve (AUC) and prediction accuracy (ACC) by 3.13% and 3.44%, respectively, handles long-sequence dependencies well, and offers better interpretability and predictive performance.
Cross-attention knowledge tracing model integrating exercise feature information
Tracing learners' mastery of knowledge is a pivotal research direction in smart education. Traditional deep knowledge tracing methods predominantly rely on recurrent neural networks, which face challenges such as a lack of interpretability and difficulty handling long-sequence dependencies. Additionally, many methods overlook the influence of learner characteristics and exercise features on experimental results. To address these issues, a cross-attention knowledge tracing model integrating exercise feature information was proposed. The model combined knowledge-point and exercise feature information to form an exercise feature embedding module; the attention mechanism was then improved based on learner responses, yielding a dual-attention module. To account for real exercise-solving behavior, a guess-slip module based on the attention mechanism was introduced. First, exercise feature information was fed into the model, and the exercise feature embedding module produced learner responses fused with exercise feature information. The guess-slip module then derived learners' true responses. Finally, the prediction module output the probability of a learner answering the next exercise correctly. Experimental results demonstrate that the proposed model outperformed the traditional deep knowledge tracing (DKT) model, with a 3.13% increase in AUC and a 3.44% increase in ACC; it handles long-sequence dependencies well while exhibiting enhanced interpretability and predictive performance.
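The core operation named in the abstract is cross-attention, where a current exercise's feature embedding queries the learner's past interactions. The paper's actual module design is not given here, so the following is only a minimal NumPy sketch of scaled dot-product cross-attention; the dimensions, variable names, and the single-query setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q, K, V):
    """Scaled dot-product cross-attention: queries attend over keys/values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_kv) similarity scores
    weights = softmax(scores, axis=-1)   # each query's weights sum to 1
    return weights @ V, weights          # weighted sum of past responses

rng = np.random.default_rng(0)
n_past, d = 5, 8                  # hypothetical: 5 past interactions, dim 8
Q = rng.normal(size=(1, d))       # current exercise feature embedding (query)
K = rng.normal(size=(n_past, d))  # past exercise feature embeddings (keys)
V = rng.normal(size=(n_past, d))  # past learner-response embeddings (values)

context, w = cross_attention(Q, K, V)
print(context.shape, w.shape)     # (1, 8) (1, 5)
```

Because the attention weights are explicit, one can inspect which past interactions most influenced a prediction, which is one common source of the interpretability the abstract claims.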
Keywords: smart education; knowledge tracing; cross-attention mechanism; recurrent neural network; exercise feature information
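The guess-slip module in the abstract is attention-based and its internals are not specified; as background only, the classical guess/slip correction from knowledge tracing relates mastery probability to observed correctness. The function and parameter values below are a hypothetical sketch of that standard formula, not the paper's module.

```python
def observed_correct_prob(p_mastery, guess, slip):
    """Classical guess/slip correction: a learner answers correctly either by
    mastering the skill and not slipping, or by guessing without mastery."""
    return p_mastery * (1 - slip) + (1 - p_mastery) * guess

# Hypothetical values: 70% mastery, 20% guess rate, 10% slip rate
p = observed_correct_prob(0.7, guess=0.2, slip=0.1)
print(round(p, 2))  # 0.7*0.9 + 0.3*0.2 = 0.69
```

Inverting this relationship is what lets a model recover a learner's "true response" from noisy observed answers, which matches the role the abstract assigns to its guess-slip module.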