Abstract
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News -- Researchers detail new data in intelligent systems. According to news originating from Zhangjiakou, People's Republic of China, by NewsRx correspondents, the research stated, "Analyzing online learning behavior helps to understand students' progress, difficulties, and needs during the learning process, making it easier for teachers to provide timely feedback and personalized guidance." The news reporters obtained a quote from the research from Hebei University of Architecture: "However, the classroom behavior (CB) of online teaching is complex and variable, and with traditional classroom supervision methods, teachers find it difficult to pay comprehensive attention to the learning behavior of each student. To address this, a dual-stream network was designed to capture and analyze CB by integrating the AlphaPose human keypoint detection method with an image data method. The experimental results show that when the learning rate of the model parameters is set to 0.001, the accuracy of the model reaches 92.3%; when the batch size is 8, the accuracy of the model reaches 90.8%. The fusion model's accuracy in capturing upright sitting behavior reached 97.3%, but its accuracy in capturing hand-raising behavior fell to only 74.8%. The fusion model performs well in terms of accuracy and recall, with recall rates of 88.3%, 86.2%, and 85.1% for capturing standing up, hand raising, and upright sitting behaviors, respectively."
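The per-class recall figures quoted above follow the standard definition: for each behavior class, recall is the number of correctly detected instances divided by the total number of actual instances of that class. A minimal sketch of that computation is below; the labels and predictions are hypothetical examples, not data from the study.

```python
from collections import Counter

def per_class_recall(y_true, y_pred):
    """Recall per class: true positives / actual positives for each label."""
    tp = Counter()                 # correct detections per class
    actual = Counter(y_true)       # ground-truth count per class
    for truth, pred in zip(y_true, y_pred):
        if truth == pred:
            tp[truth] += 1
    return {cls: tp[cls] / actual[cls] for cls in actual}

# Hypothetical example using the three behaviors from the study:
# sitting upright, raising hands, standing up
y_true = ["sit", "sit", "raise", "stand", "raise", "stand"]
y_pred = ["sit", "raise", "raise", "stand", "raise", "sit"]
print(per_class_recall(y_true, y_pred))
# → {'sit': 0.5, 'raise': 1.0, 'stand': 0.5}
```

In practice, a behavior-recognition pipeline like the one described would aggregate these counts over a labeled test set per behavior class, which is how per-class recall can drop for one behavior (e.g. hand raising) while remaining high for another.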