Study Results from Wuhan University of Technology Update Understanding of Machine Learning (Machine Learning-based Multimodal Fusion Recognition of Passenger Ship Seafarers' Workload: a Case Study of a Real Navigation Experiment)
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News - A new study on Machine Learning is now available. According to news originating from Wuhan, People's Republic of China, by NewsRx correspondents, the research stated, "Passenger ships have complex transportation systems and seafarers face high workloads, making them susceptible to serious injuries and fatalities in the event of accidents. Existing unimodal workload recognition for seafarers mainly focuses on fixed load induction in bridge simulators, whereas a multimodal approach using multi-sensor data fusion can overcome the reliability and sensitivity limitations of a single sensor."

Funders for this research include the National Key R&D Program of China and the National Natural Science Foundation of China (NSFC).

Our news journalists obtained a quote from the research from the Wuhan University of Technology: "To accurately identify the workload of seafarers, we propose a machine learning-based multimodal fusion method at the feature layer and utilise the Gini index to determine the feature weight of the multimodal data. Through a real ship navigation experiment, the subjective workload assessment technique (SWAT) was employed to collect the continuous workload scores of 24 seafarers in daily tasks. Further, the Dempster-Shafer evidence theory was used to integrate these scores with the unsafe behaviour probability of seafarers to obtain a calibrated workload. Electroencephalogram (EEG), electrocardiogram (ECG), and electrodermal activity (EDA) signals were collected in real time, and a high-dimensional feature matrix was extracted to construct the workload recognition model. Random forest, XGBoost, and backpropagation neural networks were used to establish multimodal fusion workload recognition models at the feature-fusion stage, and the model performances were compared. The results showed that the multimodal fusion based on EEG, ECG, and EDA had an excellent recognition accuracy. The XGBoost algorithm had better performance, with an accuracy of 85.72%, an increment of 9.49% over the unimodal algorithm, and this improvement passed the statistical significance test. Important features suitable for multimodal fusion recognition were also analysed."
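To illustrate the Dempster-Shafer calibration step mentioned in the quote, the following is a minimal sketch of Dempster's rule of combination applied to two evidence sources: a SWAT-derived workload assessment and an unsafe-behaviour probability. The three-level frame of discernment, the mass assignments, and the helper names are illustrative assumptions; the study's actual discretisation and evidence construction are not published in this news item.

```python
"""Minimal Dempster-Shafer combination sketch (assumed three-level workload frame)."""
from itertools import product


def combine_ds(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for mass functions keyed by frozenset hypotheses."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence sources cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}


# Hypothetical evidence over workload levels (not the paper's values).
LOW, MED, HIGH = frozenset({"low"}), frozenset({"medium"}), frozenset({"high"})
swat_evidence = {LOW: 0.2, MED: 0.5, HIGH: 0.3}        # from a normalised SWAT score
behaviour_evidence = {LOW: 0.1, MED: 0.3, HIGH: 0.6}   # from unsafe-behaviour probability

calibrated = combine_ds(swat_evidence, behaviour_evidence)
print(max(calibrated, key=calibrated.get))  # calibrated workload level
```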
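The feature-layer fusion and Gini-weighted classification described in the quote could look roughly like the sketch below: per-modality feature matrices are concatenated, weighted by Gini importance from a random forest, and classified with XGBoost. The array shapes, the random placeholder data, the weighting scheme, and all hyperparameters are assumptions for illustration, not the paper's configuration.

```python
"""Feature-layer fusion sketch: EEG + ECG + EDA features, Gini weights, XGBoost classifier."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_samples = 600

# Stand-ins for per-modality feature matrices extracted from the raw signals.
eeg_features = rng.normal(size=(n_samples, 32))   # e.g. band powers per channel
ecg_features = rng.normal(size=(n_samples, 12))   # e.g. HRV time/frequency features
eda_features = rng.normal(size=(n_samples, 6))    # e.g. tonic/phasic descriptors
labels = rng.integers(0, 3, size=n_samples)       # discretised workload levels

# Feature-layer fusion: concatenate modalities into one high-dimensional matrix.
X = np.hstack([eeg_features, ecg_features, eda_features])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels
)

# Gini-index feature weights from a random forest (criterion="gini" by default).
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
weights = forest.feature_importances_

# Weight each fused feature by its Gini importance before the final classifier.
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="mlogloss")
clf.fit(X_train * weights, y_train)

pred = clf.predict(X_test * weights)
print(f"multimodal fusion accuracy: {accuracy_score(y_test, pred):.3f}")
```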
Keywords for this news article include: Wuhan, People's Republic of China, Asia, Cyborgs, Emerging Technologies, Machine Learning, Wuhan University of Technology.