Research on a Co-Evolution Method for Multi-Terminal Video Stream Intelligent Recognition Models
Developing Artificial Intelligence of Things (AIoT) technology and advancing the construction of a ubiquitous computing digital infrastructure are important directions. To overcome the privacy issues of cloud computing and meet the needs of low-latency applications, deploying deep models on ubiquitous intelligent IoT terminals to provide intelligent applications and services has attracted increasing attention. However, terminal deployment of deep models faces many challenges. Constrained by the limited resources of terminal hardware platforms, researchers have turned to model compression techniques and hardware accelerators to support lightweight, high-quality deployment of deep models. Nevertheless, video applications based on deep models inevitably encounter data drift in real mobile scenes, and the problem is especially pronounced on mobile devices because of more severe distribution fluctuations and sparser network structures. Under data drift, the accuracy of a deep model degrades significantly, making it difficult to meet performance requirements. Edge-assisted online model evolution is an effective way to address data drift and can realize an intelligent computing system that evolves and grows. Previous model evolution systems focus only on improving the accuracy of the terminal model; in a multi-terminal system, however, the global model is also affected by data drift because the scenario data from different terminals are more complex and varied, which reduces the accuracy gain of the system. To provide stable and reliable knowledge transfer to the terminal models, it is necessary to use federated learning to evolve the global model. Yet traditional federated learning faces the combined challenges of terminal-model heterogeneity and data-distribution heterogeneity in multi-model evolution systems. Moreover, the speed of online evolution determines the proportion of time during which terminal models deliver high-accuracy service, and thus their life-cycle performance. To jointly improve the accuracy and speed of model evolution for multiple terminal models, this paper proposes a method and system for the co-evolution of multi-terminal video stream intelligent recognition models based on the concept of software-hardware integration. On the one hand, we develop a novel multi-terminal mutual learning and co-evolution method, which overcomes the challenge of model and data heterogeneity with the help of new terminal scene data and realizes high-accuracy-gain co-evolution of multiple terminal models and the global model. On the other hand, drawing on the characteristics of the mutual learning algorithm, we propose a training acceleration method based on in-memory computing, which uses adaptive data compression and model training optimization to improve hardware performance and accelerates the evolution of multiple terminal models while preserving the accuracy gain of evolution. Finally, experiments on continuous evolution tasks for lightweight models in different real mobile scenarios, compared against six benchmark methods, show that NestEvo reduces evolution delay by 51.98% and improves the average inference accuracy of lightweight terminal models by 42.6%.
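The abstract does not specify how data drift is detected before evolution is triggered; a common approach, shown here as a minimal sketch, is to compare the distribution of the terminal model's recent prediction confidences against a reference window and flag drift when the divergence exceeds a threshold. The function names, bin count, and threshold below are illustrative assumptions, not part of the paper's method.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) for two discrete distributions given as (unnormalized) histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def detect_drift(reference_scores, live_scores, bins=10, threshold=0.1):
    """Flag drift when the histogram of live prediction confidences diverges
    from the reference window beyond a threshold (values are assumptions)."""
    lo = min(reference_scores.min(), live_scores.min())
    hi = max(reference_scores.max(), live_scores.max())
    ref_hist, _ = np.histogram(reference_scores, bins=bins, range=(lo, hi))
    live_hist, _ = np.histogram(live_scores, bins=bins, range=(lo, hi))
    return kl_divergence(live_hist, ref_hist) > threshold
```

In an edge-assisted evolution system, a positive detection would trigger uploading recent frames to the edge for retraining rather than retraining continuously.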
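The mutual learning component mentioned above is not detailed in the abstract; the sketch below illustrates the standard deep mutual learning objective that such methods typically build on, where each model is trained with a cross-entropy loss plus a KL term pulling it toward a peer's softened predictions. The temperature, mixing weight, and function names are assumptions for illustration only.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / t
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mutual_learning_loss(logits_a, logits_b, labels, temperature=2.0, alpha=0.5):
    """Loss for model A: cross-entropy on ground-truth labels plus
    KL(peer B's softened predictions || A's softened predictions)."""
    p_a = softmax(logits_a)
    ce = -np.mean(np.log(p_a[np.arange(len(labels)), labels] + 1e-12))
    pa_t = softmax(logits_a, temperature)
    pb_t = softmax(logits_b, temperature)
    kl = np.mean(np.sum(pb_t * np.log((pb_t + 1e-12) / (pa_t + 1e-12)), axis=-1))
    # temperature**2 rescales the KL gradient magnitude, as in standard distillation.
    return (1 - alpha) * ce + alpha * (temperature ** 2) * kl
```

In a co-evolution setting, each terminal model and the global model would compute such a loss against the other's outputs on the new scene data, so knowledge flows in both directions.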
Keywords: data drift; model evolution; mutual learning; training acceleration scheme; in-memory computing; Artificial Intelligence of Things