Research on a broad learning system based on dynamic sparse training
[Objective] In sparse broad learning systems, the changing importance of output weights is overlooked: some weights that are unimportant in the early stages of training become important later, and once pruned they are difficult to recover. Inspired by dynamic sparse training in neural networks, this paper proposes a broad learning system based on dynamic sparse training that compensates for pruning errors during training and improves overall model performance while maintaining model sparsity.

[Methods] The proposed system adds an exponential-power regularization term that constrains the output weight thresholds to the objective function of the standard broad learning system and introduces a weight mask into the error term of the loss function. A threshold is attached to each output weight, and an output weight mask that controls the model structure is generated jointly from the weights and their thresholds. The thresholds are adjusted dynamically during training so that output weights are pruned or restored as their importance changes, and optimal network parameters and a sparse network structure are sought through joint training of the output weights and their thresholds. Because the mask sparsifies the model indirectly while the output weights themselves are retained, the system can reach an optimal balance between network structure and accuracy through dynamic training and reduce the performance loss caused by incorrectly pruned weights. The objective function is solved with the alternating direction method of multipliers (ADMM).

[Results] To verify the effectiveness of the broad learning system based on dynamic sparse training (BLSDST), simulations were conducted on six UCI public datasets, and its performance was compared with those of the broad learning system (BLS) and the lasso broad learning system (L1BLS). The results indicate that, by constraining the weight thresholds through the regularization term, BLSDST achieves a balance between model accuracy and sparsity: it reduces model complexity without sacrificing accuracy while compensating for the effect of pruning on model performance. The largest accuracy improvement, approximately 30.12%, is obtained on the 'BUTCSP' dataset.

[Conclusions] Experimental results show that the proposed system achieves dynamic model sparsity without reducing model performance, and in some cases even improves it.
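The abstract states the mechanism (per-weight thresholds, a mask generated jointly from weights and thresholds, and prune/restore driven by weight importance) but not the update equations. The following Python sketch is a minimal illustration of that mechanism under stated assumptions: plain gradient steps replace the paper's ADMM solver, the exponential regularizer form is assumed, and all names (A, Y, W, T) and hyperparameters are illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for the BLS feature layer
# (mapped-feature and enhancement-node outputs concatenated).
n_samples, n_features, n_outputs = 200, 50, 1
A = rng.standard_normal((n_samples, n_features))
true_w = np.where(rng.random((n_features, n_outputs)) < 0.2,
                  rng.standard_normal((n_features, n_outputs)), 0.0)
Y = A @ true_w + 0.01 * rng.standard_normal((n_samples, n_outputs))

W = 0.01 * rng.standard_normal((n_features, n_outputs))  # output weights
T = np.full_like(W, 0.05)   # per-weight thresholds (assumed initial value)
lam, lr = 1e-3, 1e-2        # illustrative hyperparameters

def mask(W, T):
    # A weight is active while its magnitude exceeds its own threshold.
    # Pruned weights stay stored in W, so they can be restored later if
    # their magnitude grows back past the threshold.
    return (np.abs(W) > T).astype(W.dtype)

for step in range(3000):
    M = mask(W, T)
    residual = A @ (M * W) - Y            # error term uses masked weights
    g = A.T @ residual / n_samples        # gradient w.r.t. effective weights
    # Active weights get the full gradient; pruned weights get a damped
    # straight-through gradient so they can recover (dynamic prune/restore).
    W -= lr * (M * g + 0.1 * (1.0 - M) * g)
    # Assumed exponential regularizer: slowly raises thresholds, favouring
    # sparsity; weights that matter outgrow their thresholds regardless.
    T += lr * lam * np.exp(-T)

M = mask(W, T)
print(f"active weights: {int(M.sum())}/{M.size}, "
      f"train MSE: {float(np.mean((A @ (M * W) - Y) ** 2)):.4f}")
```

Keeping pruned weights stored in W and sparsifying only through the mask is what makes recovery cheap in this scheme: restoring a weight is a mask flip rather than a re-initialization, which is the behavior the abstract credits with reducing the cost of incorrect pruning.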