Application of Reinforcement Learning in Cooperative Spectrum Sensing
As the number of nodes grows, multi-node Cooperative Spectrum Sensing (CSS) generates a large amount of local data, leading to high energy consumption and global decision delay. To address this problem, a Reinforcement Learning (RL) algorithm based on Node Evaluation Selection (NES) and Grid Search (GS) is proposed. First, the NES algorithm updates the trust values of cooperating users in real time at the Fusion Center (FC), ranks them, and, according to a set threshold, prevents Malicious Users (MU) from participating in CSS. Then, the processed data are labeled through a GS-based RL mechanism, which searches all possible parameter combinations using the Signal-to-Noise Ratio (SNR) and trust value as input parameters. When the same environment parameters recur, the FC can call the corresponding nodes directly without repeating the sensing operation; if a new user joins, the parameter range is adjusted and re-searched, and the new user can imitate the RL experience of other users to gain channel access more quickly. Simulation results show that, compared with other algorithms, the proposed method improves the detection probability, reduces energy consumption, shortens repeated-computation time, and alleviates the global decision delay.
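The following is a minimal Python sketch of the two stages outlined above: an NES-style trust update with threshold filtering at the FC, and a GS pass over (SNR, trust-threshold) combinations whose results are cached so that a recurring environment can reuse the stored node selection. The agreement-based trust update, the majority-vote fusion rule, and all function names and parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import itertools
import numpy as np

# --- Node Evaluation Selection (NES): trust-value maintenance at the FC ---
# Assumed rule: trust rises when a user's local decision matches the fused
# global decision and falls otherwise; users below a threshold are excluded.

def update_trust(trust, local_decisions, global_decision, step=0.1):
    """Raise trust for users agreeing with the global decision, lower it otherwise."""
    agree = (local_decisions == global_decision)
    return np.clip(trust + np.where(agree, step, -step), 0.0, 1.0)

def select_users(trust, threshold=0.5):
    """Rank users by trust and keep only those above the threshold (MUs are dropped)."""
    order = np.argsort(trust)[::-1]            # sort by trust, highest first
    return [int(i) for i in order if trust[i] > threshold]

# --- Grid Search (GS) over (SNR, trust-threshold) combinations with caching ---
# Each sensed environment is labeled by its parameter combination; if the same
# combination recurs, the stored node selection is reused instead of re-sensing.

param_grid = {
    "snr_db": [-20, -15, -10, -5],
    "trust_threshold": [0.4, 0.5, 0.6],
}
experience = {}                                 # (snr_db, trust_threshold) -> selected users

def sense_or_reuse(snr_db, trust, trust_threshold):
    key = (snr_db, trust_threshold)
    if key in experience:                       # same environment: call stored nodes directly
        return experience[key]
    selected = select_users(trust, trust_threshold)
    experience[key] = selected                  # cache for reuse / imitation by new users
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_users = 8
    trust = np.full(n_users, 0.5)
    for _ in range(20):                         # simulate repeated sensing rounds
        local = rng.integers(0, 2, n_users)     # toy local decisions (0 = idle, 1 = busy)
        global_dec = int(local.mean() >= 0.5)   # majority-vote fusion at the FC
        trust = update_trust(trust, local, global_dec)
    for snr_db, thr in itertools.product(param_grid["snr_db"], param_grid["trust_threshold"]):
        print(snr_db, thr, sense_or_reuse(snr_db, trust, thr))
```

In this sketch, the experience dictionary stands in for the labeled parameter store: a new user joining under an already-searched combination simply reads the cached selection, which is what allows it to imitate the RL experience of earlier users rather than triggering a fresh search.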