Random K-nearest neighbor algorithm with a learning process
The traditional KNN (K-nearest neighbor) algorithm is a classic machine learning algorithm. It has no learning process, must traverse all training samples at classification time, and is both time-consuming and sensitive to the value of k. This paper proposes two random KNN (RKNN) algorithms with a learning process: the SRKNN algorithm, based on Bootstrap sampling of the samples, and the ARKNN algorithm, based on Bootstrap sampling of the sample features. Both belong to Bagging ensemble learning: multiple simple KNNs are learned, and the final result is produced by voting. The algorithm linearly combines the features of the samples into a combined feature, and each simple KNN is built on that combined feature. The paper focuses on how to select the optimal combination coefficients of the features, and derives the selection rules and formulas for the optimal combined feature that yield the best classification accuracy. Because the RKNN algorithm introduces learning when constructing a simple KNN, classification no longer requires traversing all training samples; a binary search suffices, so the classification time complexity is an order of magnitude lower than that of the traditional KNN algorithm. The classification accuracy of the RKNN algorithm is also significantly higher than that of the traditional KNN algorithm, and RKNN solves the problem that the k value is difficult to select in the KNN algorithm. Both theoretical analysis and experimental results show that the proposed RKNN algorithm is an efficient improvement of the KNN algorithm.
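The ensemble scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names (`SimpleKNN`, `rknn_predict`) are invented, each member here uses *random* combination coefficients rather than the optimal coefficients the paper derives, and the bootstrap/binary-search reading of a "simple KNN" is an assumption based on the abstract.

```python
import random
import bisect
from collections import Counter

class SimpleKNN:
    """One ensemble member (hypothetical reading of the abstract):
    projects each sample onto a linear combination of its features,
    then classifies by binary search over the sorted 1-D projections."""

    def __init__(self, X, y, k=3, rng=None):
        rng = rng or random.Random()
        n_features = len(X[0])
        # Random combination coefficients; the paper instead derives
        # optimal coefficients for the best classification accuracy.
        self.w = [rng.uniform(-1, 1) for _ in range(n_features)]
        # Bootstrap sample of the training set (the Bagging step).
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        pairs = sorted((self._project(X[i]), y[i]) for i in idx)
        self.keys = [p for p, _ in pairs]      # sorted projections
        self.labels = [l for _, l in pairs]
        self.k = k

    def _project(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def predict(self, x):
        p = self._project(x)
        # O(log n) binary search instead of traversing all samples.
        i = bisect.bisect_left(self.keys, p)
        lo, hi = max(0, i - self.k), min(len(self.keys), i + self.k)
        nearest = sorted(range(lo, hi),
                         key=lambda j: abs(self.keys[j] - p))[:self.k]
        return Counter(self.labels[j] for j in nearest).most_common(1)[0][0]

def rknn_predict(members, x):
    """Majority vote across the ensemble of simple KNNs."""
    return Counter(m.predict(x) for m in members).most_common(1)[0][0]
```

A usage sketch on a toy two-cluster dataset: build, say, 25 members with independent random seeds and call `rknn_predict(members, x)` on a query point. Each member costs O(log n) per query, so the ensemble stays cheap relative to a full traversal of the training set.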