Under satellite denial conditions, the acquisition of high-precision positioning information is the foundation for unmanned aerial vehicles (UAVs) to complete their various tasks safely and reliably. Traditional image matching methods face challenges in guaranteeing security, exhibit poor positioning accuracy, and involve numerous matching constraints. Therefore, a visual positioning method based on deep feature orthorectification matching is proposed: a deep learning network extracts deep features from orthorectified UAV aerial images and commercial maps, matching relationships between them are established, and high-precision UAV position information is subsequently calculated. The impact of different factors on visual positioning accuracy is analyzed according to the visual measurement model, and offline experiments are conducted on an aerial image dataset. The experimental results demonstrate that, compared with traditional template matching based on histogram of oriented gradients (HOG) features, the proposed method improves positioning accuracy by 25%, and the positioning root mean square error (RMSE) is better than 15 m + 0.5%H (for heights below 5000 m), which shows certain engineering application value.
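The matching-and-positioning pipeline summarized above can be illustrated with a minimal sketch: a template of feature responses (standing in for the network's deep features, which the abstract does not specify) is located in a reference feature map by normalized cross-correlation, and the stated accuracy bound 15 m + 0.5%H is expressed as a helper. The function names `ncc_match` and `rmse_bound` and the use of NCC as the matching score are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def ncc_match(feat_map, feat_template):
    """Slide the template over the feature map and return the (row, col)
    offset with the highest normalized cross-correlation score.
    Stand-in for deep feature matching; NCC is an assumed score."""
    H, W = feat_map.shape
    h, w = feat_template.shape
    t = (feat_template - feat_template.mean()) / (feat_template.std() + 1e-8)
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = feat_map[r:r + h, c:c + w]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float((p * t).mean())
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

def rmse_bound(height_m):
    """Positioning RMSE bound from the abstract: 15 m + 0.5% of height H
    (stated for heights below 5000 m)."""
    return 15.0 + 0.005 * height_m

# Toy check: cut a template out of a synthetic feature map at a known
# offset and recover that offset by exhaustive NCC search.
rng = np.random.default_rng(0)
ref = rng.standard_normal((20, 20))
tpl = ref[5:13, 7:15].copy()
print(ncc_match(ref, tpl))   # recovered (row, col) offset
print(rmse_bound(1000.0))    # bound at H = 1000 m
```

The recovered pixel offset would then be converted to a geographic position using the map origin and ground sample distance, which the abstract does not detail.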