In the context of offshore wind turbine maintenance, where personnel must board the turbine platform from a vessel, an end-to-end deep learning approach is proposed to train a position estimator from RGB images of wind turbine platforms. First, an image data acquisition device capable of precise control over position and attitude is designed and built, and is used to collect training, testing, and validation image datasets with cylindrical-coordinate labels. The training dataset consists of 12 586 images, and the test dataset contains 3 198 images. The spatial labels for the training dataset cover radial distances from 2.1 to 18.9 cm, heights from 1.2 to 9.0 cm, and angles from 66° to 114°, sampled at intervals of 0.6 cm radially, 0.6 cm in height, and 1.6° in angle. A relative position estimator between the vessel and the wind turbine platform, based on the ResNet50 network, is then learned through training. Its performance is evaluated on multiple sets of test data, considering both the error distribution and robustness under mild motion blur. The analysis shows that the estimator achieves mean absolute errors of 0.10 cm in radial distance, 0.07 cm in height, and 0.28° in angle across the three degrees of freedom, and that it remains robust against slight motion blur.