Image retrieval is crucial in computer vision and has widespread applications across many fields. In patents, however, images are typically presented as line drawings. Because line drawings lack color and texture information, retrieving them remains a significant challenge. This work proposes a Transformer-based line drawing retrieval model that exploits the Transformer's long-range dependency modeling to effectively extract global features from line drawings. The model divides the input line drawing into n patches and extracts features among the patches through a self-attention mechanism. These features are then processed to obtain 100-dimensional enhanced features, and retrieval is finally performed based on the cosine similarity of the image features. Experimental results demonstrate that the Transformer-based model outperforms the convolutional neural network baselines GoogLeNet and ResNet50.
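The pipeline described above can be illustrated with a minimal PyTorch sketch: the drawing is split into patches, a Transformer encoder mixes patch features via self-attention, the pooled result is projected to a 100-dimensional retrieval feature, and gallery images are ranked by cosine similarity. The patch size, embedding width, depth, and pooling choice here are illustrative assumptions; only the 100-dimensional feature and cosine-similarity ranking come from the abstract, so this is not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LineDrawingEncoder(nn.Module):
    # Hypothetical ViT-style encoder; hyperparameters are assumptions, not the paper's values.
    def __init__(self, img_size=224, patch_size=16, dim=256, depth=6, heads=8, feat_dim=100):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # Split the grayscale drawing into n patches and linearly embed each one.
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        # Self-attention layers capture long-range relations among patches.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Project the pooled patch features to the 100-dimensional retrieval feature.
        self.head = nn.Linear(dim, feat_dim)

    def forward(self, x):                        # x: (B, 1, H, W) line drawings
        tokens = self.patch_embed(x)             # (B, dim, H/ps, W/ps)
        tokens = tokens.flatten(2).transpose(1, 2) + self.pos_embed  # (B, n, dim)
        tokens = self.encoder(tokens)            # global feature mixing via self-attention
        feat = self.head(tokens.mean(dim=1))     # (B, 100) enhanced feature
        return F.normalize(feat, dim=-1)         # unit-normalize for cosine similarity


def retrieve(query_feat, gallery_feats, top_k=5):
    """Rank gallery drawings by cosine similarity to the query feature."""
    sims = gallery_feats @ query_feat.T          # cosine similarity (features are normalized)
    return sims.squeeze(-1).topk(top_k).indices


if __name__ == "__main__":
    model = LineDrawingEncoder().eval()
    with torch.no_grad():
        gallery = model(torch.rand(8, 1, 224, 224))   # pre-computed gallery features
        query = model(torch.rand(1, 1, 224, 224))     # query drawing feature
    print(retrieve(query, gallery, top_k=3))          # indices of the top-3 matches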