To address the poor representational ability and incomplete information that arise when extracting sign language features, this paper designs a two-stream adaptive enhanced spatial-temporal graph convolutional network (TAEST-GCN) for isolated-word sign language recognition. The network takes human body, hand, and face keypoints as input to construct a two-stream structure based on human joints and bones. Connections between different body parts are generated by an adaptive spatial-temporal graph convolutional module, ensuring full utilization of position and direction information. Meanwhile, an adaptive multi-scale spatial-temporal attention module is built through residual connections to further enhance the network's convolutional ability in both the spatial and temporal domains. The effective features extracted by the two streams are weighted and fused to classify and output sign language vocabulary. Finally, experiments on a public Chinese sign language isolated-word dataset achieve accuracy rates of 95.57% and 89.62% on 100 and 500 word categories, respectively.
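The weighted fusion of the two streams described above can be sketched as a late fusion of per-class scores. This is a minimal illustration, not the paper's implementation: the fusion weight `alpha` and the 4-class toy scores are assumptions, since the abstract does not specify the actual weighting scheme.

```python
import numpy as np

def fuse_two_stream_scores(joint_scores, bone_scores, alpha=0.6):
    """Weighted late fusion of per-class scores from the joint stream
    and the bone stream. `alpha` is an assumed fusion weight for the
    joint stream (the paper's actual weights are not given here).
    Returns the predicted class index and the fused score vector."""
    fused = alpha * joint_scores + (1.0 - alpha) * bone_scores
    return int(np.argmax(fused)), fused

# Toy example: per-class scores for 4 hypothetical vocabulary classes.
joint = np.array([0.1, 0.6, 0.2, 0.1])  # joint-stream softmax output
bone = np.array([0.2, 0.3, 0.4, 0.1])   # bone-stream softmax output
pred, fused = fuse_two_stream_scores(joint, bone, alpha=0.6)
```

In this sketch the joint stream dominates (`alpha > 0.5`), so the fused prediction follows the joint stream's strongest class even though the bone stream favors a different one; in practice such weights would be tuned on validation data.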