Graph Contrastive Learning (GCL) extracts self-supervised signals from the inherent information of unlabeled data to guide model training, helping to alleviate both the dependence of Graph Neural Networks on labeled data and their structural unfairness. It has become a research hotspot in the field of Graph Neural Networks. This paper surveys existing GCL research from three aspects: data augmentation methods, sample pair construction, and contrastive learning granularity, and analyzes the advantages and disadvantages of existing GCL methods. On this basis, it points out open problems in current GCL research and proposes future research directions, including adaptive GCL, context-aware GCL, dynamic GCL, hypergraph contrastive learning, causal-inference-based GCL, negative-sample-free GCL, and GCL based on Large Language Models.
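To make the three surveyed aspects concrete, the following is a minimal sketch of a typical node-level GCL pipeline, not taken from any specific surveyed method: two views are produced by a simple data augmentation (random edge dropping), encoded by a one-layer GCN, and trained with an InfoNCE-style objective in which the same node across the two views forms the positive pair and all other nodes serve as negatives. All function names and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def drop_edges(adj: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Data augmentation: randomly remove a fraction p of edges."""
    mask = (torch.rand_like(adj) > p).float()
    mask = torch.triu(mask, 1)                      # keep the graph undirected
    return adj * (mask + mask.T)

def gcn_encode(adj: torch.Tensor, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """One-layer GCN with symmetric normalization: H = ReLU(D^-1/2 A D^-1/2 X W)."""
    a_hat = adj + torch.eye(adj.size(0))            # add self-loops
    d_inv = a_hat.sum(1).clamp(min=1).pow(-0.5)
    a_norm = d_inv[:, None] * a_hat * d_inv[None, :]
    return F.relu(a_norm @ x @ w)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Node-level InfoNCE: node i in view 1 and node i in view 2 are positives;
    this simplified variant uses only cross-view negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                        # pairwise cosine similarities
    labels = torch.arange(z1.size(0))               # diagonal entries = positives
    return F.cross_entropy(logits, labels)

# Toy usage on a random undirected graph
n, f, h = 8, 16, 32
adj = (torch.rand(n, n) > 0.7).float()
adj = torch.triu(adj, 1); adj = adj + adj.T
x = torch.randn(n, f)
w = torch.randn(f, h, requires_grad=True)

z1 = gcn_encode(drop_edges(adj), x, w)              # augmented view 1
z2 = gcn_encode(drop_edges(adj), x, w)              # augmented view 2
loss = nt_xent(z1, z2)
loss.backward()
```

The surveyed methods vary exactly the pieces shown here: the augmentation (`drop_edges` could be replaced by feature masking or subgraph sampling), the positive/negative pair construction inside `nt_xent`, and the contrastive granularity (node-node, as above, versus node-graph or graph-graph).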