With the development and popularization of artificial intelligence technologies represented by deep learning, the security issues they continually expose have become a major challenge to cyberspace security. Traditional cloud-centric distributed machine learning, which trains models or optimizes model performance by collecting data from participating parties, is susceptible to security attacks and privacy attacks during data exchange, leading to consequences such as degraded overall system efficiency or leakage of private data. Federated learning, a distributed machine learning paradigm with privacy-protection capabilities, exchanges model parameters through frequent communication between clients and parameter servers, training a joint model without raw data ever leaving the local devices. This greatly reduces the risk of private data leakage and ensures data security to a certain extent. However, as deep learning models grow larger and federated learning tasks become more complex, communication overhead also increases, eventually becoming a barrier to the application of federated learning. Therefore, exploring communication optimization methods for federated learning has become a research hotspot. In this survey, the technical background and workflow of federated learning were first introduced, and the sources and impacts of its communication bottlenecks were analyzed. Then, based on the factors affecting communication efficiency, existing federated learning communication optimization methods were systematically reviewed and analyzed along optimization dimensions such as model parameter compression, model update strategies, system architecture, and communication protocols, and the development trends of this research field were presented. Finally, the problems faced by existing federated learning communication optimization methods were summarized, and future development trends and research directions were discussed.
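To make the workflow described above concrete, the sketch below simulates one simplified federated averaging round in which each client trains on its private data and transmits only a (optionally top-k sparsified) model update to the server. This is a minimal illustration under assumed details, not the method of any specific surveyed paper; the names (fedavg_round, local_step, top_k) and the linear-regression task are hypothetical choices made for the example.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def top_k(update, k):
    """Model parameter compression: keep only the k largest-magnitude entries."""
    sparse = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    sparse[idx] = update[idx]
    return sparse

def fedavg_round(global_w, clients, k=None):
    """One communication round: raw data stays local, only updates are sent."""
    updates = []
    for X, y in clients:
        delta = local_step(global_w, X, y) - global_w  # local model update
        updates.append(top_k(delta, k) if k is not None else delta)
    return global_w + np.mean(updates, axis=0)  # server-side aggregation

# Synthetic setup: four clients whose data share an underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 3.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(32, 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=32)))

w = np.zeros(3)
for _ in range(100):
    w = fedavg_round(w, clients, k=2)  # send only 2 of 3 coordinates per round
print(w)  # converges near true_w despite the compressed communication
```

Sparsifying the transmitted update rather than the full weight vector is one instance of the model-parameter-compression strategies the survey categorizes; in practice such schemes usually add error-feedback accumulators so that dropped coordinates are carried over rather than lost.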