An Accelerated Distributed Online Learning Algorithm Based on Conditional Gradient
The distributed online optimization model based on projected gradient has become a prominent paradigm for online learning due to its simplicity. However, this paradigm cannot handle massive data applications because the projection step becomes the computational bottleneck. Recent studies have presented distributed online conditional gradient algorithms for convex cost functions, which achieve an O(T^{3/4}) regret bound, where T is the time horizon. These algorithms suffer from two drawbacks. First, their regret bound is worse than the well-known O(√T) bound. Second, their convergence has not been analyzed for non-convex cost functions, which are common in practice. In this paper, we propose an accelerated distributed online learning algorithm based on the conditional gradient method over networks, which avoids the prohibitively expensive projection step by using a Frank-Wolfe (linear minimization) step instead. Moreover, when the local cost functions are convex, we show that the algorithm achieves the O(√T) regret bound. When the local cost functions are potentially non-convex, we also show that the algorithm converges to stationary points at a rate of O(√T). Finally, simulation experiments verify the performance of the proposed algorithm and the theoretical results.
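To make the projection-versus-Frank-Wolfe distinction concrete, here is a minimal sketch of a single conditional gradient step, assuming an ℓ1-ball constraint set whose linear minimization oracle has a closed form; the function names, the constraint set, and the step-size schedule are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <grad, s>.
    Places all mass on the coordinate with the largest |gradient|."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

def frank_wolfe_step(x, grad, t, radius=1.0):
    """One conditional gradient step: solves a cheap linear
    subproblem instead of a (potentially expensive) Euclidean
    projection onto the constraint set."""
    s = lmo_l1_ball(grad, radius)   # extreme point of the feasible set
    gamma = 2.0 / (t + 2.0)         # standard diminishing step size
    return x + gamma * (s - x)      # convex combination stays feasible
```

Because each iterate is a convex combination of feasible points, it remains inside the constraint set by construction, which is what allows conditional gradient methods to dispense with projections entirely.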