Chinese Journal of Computers, 2024, Vol. 47, Issue 11: 2629-2644. DOI: 10.11897/SP.J.1016.2024.02629

Universal Online Convex Optimization Algorithm with Adaptivity to Gradient-Variation

LIU Lang-Qi 1, ZHANG Li-Jun 2

Author Information

  • 1. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023
  • 2. School of Artificial Intelligence, Nanjing University, Nanjing 210023

Abstract

Universal online convex optimization algorithms can automatically adapt to and optimize multiple classes of loss functions, so users do not need to identify the type of the loss function themselves, which lowers the barrier to using online convex optimization techniques. Although existing universal algorithms achieve minimax-optimal theoretical guarantees for multiple classes of loss functions, they struggle to obtain problem-dependent guarantees for general convex functions. To address this problem, this paper proposes the UAGV algorithm, which not only adapts automatically to general convex and strongly convex loss functions, but also, for the first time, guarantees a gradient-variation bound for general convex loss functions under the smoothness condition, i.e., it achieves better performance when the gradients of the loss functions vary slowly. The algorithm adopts a two-layer meta-algorithm/expert-algorithm structure. At the top layer, this paper innovatively employs a meta-algorithm with an optimism term, and designs the surrogate loss function and the optimism term according to the form of the gradient-variation bound, so that the corresponding guarantee is obtained when the meta-algorithm is combined with the expert algorithms in the bottom layer. Experimental results on several datasets show that the regret of UAGV on smooth general convex functions is lower overall than that of existing universal algorithms, with reductions exceeding 14% on some datasets.

Abstract

In contrast to online convex optimization algorithms designed for specific function types, universal algorithms are able to automatically adapt to various loss functions. This capability eliminates the need for users to correctly classify the type of loss function, thereby lowering the barrier to employing online convex optimization techniques. Previous studies have proposed algorithms that provide minimax-optimal theoretical guarantees for several types of loss functions. However, they struggle to attain tighter, problem-dependent theoretical guarantees for general convex functions. To address this issue, we introduce the Universal Online Convex Optimization Algorithm with Adaptivity to Gradient-Variation (UAGV). This novel algorithm automatically adapts to both general convex and strongly convex loss functions. Furthermore, under the smoothness condition, UAGV enjoys a gradient-variation bound for general convex loss functions, which is a problem-dependent bound. The algorithm adopts a two-layered structure, with a meta-algorithm in the upper layer and several expert algorithms in the lower layer. We innovatively adopt a meta-algorithm with an optimism term, which can be interpreted as a prediction of the loss vector. When the optimism term closely matches the loss vector, the meta-algorithm achieves small regret. Thus, we carefully design the surrogate loss function and the optimism term according to the form of the gradient-variation bound, enhancing the meta-algorithm's ability to combine decisions generated by the expert algorithms and helping to obtain the corresponding theoretical guarantee. Experimental results on several datasets indicate that the UAGV algorithm can effectively track the best expert algorithm, and its optimization results for smooth general convex functions outperform those of existing universal algorithms. Specifically, the regret of UAGV is over 14% smaller than that of existing algorithms on certain datasets.
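The two-layer design described above can be sketched concretely. The snippet below is a minimal, illustrative implementation of an Optimistic-Hedge-style meta-algorithm that weights experts by their cumulative surrogate losses plus an optimism term; the function names, the learning rate, and the weighted average used to combine expert decisions are our own illustrative assumptions, not the paper's actual UAGV construction.

```python
import numpy as np

def optimistic_hedge(cum_loss, optimism, eta):
    """One weighting step of an Optimistic-Hedge-style meta-algorithm.

    cum_loss : cumulative surrogate losses of each expert so far
    optimism : a guess of the experts' next losses (in gradient-variation
               methods, typically built from the previous gradient)
    eta      : learning rate of the meta-algorithm (illustrative choice)
    """
    scores = -eta * (np.asarray(cum_loss, dtype=float)
                     + np.asarray(optimism, dtype=float))
    scores -= scores.max()           # shift for numerical stability
    w = np.exp(scores)
    return w / w.sum()               # probability weights over experts

def combine(decisions, weights):
    """Combine expert decisions (rows) into one decision by weighted average."""
    return np.average(np.asarray(decisions, dtype=float), axis=0,
                      weights=weights)

# Two experts: expert 0 has suffered less loss and is predicted to keep
# doing well, so it receives the larger weight.
w = optimistic_hedge(cum_loss=[1.0, 2.0], optimism=[0.5, 1.0], eta=1.0)
x = combine([[0.0, 0.0], [1.0, 1.0]], w)
print(w, x)  # weight on expert 0 exceeds weight on expert 1
```

When the optimism term is close to the losses actually realized, the exponential weighting concentrates on the eventual best expert earlier, which is the mechanism that lets a meta-algorithm of this kind inherit problem-dependent guarantees from its experts.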


Key words

online convex optimization / universal algorithm / smoothness / gradient-variation / problem-dependent bound / optimism


Funding

National Natural Science Foundation of China (62122037)

National Natural Science Foundation of China (U23A20382)

Publication Year

2024
Chinese Journal of Computers
Sponsored by the China Computer Federation and the Institute of Computing Technology, Chinese Academy of Sciences

Chinese Journal of Computers

Indexed in CSTPCD; Peking University Core Journals list
Impact factor: 3.18
ISSN: 0254-4164