Robotics & Machine Learning Daily News, 2024, Issue (Jun. 5): 42-43.

Investigators from Guangxi University Target Machine Learning (Stochastic Three-term Conjugate Gradient Method With Variance Technique for Non-convex Learning)




Abstract

By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Researchers detail new data in Machine Learning. According to news reporting out of Guangxi, People’s Republic of China, by NewsRx editors, research stated, “In the training process of machine learning, the minimization of the empirical risk loss function is often used to measure the difference between the model’s predicted value and the real value. Stochastic gradient descent is very popular for this type of optimization problem, but converges slowly in theoretical analysis.”

Financial support for this research came from the Guangxi Science and Technology Base and Talent Project.

Our news journalists obtained a quote from the research from Guangxi University: “To solve this problem, there are already many algorithms with variance reduction techniques, such as SVRG, SAG, SAGA, etc. Some scholars apply the conjugate gradient method in traditional optimization to these algorithms, such as CGVR, SCGA, SCGN, etc., which can basically achieve linear convergence speed, but these conclusions often need to be established under some relatively strong assumptions. In traditional optimization, the conjugate gradient method often requires the use of line search techniques to achieve good experimental results. In a sense, line search embodies some properties of the conjugate gradient method. Taking inspiration from this, we apply the modified three-term conjugate gradient method and line search technique to machine learning. In our theoretical analysis, we obtain the same convergence rate as SCGA under weaker conditional assumptions.”
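The abstract names the three ingredients of the proposed method: a variance-reduced stochastic gradient (in the SVRG family), a three-term conjugate gradient search direction, and a line search. As a rough illustration of how such pieces fit together, the Python sketch below combines them; the function name stcg_svrg, the Hestenes-Stiefel-type beta/theta coefficients, the Armijo backtracking rule, and the toy least-squares problem are illustrative assumptions, not the specific update rule or experiments from the Guangxi University paper.

```python
import numpy as np

def stcg_svrg(loss_fn, grad_fn, w0, data, outer_iters=20, inner_iters=50, seed=0):
    """Sketch: SVRG-style outer/inner loop with a three-term CG direction."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(outer_iters):
        # SVRG snapshot: full (batch) gradient at the current iterate.
        w_snap = w.copy()
        mu = np.mean([grad_fn(w_snap, x) for x in data], axis=0)
        g_prev, d = mu, -mu                       # first direction: steepest descent
        for _ in range(inner_iters):
            x_i = data[rng.integers(len(data))]
            # Variance-reduced stochastic gradient (SVRG estimator).
            g = grad_fn(w, x_i) - grad_fn(w_snap, x_i) + mu
            # Three-term CG direction d = -g + beta*d - theta*y with y = g - g_prev
            # (Hestenes-Stiefel-type beta/theta; an illustrative choice).
            y = g - g_prev
            denom = d @ y + 1e-12                 # safeguard against division by zero
            beta, theta = (g @ y) / denom, (g @ d) / denom
            d = -g + beta * d - theta * y
            if g @ d >= 0:                        # safeguard: keep a descent direction
                d = -g
            # Armijo backtracking line search on the sampled loss.
            alpha, f0 = 1.0, loss_fn(w, x_i)
            while loss_fn(w + alpha * d, x_i) > f0 + 1e-4 * alpha * (g @ d) and alpha > 1e-8:
                alpha *= 0.5
            w, g_prev = w + alpha * d, g
    return w

# Toy usage on least squares: loss_i(w) = 0.5 * (a_i @ w - b_i) ** 2.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, w_true = rng.normal(size=(200, 5)), rng.normal(size=5)
    b = A @ w_true + 0.01 * rng.normal(size=200)
    data = list(zip(A, b))
    loss = lambda w, xb: 0.5 * (xb[0] @ w - xb[1]) ** 2
    grad = lambda w, xb: (xb[0] @ w - xb[1]) * xb[0]
    w_hat = stcg_svrg(loss, grad, np.zeros(5), data)
    print("parameter error:", np.linalg.norm(w_hat - w_true))
```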

Key words

Guangxi/People’s Republic of China/Asia/Cyborgs/Emerging Technologies/Machine Learning/Guangxi University


Publication year

2024
Robotics & Machine Learning Daily News


ISSN: