Reinforcement learning explains various conditional cooperation
Recent studies show that different update rules are invariant with respect to the evolutionary outcomes in a well-mixed population or on a homogeneous network. In this paper, we investigate how the Q-learning algorithm, one of the reinforcement learning methods, affects the evolutionary outcomes on a square lattice. In particular, we consider a mixed strategy-update rule in which some agents adopt the Q-learning method to update their strategies; the proportion of these agents (denoted as Artificial Intelligence, AI) is controlled by a simple parameter rho. The remaining agents, whose proportion is 1 - rho, adopt the Fermi function to update their strategies. Through extensive numerical simulations, we found that the mixed strategy-update rule can facilitate cooperation compared with the pure Fermi-function-based update rule. Moreover, if the proportion of AI is moderate, cooperators among the whole population exhibit conditional behavior and moody conditional behavior. However, if the whole population adopts either the pure Fermi-function-based or the pure Q-learning-based strategy-update rule, then cooperators exhibit hump-shaped conditional behavior. Our results provide a new insight into the evolution of cooperation from AI's point of view.

(C) 2022 Elsevier Inc. All rights reserved.
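The two update rules mentioned in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the noise parameter K, and the learning-rate and discount values are assumptions; the Fermi imitation probability 1/(1 + exp((P_x - P_y)/K)) and the tabular Q-learning update are the standard textbook forms of the two rules.

```python
import math

def fermi_prob(payoff_self, payoff_neighbor, K=0.1):
    """Fermi rule: probability that an agent imitates a neighbor's
    strategy, increasing as the neighbor's payoff exceeds its own.
    K is the noise (irrationality) parameter."""
    return 1.0 / (1.0 + math.exp((payoff_self - payoff_neighbor) / K))

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update for an AI agent.
    Q maps state -> {action: value}; the agent then picks the
    action with the highest Q-value (possibly epsilon-greedily)."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```

Under the mixed rule described in the abstract, a fraction rho of lattice sites would call `q_update` after each round of the game, while the remaining 1 - rho sites would imitate a random neighbor with probability `fermi_prob`.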