Storing, learning and retrieving biased patterns
Full text:
NSTL
Elsevier
The formal equivalence between the Hopfield network (HN) and the Boltzmann Machine (BM) has been well established in the context of random, unstructured and unbiased patterns to be retrieved and recognised. Here we extend this equivalence to the case of "biased" patterns, that is, patterns displaying an unbalanced fraction of positive neurons/pixels: starting from previous results on the bias paradigm for the HN, we construct the BM's equivalent Hamiltonian by introducing a constraint parameter that corrects for the bias. We show, analytically and numerically, that the parameters suggested by the equivalence are fixed points under contrastive-divergence evolution when the machine is exposed to a dataset of blurred examples of each pattern, and that they enjoy large basins of attraction when the model suffers from a noisy initialisation. These results remain robust as the storage load of the models and the bias of the reference patterns increase. This picture, together with the analytical derivation of the HN's phase diagram via self-consistency equations, strengthens our mathematical control over the BM's performance when approaching more realistic datasets. (C) 2021 Elsevier Inc. All rights reserved.
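To make the setting concrete, here is a minimal Python sketch (illustrative only, not the paper's construction or code): it draws biased ±1 patterns with P(ξ = +1) = (1 + a)/2, stores them in a Hopfield network via a mean-subtracted Hebbian coupling J_ij ∝ Σ_μ (ξ_i^μ − a)(ξ_j^μ − a), which is one standard form of bias correction, and checks retrieval from a noisy initialisation. All parameter values and function names are assumptions made for the example.

```python
# Minimal sketch: storing and retrieving biased patterns in a Hopfield network.
# The mean-subtracted coupling below is one standard bias correction; the
# paper's exact Hamiltonian and constraint parameter may differ.
import numpy as np

rng = np.random.default_rng(0)

N, P, a = 500, 5, 0.3            # neurons, stored patterns, pattern bias

# Biased patterns: each entry is +1 with probability (1 + a) / 2.
xi = np.where(rng.random((P, N)) < (1 + a) / 2, 1, -1)

# Bias-corrected Hebbian couplings (illustrative choice).
J = (xi - a).T @ (xi - a) / N
np.fill_diagonal(J, 0.0)

def energy(s):
    """Hopfield energy H(s) = -1/2 s^T J s."""
    return -0.5 * s @ J @ s

def retrieve(s, sweeps=20):
    """Zero-temperature asynchronous dynamics: align each spin with its field."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# Noisy initialisation: flip 20% of the first pattern's entries, then retrieve.
s0 = xi[0] * np.where(rng.random(N) < 0.2, -1, 1)
s = retrieve(s0)
overlap = (s @ xi[0]) / N        # Mattis overlap with the target pattern
print(f"energy: {energy(s0):.1f} -> {energy(s):.1f}, overlap: {overlap:.3f}")
```

At this moderate load and bias the final Mattis overlap should be close to 1, mirroring the large basins of attraction reported in the abstract; pushing the storage load P/N or the bias a higher would probe the robustness claims.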