Approximation capabilities of neural networks on unbounded domains
Original article links: NSTL, Elsevier
© 2021 Elsevier Ltd.

There is limited study in the literature on the representability of neural networks on unbounded domains. For some application areas, results in this direction provide additional value in the design of learning systems. Motivated by an old option pricing problem, we are led to study this subject. For networks with a single hidden layer, we show that under suitable conditions they are capable of universal approximation in L^p(R × [0,1]^n) but not in L^p(R^2 × [0,1]^n). For deeper networks, we prove that a ReLU network with two hidden layers is a universal approximator in L^p(R^n).
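One way to see why a second hidden layer helps on an unbounded domain is that it can produce a compactly supported (hence L^p-integrable) output, which a single-hidden-layer ReLU network on R^2 cannot. The sketch below is an illustration of this idea, not the paper's construction: the first layer builds a tent function per coordinate, and the second layer combines them into a bump supported in the unit square.

```python
# Illustrative sketch (assumed example, not from the paper): a ReLU network
# with two hidden layers whose output is compactly supported on R^2.
def relu(t):
    return max(t, 0.0)

def hat(t):
    # First hidden layer, per coordinate: tent function, supported on [-1, 1],
    # built from three ReLU units.
    return relu(t + 1.0) - 2.0 * relu(t) + relu(t - 1.0)

def bump(x, y):
    # Second hidden layer: the output vanishes unless hat(x) + hat(y) > 1,
    # so the support is contained in the square [-1, 1]^2, and the function
    # lies in L^p(R^2) for every p.
    return relu(hat(x) + hat(y) - 1.0)

print(bump(0.0, 0.0))  # 1.0 -- peak at the origin
print(bump(2.0, 0.0))  # 0.0 -- zero outside the unit square
```

By contrast, any nonzero function computed by a single hidden ReLU layer on R^2 is piecewise linear on finitely many unbounded regions, which is the obstruction behind the negative L^p(R^2 × [0,1]^n) result stated above.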
Keywords: Benefit of depth; Unbounded domain; Universal approximation