
Dual learning generative adversarial network for dynamic scene deblurring
For the problem of dynamic scene deblurring, a dual learning generative adversarial network (DLGAN) is proposed in this paper. In the dual-learning training mode, the network performs image deblurring with unpaired blurry and sharp images, so the training set no longer needs to consist of blurry images paired with their corresponding sharp images. The DLGAN exploits the duality between the deblurring task and the reblurring task to establish a feedback signal, and uses this signal to constrain the two tasks to learn from and update each other from two different directions until convergence. Experimental results show that the DLGAN outperforms nine image deblurring methods trained on paired datasets in terms of structural similarity and visual evaluation.
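The duality described above can be illustrated with a minimal sketch. This is not the authors' implementation: `deblur` and `reblur` are hypothetical stand-ins for the two generator networks, and the feedback signal is modeled as the reconstruction error between a blurry input and its deblur-then-reblur round trip.

```python
import numpy as np

def reblur(img):
    # Stand-in for the reblurring generator: a 1-D box blur.
    padded = np.pad(img, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def deblur(img, k=1.0):
    # Stand-in for the deblurring generator: unsharp-mask sharpening.
    return img + k * (img - reblur(img))

def dual_feedback_loss(blurry):
    # Round trip: deblur, then reblur. The discrepancy with the
    # original blurry input is the dual-learning feedback signal
    # that would drive updates of both generators.
    restored = deblur(blurry)
    reblurred = reblur(restored)
    return float(np.mean((reblurred - blurry) ** 2))
```

In training, this loss (together with adversarial losses on each domain) is what lets both directions supervise each other without paired ground truth.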

dynamic scene deblurring; dual learning; generative adversarial network; attention-guided; feature map loss function

Ji Ye, Dai Yaping, Kaoru Hirota, Shao Shuai


School of Automation, Beijing Institute of Technology, Beijing 100081, China

State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing 100081, China


Funding: Systematic Major Project of China State Railway Group; Beijing Natural Science Foundation

P2021T002; L191020

2024

Control and Decision (控制与决策)
Northeastern University


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 1.227
ISSN:1001-0920
Year, volume (issue): 2024, 39(4)