
Research on Co-Phasing Closed-Loop Experiment for Optical Synthetic Aperture Using Deep Learning

Optical synthetic aperture is an effective technical approach for developing large-aperture telescopes. The key to making the actual resolution of a synthetic-aperture opto-electronic telescope reach the diffraction limit is to sense and correct the piston errors between sub-apertures in real time. A lightweight MobileNet convolutional neural network is constructed to fit the nonlinear mapping between the broadband point spread function (PSF) and the piston error, and a three-aperture co-phasing closed-loop experiment is completed based on this network. Sequential piston errors are applied to the three-aperture synthetic aperture system, the corresponding broadband PSFs are collected, and the PSF-piston error data are used to train the lightweight MobileNet network until convergence. In the closed-loop correction stage, the trained model is deployed on an embedded computing platform, and the piezo steering mirror is controlled according to the predicted output to correct the error. The co-phasing closed-loop results show that each detection takes 3 ms, indicating good real-time performance, and that a detection accuracy of 26.2 nm is achieved over a capture range of ±6λ0 (λ0 = 600 nm). The deep learning co-phasing closed-loop experiment verifies the feasibility of the deep learning method as an engineering-level co-phasing solution.
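For illustration, the following is a minimal sketch of the kind of lightweight regression network described above, assuming PyTorch as the framework. The layer widths, input size, and number of piston outputs are hypothetical choices for a three-aperture system with one reference aperture, not the authors' exact architecture.

```python
# Minimal sketch (assumptions: PyTorch, 128x128 single-channel broadband PSF input,
# two piston outputs for a three-aperture system with one reference aperture).
# Layer widths are illustrative only.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a 1x1 pointwise convolution,
    the building block that makes MobileNet lightweight."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class PistonRegressor(nn.Module):
    """MobileNet-style regressor: broadband PSF image -> piston error estimates."""
    def __init__(self, n_pistons=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            DepthwiseSeparableConv(32, 64),
            DepthwiseSeparableConv(64, 128, stride=2),
            DepthwiseSeparableConv(128, 128),
            DepthwiseSeparableConv(128, 256, stride=2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(256, n_pistons)  # regression output, e.g. in nm

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

if __name__ == "__main__":
    model = PistonRegressor(n_pistons=2)
    psf = torch.randn(1, 1, 128, 128)   # stand-in for one measured broadband PSF frame
    print(model(psf).shape)             # torch.Size([1, 2])
```

The depthwise separable block replaces a standard convolution with a per-channel 3×3 convolution plus a 1×1 pointwise convolution, which is the parameter- and computation-saving step the abstract refers to.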
Objective Optical synthetic aperture is an effective technical approach for developing large-aperture telescopes. The key to achieving the diffraction limit with a synthetic-aperture opto-electronic telescope lies in real-time sensing and correction of the piston errors between sub-apertures. Among the traditional methods, the specific-optics-based methods measure piston errors from pupil information modulated by specially designed hardware, which inevitably increases system complexity. The image-based methods can measure piston errors directly from intensity images, which simplifies the system; however, they require a large amount of iterative optimization and therefore cannot realize instant correction. Recently, deep learning has contributed to many areas, piston sensing included, and is capable of end-to-end piston sensing by fitting the mapping relationship between the piston error and the intensity image. Although many efforts have been made to improve the piston-sensing performance of deep learning models, most studies remain at the simulation stage. In the few experimental studies, only piston sensing has been implemented; co-phasing closed-loop correction has never been demonstrated. In the present study, we establish an optical synthetic aperture imaging experimental platform and implement a co-phasing closed-loop experiment using a deep learning approach. We hope that this research helps promote the practical application of deep learning based co-phasing technology.

Methods Real-time closed-loop piston error correction is achieved for a two-aperture system and a three-aperture system, respectively. First, the experimental platform is built, where broadband light is used to remove the 2π ambiguity and sequential piston errors are loaded onto the sub-apertures to generate the corresponding training images. Then, a lightweight MobileNet convolutional neural network (CNN) is established to learn the nonlinear mapping relationship between the broadband point spread function (PSF) and the piston error. By replacing standard convolution modules with depthwise separable convolution modules, MobileNet effectively reduces the number of model parameters and the computational complexity while preserving overall network performance, thus enabling fast inference. When the loss function converges stably to its minimum, training is complete and the testing dataset is used to evaluate the performance of the network. Next, the well-trained model, which infers piston errors directly from intensity images, is deployed on an embedded computing platform. During closed-loop correction, the image captured by the charge-coupled device (CCD) is transferred to the computing platform, and the instantaneous piston error is obtained in real time through forward inference of the model. Finally, piston error correction is carried out by controlling the piezo steering mirror according to the predicted output.

Results and Discussions The experimental results show that the lightweight MobileNet deep learning model realizes high-precision piston sensing, and a large capture range of ±6λ0 (λ0 = 600 nm) is achieved by using 550-650 nm broadband light. For the two-aperture imaging system, the average root mean square error (RMSE) between the testing outputs of the network and the true piston values is about 18 nm (Fig. 6), and the predicted values are very close to the true values over the whole capture range. During closed-loop correction, the residual curve converges to the zero line rapidly and stably: the initial piston error is 2.3λ0, and the average residual after closed-loop correction is about 0.043λ0. In addition, the PSF image after closed-loop correction is almost identical to the ideal image (Fig. 7). Each piston prediction takes about 3 ms with the lightweight MobileNet, compared with 10 ms for the VGG-19 model, so our method has a significant advantage in real-time performance. Another experiment is then implemented in the three-aperture system, where the average RMSE between the testing outputs of the network and the true piston values is about 30 nm (Fig. 9). The average residual after closed-loop correction is about 0.063λ0, a reduced accuracy compared with the two-aperture results. This is because increasing the number of sub-apertures complicates the mapping relationship between the PSF and the piston error, so the amount of training data needed and the difficulty of training increase correspondingly. Nevertheless, our study shows that there is little difference in piston-sensing time between the two-aperture and three-aperture systems, which means that increasing the number of sub-apertures to be measured has little effect on real-time performance.

Conclusions In the present study, a deep learning based co-phasing closed-loop experiment on an optical synthetic aperture is successfully implemented. This technology uses a single lightweight MobileNet CNN to extract piston information from the focused PSF image, thus greatly reducing the optical complexity of the system. At the same time, the end-to-end mode further simplifies the sensing process and achieves rapid and robust piston error estimation. Under the experimental conditions established in our study, each detection takes about 3 ms, which indicates good real-time performance. Fine co-phasing control with high sensing accuracy is realized for the two-aperture system as well as the three-aperture system. In summary, the reliability and superiority of deep learning co-phasing technology in engineering applications have been preliminarily verified through the co-phasing closed-loop experiments.
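As a complement to the Methods description, the following is a minimal sketch of the closed-loop correction stage, assuming the PyTorch-style model sketched earlier. Here grab_psf_frame, send_piston_command, the loop gain, and the stopping tolerance are hypothetical placeholders; the actual CCD and piezo steering mirror interfaces are not specified in the abstract.

```python
# Illustrative closed-loop sketch (assumptions: a PyTorch regression model as above;
# grab_psf_frame() and send_piston_command() are hypothetical stand-ins for the
# real CCD driver and piezo steering mirror controller).
import time
import numpy as np
import torch

def run_cophasing_loop(model, grab_psf_frame, send_piston_command,
                       gain=0.8, tolerance_nm=30.0, max_iters=50):
    """Repeatedly infer the piston error from the latest PSF frame and
    command the piezo steering mirror until the residual is small."""
    model.eval()
    for it in range(max_iters):
        frame = grab_psf_frame()                          # HxW float array from the CCD
        x = torch.from_numpy(frame).float()[None, None]   # -> shape [1, 1, H, W]
        t0 = time.perf_counter()
        with torch.no_grad():
            piston_nm = model(x).squeeze(0).numpy()       # predicted piston errors (nm)
        dt_ms = (time.perf_counter() - t0) * 1e3          # per-detection inference time
        send_piston_command(-gain * piston_nm)            # negative-feedback correction
        print(f"iter {it}: residual {np.abs(piston_nm).max():.1f} nm, "
              f"inference {dt_ms:.1f} ms")
        if np.abs(piston_nm).max() < tolerance_nm:
            break
```

The reported figures (about 3 ms per detection, residuals of about 0.043λ0 and 0.063λ0) come from the authors' experiments; the timing line above only indicates where such a per-detection measurement would be taken.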

imaging systems; convolutional neural network; piston error; optical synthetic aperture; co-phasing closed loop

Ma Xiafei, Yang Kaiyuan, Ma Haotong, Yang Hu, Xie Zongliang


National Key Laboratory of Optical Field Manipulation Science and Technology, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, Sichuan, China

Key Laboratory of Beam Control, Chinese Academy of Sciences, Chengdu 610209, Sichuan, China

University of Chinese Academy of Sciences, Beijing 100049, China

School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 610209, Sichuan, China



2024

Chinese Journal of Lasers
Sponsored by the Chinese Optical Society and the Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 2.204
ISSN: 0258-7025
Year, Volume (Issue): 2024, 51(13)