Research on Co-Phasing Closed-Loop Experiment for Optical Synthetic Aperture Using Deep Learning
Objective Optical synthetic aperture is an effective technical approach for developing large-aperture telescopes. The key to achieving the diffraction limit in the actual resolution of synthetic-aperture-based opto-electronic telescopes lies in the real-time sensing and correction of piston errors between sub-apertures. Among the traditional methods, specific-optics-based methods measure piston errors from pupil information modulated by specially designed hardware, which inevitably increases system complexity. Image-based methods can measure piston errors directly from the intensity image, which simplifies the system; however, they require a large amount of iterative optimization and thus fail to realize instant correction. Recently, deep learning has contributed to many areas, piston sensing included, as it is capable of achieving end-to-end piston sensing by fitting the mapping relationship between piston error and intensity image. Although many efforts have been made to improve the piston sensing performance of deep learning models, most studies remain at the simulation stage. In the few experimental studies, only piston sensing is implemented, while co-phasing closed-loop correction has never been demonstrated. In the present study, we establish an optical synthetic aperture imaging experimental platform and implement a co-phasing closed-loop experiment using a deep learning approach. We hope that our research will help advance the practical application of deep-learning-based co-phasing technology.

Methods Real-time closed-loop piston error correction is achieved for a two-aperture system and a three-aperture system, respectively. First, the experimental platform is built, where broadband light is utilized to remove the 2π ambiguity and sequences of piston errors are loaded onto the sub-apertures to generate the corresponding training images. Then, a lightweight MobileNet convolutional neural network (CNN) is established to learn the nonlinear mapping relationship between the broadband point spread function (PSF) and the piston error. By replacing standard convolution modules with depthwise separable convolution modules, MobileNet effectively reduces model parameters and computational complexity while preserving overall network performance, thus realizing fast inference. When the loss function converges stably to its minimum, the training process is complete, and the testing dataset is used to evaluate the performance of the network. In the next step, the well-trained model, which is capable of inferring piston errors directly from intensity images, is deployed on an embedded computing platform. When implementing the closed-loop correction, the image captured by the charge-coupled device (CCD) is transferred to the computing platform, and the instant piston error is obtained through forward inference of the model in real time. Finally, piston error correction is carried out by controlling the piezo steering mirror based on the predicted output.

Results and Discussions The experimental results show that the lightweight MobileNet deep learning model realizes high-precision piston sensing, and a large capture range of ±6λ0 (λ0=600 nm) is achieved by using 550-650 nm broadband light. For the two-aperture imaging system, the average root mean square error (RMSE) between the testing outputs of the network and the true piston error values is about 18 nm (Fig. 6). Moreover, the predicted values are very close to the true values over the whole capture range. During closed-loop correction, the residual curve converges to the zero line rapidly and stably: the initial piston error is 2.3λ0, and the average residual after closed-loop correction is about 0.043λ0. In addition, the PSF image with closed-loop correction is almost identical to the ideal image (Fig. 7). Each piston prediction takes about 3 ms for the lightweight MobileNet, versus 10 ms for the VGG-19 model. It is evident that our method has a significant advantage in real-time 
performance. Another experiment is then implemented in the three-aperture system, where the average RMSE between the testing outputs of the network and the true piston values is about 30 nm (Fig. 9). The average residual after closed-loop correction is about 0.063λ0, which shows a reduced accuracy compared with the correction results of the two-aperture system. This is because increasing the number of sub-apertures complicates the mapping relationship between the PSF and the piston error; correspondingly, the amount of training data needed and the difficulty of training both increase greatly. Nevertheless, our study shows that there is little difference in piston sensing time between the two-aperture system and the three-aperture system, which means that increasing the number of sub-apertures to be measured has little effect on the real-time performance.

Conclusions In the present study, a deep-learning-based co-phasing closed-loop experiment for an optical synthetic aperture is successfully implemented. This technology uses a single lightweight MobileNet CNN to extract piston information from the focused PSF image, thus greatly reducing the optical complexity of the system. At the same time, the end-to-end mode further simplifies the sensing process and achieves rapid and robust piston error estimation. Under the experimental conditions established in our study, each detection takes about 3 ms, which means good real-time performance is achieved. Fine co-phasing control with high sensing accuracy is realized for the two-aperture system as well as the three-aperture system. In summary, the reliability and superiority of deep learning co-phasing technology in engineering applications have been preliminarily verified through these co-phasing closed-loop experiments.
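The parameter savings that the Methods section attributes to depthwise separable convolution can be illustrated with a small arithmetic sketch. The kernel size and channel counts below are illustrative assumptions, not the actual configuration of the MobileNet used in the experiment:

```python
def standard_conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel (spatial filtering).
    # Pointwise step: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3 x 3 kernels, 64 input channels, 128 output channels.
std = standard_conv_params(3, 64, 128)        # 73728 parameters
dws = depthwise_separable_params(3, 64, 128)  # 576 + 8192 = 8768 parameters
print(std, dws, round(std / dws, 1))          # ~8.4x fewer parameters
```

The same ratio applies to the multiply-accumulate count per output location, which is why the factorization speeds up forward inference roughly in proportion to the parameter reduction.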