Optimization method for light-field feature extraction in flame temperature field reconstruction
In recent years, reconstructing the three-dimensional (3D) temperature field from flame light-field images with deep learning has emerged as a new direction in radiation thermometry. Traditional temperature-field reconstruction networks still use feature-extraction methods designed for plane images, which not only ignore the 3D ray information recorded in the light-field image but also fail to distinguish between different classes of traced rays. The resulting absence of prior information and the mixing of different feature types degrade the reconstruction accuracy of the temperature field. This article optimized the feature-extraction process in the network. First, the view-angle information of the sub-aperture images was added to the network input. Then, the spatial and angular features of the light field were extracted by a double-branch convolution method. Finally, an attention mechanism was used to model the importance of features at different scales. The influence of these factors on reconstruction accuracy was verified by orthogonal experiments. Simulation results showed that, compared with traditional networks, the optimized method reduced the Mean Relative Error (MRE) of the reconstructed temperature field by 44.82% and the Maximum Relative Error (MMRE) by 34.76%.
temperature field reconstruction; optimization; neural networks; numerical analysis
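To make the three abstract steps concrete, the following is a minimal sketch, not the authors' published architecture: view-angle maps are concatenated to the sub-aperture image at the input, a double-branch convolution separates spatial and angular light-field features, and an SE-style channel attention re-weights the concatenated features. The tensor shapes, channel counts, dilation choice, and attention form are illustrative assumptions.

import torch
import torch.nn as nn

class DoubleBranchExtractor(nn.Module):
    def __init__(self, in_ch=1, angle_ch=2, feat_ch=32):
        super().__init__()
        # Spatial branch: convolves within each sub-aperture image.
        self.spatial = nn.Sequential(
            nn.Conv2d(in_ch + angle_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Angular branch: dilated convolutions to mix information tied to the
        # ray directions encoded in the extra view-angle channels (an assumption).
        self.angular = nn.Sequential(
            nn.Conv2d(in_ch + angle_ch, feat_ch, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
        )
        # SE-style channel attention modeling the importance of the
        # concatenated features from the two branches.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * feat_ch, feat_ch // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch // 2, 2 * feat_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, img, angles):
        # img:    (B, 1, H, W) sub-aperture intensity image
        # angles: (B, 2, H, W) per-pixel (u, v) view-angle maps added to the input
        x = torch.cat([img, angles], dim=1)
        feats = torch.cat([self.spatial(x), self.angular(x)], dim=1)
        return feats * self.attn(feats)  # re-weight features by learned importance

if __name__ == "__main__":
    net = DoubleBranchExtractor()
    img = torch.rand(4, 1, 64, 64)
    ang = torch.rand(4, 2, 64, 64)
    print(net(img, ang).shape)  # torch.Size([4, 64, 64, 64])

In a full reconstruction pipeline, the extracted feature map would feed a regression head that outputs the 3D temperature field; that part is omitted here because the abstract does not specify it.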