Robot hand-eye calibration in different spaces based on LSTM network
We present a novel robot hand-eye calibration method for different spaces based on the long short-term memory (LSTM) network, addressing the mismatch between the robot operating space and the camera field of view. First, the pixel coordinates of each circle center on the calibration board were extracted and recorded sequentially. The calibration board was then translated into the robot workspace by a conveyor belt, and the pose of the robot end effector was recorded. An LSTM network was trained on these data to establish the hand-eye mapping relationship. Finally, 36 sets of real data were collected as a validation set to assess prediction accuracy. The results indicate that the model trained by this method predicts positions with an average translation error of only 0.69 mm in the robot base coordinate system. Moreover, for validation samples distributed randomly across the entire workspace on the conveyor belt, the translation error fluctuates by less than 1 mm, confirming the robustness and effectiveness of the method. Compared with classical planar calibration methods, the proposed method operates efficiently in larger workspaces, provides high calibration accuracy, and effectively compensates for errors arising from factors such as camera lens distortion and depth variation.
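As a minimal sketch of the mapping stage described above, the following shows how an LSTM regressor could take sequences of circle-center pixel coordinates and predict the corresponding end-effector translation in the robot base frame. The paper does not publish its implementation; PyTorch, the network shape, and all hyperparameters here are assumptions for illustration only.

```python
# Illustrative sketch only: the paper does not specify its network code.
# Assumed input: a sequence of (u, v) pixel coordinates of circle centers;
# assumed target: the (x, y, z) end-effector translation in the base frame.
import torch
import torch.nn as nn

class HandEyeLSTM(nn.Module):
    """Maps a sequence of circle-center pixel coordinates to a base-frame position."""
    def __init__(self, input_size=2, hidden_size=64, num_layers=2, output_size=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # x: (batch, seq_len, 2) -- one (u, v) pair per circle center
        out, _ = self.lstm(x)
        # Regress the (x, y, z) translation from the final hidden state
        return self.head(out[:, -1, :])

model = HandEyeLSTM()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for real calibration data (shapes are assumptions)
pixels = torch.randn(8, 25, 2)   # 8 samples, 25 circle centers each
targets = torch.randn(8, 3)      # corresponding base-frame translations

for _ in range(100):             # training-loop sketch
    optimizer.zero_grad()
    loss = criterion(model(pixels), targets)
    loss.backward()
    optimizer.step()
```

Under this reading, the network learns the hand-eye mapping directly from data, which is how it could absorb systematic effects such as lens distortion and depth variation that a closed-form planar calibration would not capture.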