Neural rendering-based fast scene geometry modeling and retrieval method for digital twin assets
Virtual-real fusion is a defining characteristic of digital twin technology. In digital twins, virtual scenes are predominantly realized through geometric modeling techniques. To address the limited automation and heavy reliance on manual intervention in scene geometric modeling, which result in high costs and low efficiency, a digital twin geometric scene modeling approach was proposed. In this approach, neural rendering technology was introduced to acquire point cloud data from physical entities. A deep learning-based method was then devised for the semantic mapping of point cloud models to 3D CAD models, and it was applied to retrieving 3D CAD models from a digital twin geometric model asset library. A training dataset was curated and a digital twin geometric model asset library was constructed for experimental validation. The efficacy of the proposed approach was corroborated through comparative experiments and a case study on the disassembly of decommissioned batteries. The results confirmed that the proposed method achieved significantly lower costs and markedly higher efficiency than alternative digital twin scene geometric modeling methods.
digital twin; scene geometric modeling; neural rendering; model retrieval
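To illustrate the retrieval step described in the abstract, the following is a minimal sketch, not the paper's actual network or asset library: it assumes a PointNet-style encoder that maps a point cloud (e.g., one reconstructed via neural rendering) to an embedding, and retrieves CAD assets from a pre-embedded library by cosine similarity. All names (`PointCloudEncoder`, `retrieve`, the embedding dimension) are illustrative assumptions.

```python
# Hedged sketch: embedding-based retrieval of CAD assets from a point cloud query.
# Assumes a PointNet-style encoder and a pre-built library of CAD model embeddings;
# this is NOT the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointCloudEncoder(nn.Module):
    """Maps an (N, 3) point cloud to a fixed-length, unit-norm embedding."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # Per-point features, then symmetric max-pooling for permutation invariance.
        feats = self.mlp(points)               # (N, embed_dim)
        global_feat = feats.max(dim=0).values  # (embed_dim,)
        return F.normalize(global_feat, dim=0)


def retrieve(query_points: torch.Tensor,
             library_embeddings: torch.Tensor,
             encoder: PointCloudEncoder,
             top_k: int = 3) -> torch.Tensor:
    """Return indices of the top-k CAD assets by cosine similarity to the query."""
    with torch.no_grad():
        q = encoder(query_points)      # (D,) unit-norm query embedding
        sims = library_embeddings @ q  # (M,) cosine similarities (rows are unit-norm)
        return sims.topk(min(top_k, sims.numel())).indices


if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = PointCloudEncoder()
    # Placeholder library: embeddings of M CAD models sampled as point clouds.
    with torch.no_grad():
        library = torch.stack([encoder(torch.randn(1024, 3)) for _ in range(10)])
    query = torch.randn(2048, 3)  # stands in for a neural-rendering-derived point cloud
    print(retrieve(query, library, encoder))
```

In practice the encoder would be trained so that point cloud models and their corresponding 3D CAD models map to nearby embeddings, which is what allows retrieval from the asset library by nearest-neighbor search.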