
Variable-rate dynamic point cloud compression based on scene flow

A variable-rate dynamic point cloud compression network framework based on scene flow was proposed to address the problem that existing dynamic point cloud compression neural networks require training multiple network models. The raw dynamic point cloud was taken as input, and a scene flow network was used to estimate motion vectors. While compressing the motion vectors and residuals, a channel gain module was introduced to evaluate and scale the latent channels, achieving variable-rate control. A new joint training loss function, which comprehensively considers the motion vector loss and the rate-distortion loss, was designed to train the entire network framework end to end. To address the lack of ground-truth motion labels in dynamic point cloud datasets, a human body dataset with motion vector labels was built from the AMASS dataset for network training. Experimental results show that, compared with existing deep-learning-based dynamic point cloud compression methods, the bit rate of the proposed method is several orders of magnitude lower, and its reconstruction quality improves by 5% to 10% over a static compression network applied to each frame independently.
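The variable-rate mechanism described in the abstract (scaling latent channels with gain vectors before quantization) and the joint loss combining motion vector loss with rate-distortion loss can be sketched as follows. This is a minimal illustration assuming per-level gain and inverse-gain vector pairs in the style of gained variational codecs; every function name, shape, and weight here is hypothetical rather than taken from the paper, and the real gain vectors would be learned jointly with the network.

```python
import numpy as np

def make_gain_pairs(num_channels, num_levels, rng):
    """Create one (gain, inverse-gain) vector pair per target rate level.

    Random placeholders stand in for learned parameters.
    """
    gains = rng.uniform(0.5, 2.0, size=(num_levels, num_channels))
    return gains, 1.0 / gains

def encode_decode(latent, level, gains, inv_gains):
    """Scale each latent channel, quantize, then undo the scaling.

    A larger gain means a finer effective quantization step (1/gain)
    for that channel, hence more bits and lower distortion.
    """
    scaled = latent * gains[level][:, None]       # per-channel scaling
    quantized = np.round(scaled)                  # hard quantization
    reconstructed = quantized * inv_gains[level][:, None]
    return quantized, reconstructed

def joint_loss(rate, distortion, motion_loss, lam=0.01, alpha=1.0):
    """Sketch of the joint objective: rate-distortion plus motion loss.

    lam and alpha are assumed trade-off weights; the paper's exact
    weighting may differ.
    """
    return rate + lam * distortion + alpha * motion_loss
```

Selecting a different `level` index at inference time changes the per-channel quantization granularity, so a single trained model can cover several rate-distortion operating points instead of one model per rate.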

dynamic point cloud compression; variable rate; joint loss function; scene flow network

江照意、邹文钦、郑晟豪、宋超、杨柏林


School of Computer Science and Technology, Zhejiang Gongshang University, Hangzhou 310018, Zhejiang, China


National Natural Science Foundation of China (62172366); Zhejiang Provincial Natural Science Foundation (LY21F020013, LY22F020013)

2024

Journal of Zhejiang University (Engineering Science)
Zhejiang University

Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.625
ISSN: 1008-973X
Year, volume (issue): 2024, 58(2)