DocPedia: Unleashing the Power of Large Multimodal Model in the Frequency Domain for Versatile Document Understanding
In this work, we present DocPedia, a novel large multimodal model (LMM) for versatile OCR-free document understanding, capable of parsing images up to 2560 × 2560 resolution. Unlike existing studies that either struggle with high-resolution documents or abandon the large language model and thereby constrain their vision or language abilities, our DocPedia directly processes visual input in the frequency domain rather than the pixel space. This unique characteristic enables DocPedia to capture a greater amount of visual and textual information using a limited number of visual tokens. To consistently enhance both the perception and comprehension abilities of DocPedia, we develop a dual-stage training strategy and enrich the instructions/annotations of all training tasks, covering multiple document types. Extensive quantitative and qualitative experiments are conducted on various publicly available benchmarks, and the results confirm the mutual benefits of jointly learning perception and comprehension tasks. The results provide further evidence of the effectiveness and superior performance of DocPedia over other methods.
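The abstract states that DocPedia processes visual input in the frequency domain rather than the pixel space, which lets a high-resolution page be summarized with a limited number of visual tokens. It does not specify the exact transform, so the sketch below only illustrates the general idea with a block-wise 2D DCT (the JPEG-style transform); the block size, normalization, and function names here are assumptions for illustration, not DocPedia's actual implementation.

```python
import numpy as np
from scipy.fft import dctn


def blockwise_dct(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Turn a grayscale image (H, W) into per-block 2D DCT coefficient vectors.

    Returns an array of shape (H // block, W // block, block * block):
    one frequency-coefficient vector per non-overlapping block.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block              # crop to a multiple of the block size
    image = image[:h, :w].astype(np.float32)

    # Split into non-overlapping blocks: (H/b, W/b, b, b)
    blocks = image.reshape(h // block, block, w // block, block).swapaxes(1, 2)

    # Orthonormal 2D DCT over the last two axes of every block
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")

    # Flatten each block's coefficients into a single token-like vector
    return coeffs.reshape(h // block, w // block, block * block)


if __name__ == "__main__":
    page = np.random.rand(2560, 2560).astype(np.float32)   # stand-in for a high-resolution document page
    tokens = blockwise_dct(page, block=8)
    print(tokens.shape)   # (320, 320, 64)
```

Under these assumptions, a 2560 × 2560 page is represented by a 320 × 320 grid of 64-dimensional frequency vectors, i.e. 64× fewer spatial positions than raw pixels while retaining the block content, which is the kind of compaction that makes a small visual-token budget feasible.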