DocPedia: unleashing the power of large multimodal model in the frequency domain for versatile document understanding

In this work, we present DocPedia, a novel large multimodal model (LMM) for versatile OCR-free document understanding, capable of parsing images up to 2560 × 2560 resolution. Unlike existing studies that either struggle with high-resolution documents or forgo the large language model and thus constrain their vision or language abilities, our DocPedia directly processes visual input in the frequency domain rather than the pixel space. This unique characteristic enables DocPedia to capture a greater amount of visual and textual information using a limited number of visual tokens. To consistently enhance both the perception and comprehension abilities of our DocPedia, we develop a dual-stage training strategy and enrich the instructions/annotations of all training tasks, covering multiple document types. Extensive quantitative and qualitative experiments are conducted on various publicly available benchmarks, and the results confirm the mutual benefits of jointly learning perception and comprehension tasks. The results provide further evidence of the effectiveness and superior performance of our DocPedia over other methods.
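The frequency-domain idea can be illustrated with a minimal, hypothetical sketch: applying a blockwise 2-D DCT to a high-resolution page and keeping only each block's low-frequency coefficients yields a far more compact representation than raw pixels. The function blockwise_dct_tokens and its parameters below are illustrative assumptions, not DocPedia's actual pipeline.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct_tokens(image: np.ndarray, block: int = 8, keep: int = 4) -> np.ndarray:
    """Illustrative sketch (not DocPedia's implementation): split `image` (H, W)
    into non-overlapping `block`x`block` patches, apply a 2-D DCT to each patch,
    and keep only the top-left `keep`x`keep` low-frequency coefficients as that
    patch's compact frequency-domain "token"."""
    h, w = image.shape
    h, w = h - h % block, w - w % block                      # crop to a multiple of the block size
    patches = image[:h, :w].reshape(h // block, block, w // block, block).swapaxes(1, 2)
    coeffs = dctn(patches, axes=(-2, -1), norm="ortho")      # per-patch 2-D DCT
    return coeffs[..., :keep, :keep].reshape(-1, keep * keep)

page = np.random.rand(2560, 2560).astype(np.float32)        # a 2560x2560 "document" image
tokens = blockwise_dct_tokens(page)
print(tokens.shape)  # (102400, 16): 16 coefficients per 8x8 patch, a 4x reduction over raw pixels
```

Under these assumed settings, each 8 × 8 patch is summarized by 16 low-frequency coefficients instead of 64 pixel values, which hints at why a frequency-domain representation can cover high-resolution pages with a limited token budget.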

document understanding, large multimodal model, OCR-free, high-resolution, frequency

Hao FENG, Qi LIU, Hao LIU, Jingqun TANG, Wengang ZHOU, Houqiang LI, Can HUANG

Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230027, China

ByteDance Inc., Shanghai 200433, China

2024

Science China Information Sciences
Chinese Academy of Sciences

Indexed by: CSTPCD, EI
Impact factor: 0.715
ISSN: 1674-733X
Year, Volume (Issue): 2024, 67(12)