
How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites

In this paper, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM) that bridges the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements. (1) Strong vision encoder: we explored a continuous learning strategy for the large-scale vision foundation model InternViT-6B, boosting its visual understanding capabilities and allowing it to be transferred and reused across different LLMs. (2) Dynamic high-resolution: we divide images into 1 to 40 tiles of 448×448 pixels according to the aspect ratio and resolution of the input images, which supports inputs up to 4K resolution. (3) High-quality bilingual dataset: we carefully collected a high-quality bilingual dataset covering common scenes and document images, and annotated it with English and Chinese question-answer pairs, significantly enhancing performance in optical character recognition (OCR) and Chinese-related tasks. We evaluate InternVL 1.5 through a series of benchmarks and comparative studies. Compared to both open-source and proprietary commercial models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 multimodal benchmarks. Code and models are available at https://github.com/OpenGVLab/InternVL.
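The dynamic high-resolution scheme described in the abstract amounts to choosing a tile grid whose aspect ratio best matches the input image, resizing the image to that grid, and cutting it into 448×448 tiles, with the total tile count kept between 1 and 40. The Python sketch below illustrates this idea under those constraints only; the function names (pick_tile_grid, tile_image) and the tie-breaking rule are illustrative assumptions, not the official InternVL 1.5 preprocessing code.

def pick_tile_grid(width, height, min_tiles=1, max_tiles=40):
    """Choose a (cols, rows) tiling whose aspect ratio best matches the image.

    A minimal sketch of the dynamic high-resolution idea from the abstract;
    the official InternVL 1.5 preprocessing may differ in detail.
    """
    aspect = width / height
    best, best_diff = (1, 1), float("inf")
    for cols in range(1, max_tiles + 1):
        for rows in range(1, max_tiles + 1):
            n = cols * rows
            if n < min_tiles or n > max_tiles:
                continue
            diff = abs(aspect - cols / rows)
            # Prefer the closest aspect ratio; on a tie, keep the larger
            # grid so high-resolution inputs retain more detail.
            if diff < best_diff or (diff == best_diff and n > best[0] * best[1]):
                best, best_diff = (cols, rows), diff
    return best


def tile_image(width, height, tile_size=448, max_tiles=40):
    """Return the resized canvas size and the pixel boxes of the 448x448 tiles."""
    cols, rows = pick_tile_grid(width, height, max_tiles=max_tiles)
    target_w, target_h = cols * tile_size, rows * tile_size
    boxes = [
        (c * tile_size, r * tile_size, (c + 1) * tile_size, (r + 1) * tile_size)
        for r in range(rows)
        for c in range(cols)
    ]
    return (target_w, target_h), boxes


# Example: a 4K (3840x2160) frame maps to a 7x4 grid of 28 tiles,
# i.e. the image is resized to 3136x1792 before tiling.
canvas, boxes = tile_image(3840, 2160)
print(canvas, len(boxes))

Because each tile has the same 448×448 resolution, every tile can be encoded by the vision encoder independently, and the tile count simply grows with the input resolution up to the 40-tile (roughly 4K) budget described in the abstract.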

multimodal model, open-source, vision encoder, dynamic resolution, bilingual dataset

Zhe CHEN, Weiyun WANG, Hao TIAN, Shenglong YE, Zhangwei GAO, Erfei CUI, Wenwen TONG, Kongzhi HU, Jiapeng LUO, Zheng MA, Ji MA, Jiaqi WANG, Xiaoyi DONG, Hang YAN, Hewei GUO, Conghui HE, Botian SHI, Zhenjiang JIN, Chao XU, Bin WANG, Xingjian WEI, Wei LI, Wenjian ZHANG, Bo ZHANG, Pinlong CAI, Licheng WEN, Xiangchao YAN, Min DOU, Lewei LU, Xizhou ZHU, Tong LU, Dahua LIN, Yu QIAO, Jifeng DAI, Wenhai WANG


State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China

Shanghai AI Laboratory, Shanghai 200232, China

School of Computer Science, Fudan University, Shanghai 200433, China

SenseTime Research, Shanghai 200233, China

Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China

Department of Electronic Engineering, Tsinghua University, Beijing 100084, China



2024

SCIENCE CHINA Information Sciences
Chinese Academy of Sciences


CSTPCD, EI
Impact factor: 0.715
ISSN: 1674-733X
Year, volume (issue): 2024, 67(12)