Science China Information Sciences, 2024, Vol. 67, Issue 12: 1-18. DOI: 10.1007/s11432-024-4231-5

How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites

Zhe CHEN 1, Weiyun WANG 2, Hao TIAN 3, Shenglong YE 4, Zhangwei GAO 4, Erfei CUI 4, Wenwen TONG 3, Kongzhi HU 3, Jiapeng LUO 3, Zheng MA 3, Ji MA 3, Jiaqi WANG 4, Xiaoyi DONG 5, Hang YAN 4, Hewei GUO 3, Conghui HE 4, Botian SHI 4, Zhenjiang JIN 4, Chao XU 4, Bin WANG 4, Xingjian WEI 4, Wei LI 4, Wenjian ZHANG 4, Bo ZHANG 4, Pinlong CAI 4, Licheng WEN 4, Xiangchao YAN 4, Min DOU 4, Lewei LU 3, Xizhou ZHU 6, Tong LU 7, Dahua LIN 5, Yu QIAO 4, Jifeng DAI 8, Wenhai WANG 5

Author Information

  • 1. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; Shanghai AI Laboratory, Shanghai 200232, China
  • 2. School of Computer Science, Fudan University, Shanghai 200433, China; Shanghai AI Laboratory, Shanghai 200232, China
  • 3. SenseTime Research, Shanghai 200233, China
  • 4. Shanghai AI Laboratory, Shanghai 200232, China
  • 5. Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China; Shanghai AI Laboratory, Shanghai 200232, China
  • 6. Department of Electronic Engineering, Tsinghua University, Beijing 100084, China; Shanghai AI Laboratory, Shanghai 200232, China; SenseTime Research, Shanghai 200233, China
  • 7. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
  • 8. Department of Electronic Engineering, Tsinghua University, Beijing 100084, China; Shanghai AI Laboratory, Shanghai 200232, China

Abstract

In this paper, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM) that bridges the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements. (1) Strong vision encoder: we explored a continuous learning strategy for the large-scale vision foundation model InternViT-6B, boosting its visual understanding capabilities and allowing it to be transferred and reused across different LLMs. (2) Dynamic high-resolution: we divide images into 1 to 40 tiles of 448×448 pixels according to the aspect ratio and resolution of the input images, supporting inputs of up to 4K resolution. (3) High-quality bilingual dataset: we carefully collected a high-quality bilingual dataset covering common scenes and document images, annotated with English and Chinese question-answer pairs, significantly enhancing performance on optical character recognition (OCR) and Chinese-related tasks. We evaluate InternVL 1.5 through a series of benchmarks and comparative studies. Compared with both open-source and proprietary commercial models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results on 8 of 18 multimodal benchmarks. Code and models are available at https://github.com/OpenGVLab/InternVL.
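The dynamic high-resolution scheme in point (2) can be illustrated with a small sketch. The abstract only states that an image is split into 1 to 40 tiles of 448×448 pixels according to its aspect ratio and resolution; the function name `choose_tile_grid` and the tie-breaking rule below are assumptions for illustration, not the paper's exact algorithm.

```python
def choose_tile_grid(width, height, tile=448, max_tiles=40):
    """Pick a (cols, rows) tiling whose aspect ratio best matches the image.

    Hypothetical sketch of the dynamic high-resolution idea (1 to 40 tiles
    of 448x448 pixels); the paper's exact selection rule may differ.
    """
    target = width / height
    img_area = width * height
    best, best_diff = (1, 1), float("inf")
    for rows in range(1, max_tiles + 1):
        for cols in range(1, max_tiles // rows + 1):  # rows * cols <= max_tiles
            n = rows * cols
            diff = abs(cols / rows - target)
            # Prefer the closest aspect ratio; break ties toward the grid
            # whose total pixel budget is closest to the input resolution.
            better_tie = (diff == best_diff and
                          abs(n * tile * tile - img_area) <
                          abs(best[0] * best[1] * tile * tile - img_area))
            if diff < best_diff or better_tie:
                best, best_diff = (cols, rows), diff
    return best  # image is resized to (cols*tile) x (rows*tile), then sliced

print(choose_tile_grid(1920, 1080))  # a 16:9 full-HD input
print(choose_tile_grid(448, 448))    # a single-tile input
```

Under this sketch, the selected grid determines both the resize target and the number of 448×448 patches fed to the vision encoder, which is how a 4K input can be covered within the 40-tile budget.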

Key words

multimodal model / open-source / vision encoder / dynamic resolution / bilingual dataset

Publication year: 2024
Journal: Science China Information Sciences (Chinese Academy of Sciences)
Indexed in: CSTPCD, EI
Impact factor: 0.715
ISSN: 1674-733X