
Advances in edge-cloud collaboration and evolution for large-small models

Generative foundation models are driving profound changes in artificial intelligence, demonstrating general-purpose capabilities in tasks such as natural language processing, multimodal understanding, and content synthesis. Large models deployed on the cloud side provide general intelligent services but face key challenges such as high latency and insufficient personalization, while small models deployed on the edge side capture personalized scenario data but suffer from limited generalization. Edge-cloud collaboration between large and small models aims to combine the general capabilities of large models with the specialized capabilities of small models, so that the two learn and evolve through collaborative interaction and thereby empower downstream vertical industry scenarios. Taking large language models and large multimodal models as representatives, this paper reviews the mainstream architectures, typical pre-training techniques, and adaptation and fine-tuning methods of generative foundation models; surveys the development history and recent research on key techniques for large model compression, including model pruning, model quantization, and knowledge distillation in the era of large models; proposes, based on differences in collaboration purposes and mechanisms among models, a taxonomy of collaborative evolution comprising collaborative training, collaborative inference, and collaborative planning of large and small models; and summarizes a series of representative new techniques and ideas such as bidirectional edge-cloud model distillation, modular design, and generative agents. Overall, this paper examines the international and domestic development of large-small model collaborative evolution from three perspectives, namely, generative foundation models, large model compression techniques, and edge-cloud collaboration modes; compares their advantages and gaps; and analyzes the development trends of foundation-model empowerment in terms of application prospects, model architecture design, vertical-domain model fusion, personalization, and security and trustworthiness challenges.
Advances in edge-cloud collaboration and evolution for large-small models
Generative foundation models are facilitating significant transformations in the field of artificial intelligence. They demonstrate general intelligence in diverse research fields, including natural language processing, multimodal content understanding, and image and multimodal content synthesis. Generative foundation models often consist of billions or even hundreds of billions of parameters. Thus, they are often deployed on the cloud side to provide powerful and general intelligent services. However, this type of service can be confronted with crucial challenges in practice, such as high latency induced by communications between the cloud and local devices, and insufficient personalization capabilities because servers often cannot access local data owing to privacy concerns. By contrast, low-complexity lightweight models are located at the edge side to capture personalized and dynamic scenario data. However, they may suffer from poor generalization. Large and lightweight (or large-small) model collaboration aims to integrate the general intelligence of large foundation models and the personalized intelligence of small lightweight models. This integration empowers downstream vertical domain-specific applications through the interaction and collaboration of both types of intelligent models. Large-small model collaboration has recently attracted increasing attention and has become a focus of research and development in academia and industry. It has also been predicted to be an important technology trend. We therefore thoroughly investigate this area, highlighting recent progress and offering potential inspiration for related research. In this study, we first overview representative large language models (LLMs) and large multimodal models. We focus on their mainstream Transformer-based model architectures, including encoder-only, decoder-only, and encoder-decoder models. Corresponding pre-training technologies, such as next sentence prediction, sequence-to-sequence modeling, and contrastive learning, as well as parameter-efficient fine-tuning methods, with representatives including low-rank adaptation and prompt tuning, are also explored. We then review the development history and the latest advancements of model compression techniques, including model pruning, model quantization, and knowledge distillation, in the era of foundation models. Based on the differences in model collaboration purposes and mechanisms, we propose a new classification method and taxonomy for the study of large-small model collaboration, namely, collaborative training, collaborative inference, and collaborative planning. Specifically, we summarize recent and representative methods that consist of dual-directional knowledge distillation between large models at the cloud side and small models deployed at the edge side, modular design of intelligent models that splits functional models between the cloud and the edge, and generative agents that collaborate to complete complex tasks in an autonomous and intelligent manner. In collaborative training, a main challenge is dealing with the heterogeneity of data distributions and model architectures between the cloud and client sides. Data privacy may also be a concern during collaborative training, particularly in privacy-sensitive cases. Despite much progress in collaborative inference, automatically slicing and completing a complicated task in a collective way remains challenging. Furthermore, the communication costs between computing facilities can be another concern. Collaborative planning is a new paradigm that has gained attention with the increasing study and promising progress of LLM-centric agents (LLM agents). This paradigm often involves multiple LLM agents that compete or cooperate to complete a challenging task. It often leverages emerging capabilities of LLMs, such as in-context learning and chain-of-thought reasoning, to automatically divide a complicated task into several subtasks. By completing and assembling the different subtasks, the global task can be accomplished in a collaborative manner. This scheme finds diverse applications, such as developing games and simulating societies. However, it may suffer from drawbacks inherent in LLMs, including hallucination and adversarial vulnerabilities. Thus, more robust and reliable collaborative planning schemes remain to be investigated. In summary, this work surveys large-small model collaboration techniques from the perspectives of generative foundation models, model compression, and heterogeneous model collaboration via LLM agents. This work also compares the advantages and disadvantages of international and domestic technology developments in this research realm. We conclude that, although the gaps between domestic and advanced international studies in this area are narrowing, particularly for newly emerging LLM agents, original and major breakthroughs may still be lacking. Certain notable advantages of domestic progress are closely related to industrial applications, owing to rich data resources from industry; as a result, the development of domain-specific LLMs is relatively advanced. In addition, this study envisions the applications of large-small model collaboration and discusses key challenges and promising directions on this topic. 1) The design of efficient model architectures includes developing new architectures that achieve low-complexity inference while maintaining long-sequence modeling abilities comparable to those of Transformers, and further improving the scalability of mixture-of-experts-based architectures. 2) Current model compression methods are mainly designed for vision models. Thus, developing techniques specifically for LLMs and large multimodal models is important to preserve their emergent abilities during compression. 3) Existing personalization methods mainly focus on discriminative models, and due attention needs to be paid to the efficient personalization of generative foundation models. 4) Generative intelligence often suffers from fraudulent content (e.g., generated fake imagery, deepfake videos, and fake news) and various types of attacks (e.g., adversarial attacks, jailbreaking attacks, and Byzantine attacks). Thus, security and trustworthiness issues arise in practical applications. Therefore, this study also advocates deeper investigation of these emerging security threats and the development of effective defenses to counter these crucial issues during large-small model collaboration, thereby empowering vertical domains more safely.
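As a concrete illustration of the dual-directional knowledge distillation between cloud-side large models and edge-side small models mentioned above, the following PyTorch sketch shows one possible training round. It is a minimal, hypothetical example rather than the method of any specific work surveyed here; the toy models, random data, temperature, and loss weighting are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of bidirectional (cloud <-> edge) knowledge distillation.
# The tiny MLPs, random data, and hyperparameters are illustrative placeholders only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-ins for a "large" cloud-side model and a "small" edge-side model (shared label space).
cloud_model = torch.nn.Sequential(
    torch.nn.Linear(32, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))
edge_model = torch.nn.Sequential(
    torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 10))

opt_cloud = torch.optim.Adam(cloud_model.parameters(), lr=1e-3)
opt_edge = torch.optim.Adam(edge_model.parameters(), lr=1e-3)

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

# One illustrative round on a local (edge-side) batch.
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))

# Cloud -> edge: the small model fits local labels while distilling from the cloud teacher.
edge_logits = edge_model(x)
edge_loss = F.cross_entropy(edge_logits, y) + distill_loss(edge_logits, cloud_model(x).detach())
opt_edge.zero_grad()
edge_loss.backward()
opt_edge.step()

# Edge -> cloud: the large model is nudged toward the updated edge model's outputs,
# absorbing personalized, scenario-specific behavior captured on the device.
cloud_loss = distill_loss(cloud_model(x), edge_model(x).detach())
opt_cloud.zero_grad()
cloud_loss.backward()
opt_cloud.step()
```

In this reading, the cloud-to-edge direction transfers general knowledge into the lightweight model, while the edge-to-cloud direction feeds personalized, scenario-specific knowledge captured on devices back to the large model.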

generative foundation models; model compression; large-small model collaboration; edge-cloud collaboration; generative agents; generative AI

王永威、沈弢、张圣宇、吴帆、赵洲、蔡海滨、吕承飞、马利庄、杨承磊、吴飞


Institute of Artificial Intelligence, Zhejiang University, Hangzhou 310058

Shanghai Institute for Advanced Study, Zhejiang University, Shanghai 201203

Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200241

Software Engineering Institute, East China Normal University, Shanghai 200062

Taobao (China) Software Co., Ltd., Hangzhou 310023

School of Software, Shandong University, Jinan 250011


generative large models; large model compression; large-small model collaborative evolution; edge-cloud collaborative evolution; generative agents; generative artificial intelligence

New Generation Artificial Intelligence National Science and Technology Major Project; National Natural Science Foundation of China; National Natural Science Foundation of China; Zhejiang Provincial Science and Technology Program; Starry Night Science Fund (Zhejiang University)

2022ZD0119100, 62037001, 62441605, 2022C01044

2024

Journal of Image and Graphics (中国图象图形学报)
Institute of Remote Sensing Applications, Chinese Academy of Sciences; China Society of Image and Graphics; Institute of Applied Physics and Computational Mathematics, Beijing


CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 1.111
ISSN: 1006-8961
Year, volume (issue): 2024, 29(6)