Hybrid SD: Edge-Cloud Collaborative Inference for Stable Diffusion Models

Bibliographic Details
Published in: arXiv.org
Main Authors: Chenqian Yan, Songwei Liu, Hongjian Liu, Xurui Peng, Xiaojian Wang, Fangmin Chen, Lean Fu, Xing Mei
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 30.10.2024

Summary: Stable Diffusion Models (SDMs) have shown remarkable proficiency in image synthesis. However, their broad application is impeded by their large model sizes and intensive computational requirements, which typically require expensive cloud servers for deployment. Conversely, while there are many compact models tailored for edge devices that can reduce these demands, they often compromise on semantic integrity and visual quality compared to full-sized SDMs. To bridge this gap, we introduce Hybrid SD, an innovative, training-free SDM inference framework designed for edge-cloud collaborative inference. Hybrid SD distributes the early steps of the diffusion process to large models deployed on cloud servers, enhancing semantic planning. Small, efficient models deployed on edge devices are then used to refine visual details in the later stages. Acknowledging the diversity of edge devices with differing computational and storage capacities, we apply structural pruning to the SDM U-Net and train a lightweight VAE. Empirical evaluations demonstrate that our compressed models achieve state-of-the-art parameter efficiency (225.8M) on edge devices with competitive image quality. Additionally, Hybrid SD reduces cloud cost by 66% with edge-cloud collaborative inference.
ISSN: 2331-8422
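
The summary describes the core mechanism: the large cloud model handles the early, semantically decisive denoising steps, and a pruned edge model plus lightweight VAE finish the later refinement steps. Below is a minimal sketch of that step split, assuming diffusers-style scheduler and U-Net call signatures; `cloud_unet`, `edge_unet`, `vae_decode`, and `switch_step` are hypothetical placeholders, not the authors' released interface.

```python
# Minimal sketch of the edge-cloud step split described in the summary.
# Assumes diffusers-style scheduler/U-Net interfaces; cloud_unet, edge_unet,
# vae_decode, and switch_step are hypothetical placeholders.
import torch

@torch.no_grad()
def hybrid_sd_sample(cloud_unet, edge_unet, scheduler, vae_decode, text_emb,
                     switch_step, num_steps=50, latent_shape=(1, 4, 64, 64)):
    """Run the first `switch_step` denoising steps with the large cloud U-Net
    (semantic planning), then hand the latent to the small edge U-Net
    (detail refinement), and decode with the lightweight VAE."""
    scheduler.set_timesteps(num_steps)
    latents = torch.randn(latent_shape) * scheduler.init_noise_sigma

    for i, t in enumerate(scheduler.timesteps):
        # Early steps use the cloud model; later steps use the edge model.
        unet = cloud_unet if i < switch_step else edge_unet
        noise_pred = unet(latents, t, encoder_hidden_states=text_emb).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample

    # Decode the refined latent with the lightweight edge VAE.
    return vae_decode(latents)
```

In this sketch, moving `switch_step` earlier shifts more denoising steps onto the edge device, trading cloud cost against the fidelity of the semantic planning stage, which is the lever behind the cloud-cost reduction reported in the summary.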