Text-to-Model: Text-Conditioned Neural Network Diffusion for Train-Once-for-All Personalization

Bibliographic Details
Published in: arXiv.org
Main Authors: Li, Zexi; Gao, Lingzhi; Wu, Chao
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 23.05.2024
Summary: Generative artificial intelligence (GenAI) has made significant progress in understanding world knowledge and generating content from human languages across various modalities, such as text-to-text large language models, text-to-image Stable Diffusion, and text-to-video Sora. In this paper, we investigate the capability of GenAI for text-to-model generation, to see whether GenAI can comprehend hyper-level knowledge embedded within the parameters of AI models themselves. Specifically, we study a practical scenario termed train-once-for-all personalization, which aims to generate personalized models for diverse end-users and tasks using text prompts. Inspired by the recent emergence of neural network diffusion, we present Tina, a text-conditioned neural network diffusion method for train-once-for-all personalization. Tina leverages a diffusion transformer model conditioned on task descriptions embedded with a CLIP model. Despite the astronomical number of potential personalized tasks (e.g., \(1.73\times10^{13}\)), by our design, Tina demonstrates remarkable in-distribution and out-of-distribution generalization even when trained on small datasets (\(\sim 1000\) samples). We further verify whether and how Tina understands world knowledge by analyzing its capabilities under zero-shot/few-shot image prompts, different numbers of personalized classes, prompts of natural language descriptions, and when predicting unseen entities.
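For intuition on the task count quoted above: the record does not spell out the combinatorics, but the figure is consistent with each personalized task being, for example, a choice of 10 target classes from a 100-class pool (an assumption, not stated here), since

\(\binom{100}{10} = \frac{100!}{10!\,90!} \approx 1.73\times10^{13}.\)

The summary also sketches the architecture: a diffusion transformer that denoises network parameters, conditioned on a CLIP text embedding of the task description. Below is a minimal, self-contained PyTorch sketch of that idea under stated assumptions; the class and function names (ParamDiT, training_step) are illustrative, not the authors' released code, and the CLIP text embedding is stubbed with a random tensor for self-containedness.

    # Sketch of text-conditioned neural-network diffusion in the spirit of Tina.
    # Assumptions: parameters of a target model are flattened into one vector,
    # chunked into tokens, and denoised by a small transformer conditioned on
    # a text embedding (stand-in for CLIP) plus a diffusion timestep.
    import torch
    import torch.nn as nn

    class ParamDiT(nn.Module):
        """Tiny transformer that denoises a flattened parameter vector,
        conditioned on a text embedding and a diffusion timestep."""
        def __init__(self, param_dim: int, cond_dim: int = 512, d_model: int = 256,
                     n_tokens: int = 16, n_layers: int = 4):
            super().__init__()
            assert param_dim % n_tokens == 0
            self.chunk = param_dim // n_tokens
            self.in_proj = nn.Linear(self.chunk, d_model)
            self.cond_proj = nn.Linear(cond_dim, d_model)  # text embedding -> token
            self.t_embed = nn.Sequential(nn.Linear(1, d_model), nn.SiLU(),
                                         nn.Linear(d_model, d_model))
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.out_proj = nn.Linear(d_model, self.chunk)

        def forward(self, noisy_params, t, text_emb):
            B = noisy_params.shape[0]
            tokens = self.in_proj(noisy_params.view(B, -1, self.chunk))
            cond = (self.cond_proj(text_emb)
                    + self.t_embed(t[:, None].float())).unsqueeze(1)
            h = self.blocks(torch.cat([cond, tokens], dim=1))[:, 1:]  # drop cond token
            return self.out_proj(h).view(B, -1)  # predicted noise

    # One DDPM-style training step: corrupt target-model parameters with noise
    # at a random timestep, then regress the noise given the text embedding.
    def training_step(model, params, text_emb, alphas_bar):
        t = torch.randint(0, len(alphas_bar), (params.shape[0],))
        noise = torch.randn_like(params)
        a = alphas_bar[t].sqrt()[:, None]
        s = (1 - alphas_bar[t]).sqrt()[:, None]
        noisy = a * params + s * noise
        return nn.functional.mse_loss(model(noisy, t, text_emb), noise)

    # Toy usage with random stand-ins for real checkpoints and CLIP embeddings.
    param_dim, cond_dim, T = 1024, 512, 100
    net = ParamDiT(param_dim, cond_dim)
    alphas_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)
    loss = training_step(net, torch.randn(8, param_dim), torch.randn(8, cond_dim), alphas_bar)
    loss.backward()

In practice the conditioning would come from a frozen CLIP text encoder applied to the task description, and the denoised vector would be reshaped back into the layers of the personalized model; this sketch only shows the conditioning-and-denoising loop.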
ISSN: 2331-8422