On the Trustworthiness Landscape of State-of-the-art Generative Models: A Survey and Outlook

Bibliographic Details
Published in: International Journal of Computer Vision, Vol. 133, No. 7, pp. 4317-4348
Main Authors: Fan, Mingyuan; Wang, Chengyu; Chen, Cen; Liu, Yang; Huang, Jun
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.07.2025
Summary: Diffusion models and large language models have emerged as leading-edge generative models, revolutionizing various aspects of human life. However, their practical implementation has also exposed inherent risks, bringing to light their potential downsides and sparking concerns about their trustworthiness. Despite the wealth of literature on this subject, a comprehensive survey that specifically delves into the intersection of large-scale generative models and their trustworthiness remains largely absent. To bridge this gap, this paper investigates both long-standing and emerging threats associated with these models across four fundamental dimensions: 1) privacy, 2) security, 3) fairness, and 4) responsibility. Based on our investigation results, we develop an extensive survey that outlines the trustworthiness of large generative models. Following that, we provide practical recommendations and identify promising research directions for generative AI, ultimately promoting the trustworthiness of these models and benefiting society as a whole.
ISSN: 0920-5691, 1573-1405
DOI: 10.1007/s11263-025-02375-w