Stable Diffusion for Down Syndrome Face Generation

Bibliographic Details
Published in: International Conference on Bio-engineering for Smart Technologies (Online), pp. 1 - 4
Main Authors: Saeed, Muhammad Ahmad; Zein, Hazem; Nait-Ali, Amine
Format: Conference Proceeding
Language: English
Published: IEEE, 14.05.2025
ISSN: 2831-4352
DOI: 10.1109/BioSMART66413.2025.11046078

Summary: The growing need for diverse and inclusive datasets in computer vision has prompted research into synthetic image generation for underrepresented groups. This paper explores using Stable Diffusion XL (SD-XL 1.0-base) to generate high-quality synthetic images of individuals with Down syndrome. By leveraging Low-Rank Adaptation (LoRA) and DreamBooth for concept-specific adjustments, our approach accurately reproduces key facial features. A curated dataset of 26 high-resolution images was used to fine-tune the model. By training a CNN-based classification system on the synthetic data, we demonstrate that ResNet152V2 achieves 98.33% accuracy, underscoring the viability of synthetic images for data processing applications. The results enhance the representation of individuals with Down syndrome in computer vision while addressing data scarcity and privacy concerns in biomedical imaging.
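
As a rough illustration of the generation stage described in the summary, the sketch below loads the SD-XL 1.0-base checkpoint with the Hugging Face diffusers library and applies a DreamBooth/LoRA adapter before sampling. The adapter path, the instance prompt token, and the sampler settings are illustrative assumptions; the record does not specify the authors' exact configuration.

```python
# Minimal sketch (not the authors' code): sample from SD-XL 1.0-base with a
# DreamBooth/LoRA adapter attached, using Hugging Face diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # SD-XL 1.0-base, as named in the abstract
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical path to LoRA weights produced by DreamBooth fine-tuning
# on the curated 26-image dataset.
pipe.load_lora_weights("output/down_syndrome_lora")

# Illustrative instance prompt; the actual trigger token is not given in the record.
prompt = "a portrait photo of sks person, studio lighting, high detail"

image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("synthetic_face_001.png")
```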
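
The classification experiment quoted in the summary corresponds to a standard transfer-learning setup; the sketch below builds a ResNet152V2 feature extractor in Keras over synthetic images organised in class folders. The directory layout, binary-classification framing, and hyperparameters are assumptions for illustration, not the paper's reported configuration.

```python
# Minimal sketch (assumed setup): ResNet152V2 transfer learning on synthetic images.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)

# Hypothetical directory layout: synthetic_faces/{down_syndrome,control}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "synthetic_faces", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "synthetic_faces", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=16)

# ImageNet-pretrained backbone used as a frozen feature extractor.
base = tf.keras.applications.ResNet152V2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary decision (illustrative)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```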