Multi‐style cartoonization: Leveraging multiple datasets with generative adversarial networks
Published in | Computer animation and virtual worlds Vol. 35; no. 3 |
---|---|
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | Chichester: Wiley Subscription Services, Inc., 01.05.2024 |
Summary: | Scene cartoonization aims to convert photos into stylized cartoons. While generative adversarial networks (GANs) can generate high‐quality images, previous methods focus on individual images or single styles, ignoring relationships between datasets. We propose a novel multi‐style scene cartoonization GAN that leverages multiple cartoon datasets jointly. Our main technical contribution is a multi‐branch style encoder that disentangles representations to model styles as distributions over entire datasets rather than individual images. Combined with a multi‐task discriminator and perceptual losses optimized across collections, our model achieves state‐of‐the‐art diverse stylization while preserving semantics. Experiments demonstrate that by learning from inter‐dataset relationships, our method translates photos into cartoon images with improved realism and style fidelity compared to prior methods, without iterative re‐training for new styles.
We introduce a multi‐style scene cartoonization GAN that advances photo‐to‐cartoon conversion. By amalgamating multiple cartoon datasets and employing novel encoding methods, our model achieves more realistic and diverse cartoon effects, surpassing previous approaches. By capturing relationships between datasets, it produces high‐quality cartoon images without tedious iterative retraining, marking a subtle but significant advancement in the field. |
---|---|
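The summary's central idea, a shared encoder with one branch per cartoon dataset so that each branch captures a dataset-level style rather than a single image's style, can be sketched as follows. This is a minimal illustrative NumPy mock-up, not the paper's actual architecture: the two-layer linear structure, layer sizes, and class name are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiBranchStyleEncoder:
    """Hypothetical sketch: a shared backbone followed by one small head
    per cartoon dataset, so each head models one dataset's style."""

    def __init__(self, feat_dim=64, hidden_dim=32, style_dim=8, num_styles=3):
        # Shared backbone weights (stand-in for a convolutional extractor).
        self.backbone = rng.standard_normal((feat_dim, hidden_dim)) * 0.1
        # One lightweight head per cartoon dataset / style branch.
        self.heads = [rng.standard_normal((hidden_dim, style_dim)) * 0.1
                      for _ in range(num_styles)]

    def encode(self, x, style_idx):
        # x: (batch, feat_dim) flattened image features.
        h = np.maximum(x @ self.backbone, 0.0)  # shared ReLU features
        return h @ self.heads[style_idx]        # per-dataset style code

enc = MultiBranchStyleEncoder()
batch = rng.standard_normal((4, 64))
code_a = enc.encode(batch, 0)  # style codes under dataset 0's branch
code_b = enc.encode(batch, 1)  # same photos routed through dataset 1's branch
```

Because the backbone is shared and only the heads differ, the same photo yields a different style code per branch, which is how one model can serve several styles without retraining, per the abstract's claim.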
ISSN: | 1546-4261; 1546-427X |
DOI: | 10.1002/cav.2269 |