A Fast Scenario Transfer Approach for Portrait Styles Through Collaborative Awareness of Convolutional Neural Network and Generative Adversarial Learning

Bibliographic Details
Published in: Journal of Circuits, Systems, and Computers, Vol. 33, No. 7
Main Authors: Wang, Yajie; Liang, Shaolin
Format: Journal Article
Language: English
Published: World Scientific Publishing Co. Pte. Ltd., Singapore, 15.05.2024

Summary: The fast scenario transfer of portrait styles has been a universal concern in the area of image processing. Conventionally, it was realized mostly by manual manipulation based on expert experience, which costs a large amount of human labor. To fit the growing business volume in the era of Internet 4.0, it is necessary to investigate a reliable autonomous workflow that generates portrait-style results via intelligent algorithms. Therefore, this paper proposes a fast scenario transfer method for portrait styles through the collaborative awareness of a convolutional neural network (CNN) and generative adversarial learning (GAL). On the one hand, a CNN structure is designed to extract features from the initial images, which are then used for subsequent image generation. On the other hand, GAL is utilized to generate new images with post-transfer scenario styles, so that the expected scenario style transfer results are output through long-term iterations of image generation. In particular, the loss function is designed to simultaneously account for local and global characteristics. Finally, the analysis and experimental results show that, using a pre-trained feature extractor, the correction process can be completed by injecting the extracted input face features into feature maps of corresponding sizes during the generation process, thereby improving face conversion efficiency.
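
Note: The summary describes the method only at a high level. Below is a minimal PyTorch sketch of the two mechanisms the abstract names, namely injecting CNN encoder features into generator feature maps of matching size, and a loss combining local and global terms. All module names, channel widths, and loss weights are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # CNN feature extractor producing face feature maps at two scales.
        def __init__(self):
            super().__init__()
            self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
            self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())

        def forward(self, x):
            f1 = self.stage1(x)   # H/2 x W/2, 64 channels
            f2 = self.stage2(f1)  # H/4 x W/4, 128 channels
            return f1, f2

    class Generator(nn.Module):
        # Decoder that fuses an injected encoder feature map at the matching spatial size.
        def __init__(self):
            super().__init__()
            self.up1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU())
            self.fuse = nn.Conv2d(64 + 64, 64, 1)  # merge injected f1 with the decoder map
            self.up2 = nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1)

        def forward(self, f1, f2):
            h = self.up1(f2)                      # upsample back to H/2 x W/2
            h = self.fuse(torch.cat([h, f1], 1))  # inject the matching-size face features
            return torch.tanh(self.up2(h))        # stylized output at H x W

    def combined_loss(fake, real, disc_fake_logits, w_local=1.0, w_global=1.0):
        # Mixes a local pixel-wise term with a global adversarial term, in the
        # spirit of the abstract's "local and global characteristics".
        local = nn.functional.l1_loss(fake, real)
        global_ = nn.functional.binary_cross_entropy_with_logits(
            disc_fake_logits, torch.ones_like(disc_fake_logits))
        return w_local * local + w_global * global_

    # One forward pass on a dummy 128x128 portrait batch.
    enc, gen = Encoder(), Generator()
    x = torch.randn(2, 3, 128, 128)
    out = gen(*enc(x))
    print(out.shape)  # torch.Size([2, 3, 128, 128])

In a full training loop, combined_loss would be minimized for the generator while a separate discriminator supplies disc_fake_logits; the paper's actual network depths and loss weighting are not specified in this record.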
Bibliography:This paper was recommended by Regional Editor Takuro Sato.
ISSN: 0218-1266, 1793-6454
DOI: 10.1142/S0218126624501214