AffectGAN: Affect-Based Generative Art Driven by Semantics

Bibliographic Details
Published in: 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 1-7
Main Authors: Galanos, Theodoros; Liapis, Antonios; Yannakakis, Georgios N.
Format: Conference Proceeding
Language: English
Published: IEEE, 28.09.2021

Summary: This paper introduces a novel method for generating artistic images that express particular affective states. Leveraging state-of-the-art deep learning methods for visual generation (through generative adversarial networks), semantic models from OpenAI, and the annotated dataset of the visual art encyclopedia WikiArt, our AffectGAN model is able to generate images based on specific or broad semantic prompts and intended affective outcomes. A small dataset of 32 images generated by AffectGAN is annotated by 50 participants in terms of the particular emotion they elicit, as well as their quality and novelty. Results show that for most instances the intended emotion used as a prompt for image generation matches the participants' responses. This small-scale study brings forth a new vision towards blending affective computing with computational creativity, enabling generative systems with intentionality in terms of the emotions they wish their output to elicit.
DOI: 10.1109/ACIIW52867.2021.9666317