RAMAS: Russian Multimodal Corpus of Dyadic Interaction for Affective Computing

Bibliographic Details
Published in: Speech and Computer, Vol. 11096, pp. 501-510
Main Authors: Perepelkina, Olga; Kazimirova, Evdokia; Konstantinova, Maria
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2018
Series: Lecture Notes in Computer Science

Summary: Emotion expression encompasses various types of information, including face and eye movement, voice and body motion. Emotions collected from real conversations are difficult to classify using a single channel, which is why multimodal techniques have recently become more popular in automatic emotion recognition. Multimodal databases that include audio, video, 3D motion capture and physiological data are quite rare. We collected the Russian Acted Multimodal Affective Set (RAMAS), the first multimodal affective corpus in the Russian language. Our database contains approximately 7 hours of high-quality close-up video recordings of faces, speech, motion-capture data and physiological signals such as electrodermal activity and photoplethysmogram. The subjects were 10 actors who played out interactive dyadic scenarios. Each scenario involved one of the basic emotions (Anger, Sadness, Disgust, Happiness, Fear or Surprise) and such characteristics of social interaction as Domination and Submission. To capture the emotions the subjects actually felt, we asked them to fill in short questionnaires (self-reports) after each played scenario. The recordings were labeled by 21 annotators, with at least five annotators marking each scenario. We present our multimodal data collection, the annotation process, an inter-rater agreement analysis, and a comparison between the self-reports and the received annotations. RAMAS is an open database that provides the research community with multimodal data on the interrelation of faces, speech, gestures and physiology. Such material is useful for various investigations and for the development of automatic affective systems.
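The abstract mentions an inter-rater agreement analysis over 21 annotators, with at least five annotators labeling each scenario. As a rough illustration of how agreement on categorical emotion labels can be quantified, below is a minimal Python sketch computing Fleiss' kappa; the choice of kappa, the label set, and the toy counts are assumptions for illustration only, not the metric or data actually used in the paper.

    # Illustrative sketch: Fleiss' kappa for categorical emotion annotations.
    # Assumptions (not from the paper): kappa as the agreement metric, the six
    # basic emotion labels as categories, and five annotators per clip.
    import numpy as np

    LABELS = ["Anger", "Sadness", "Disgust", "Happiness", "Fear", "Surprise"]

    def fleiss_kappa(ratings: np.ndarray) -> float:
        """ratings[i, j] = number of annotators who assigned label j to item i.
        Every row must sum to the same number of annotators n."""
        n = ratings.sum(axis=1)[0]           # annotators per item
        N = ratings.shape[0]                 # number of items
        p_j = ratings.sum(axis=0) / (N * n)  # overall label proportions
        # Per-item agreement: fraction of annotator pairs that agree.
        P_i = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))
        P_bar = P_i.mean()                   # mean observed agreement
        P_e = (p_j ** 2).sum()               # expected agreement by chance
        return (P_bar - P_e) / (1 - P_e)

    # Toy example: 4 clips, 5 annotators each, counts per emotion label.
    counts = np.array([
        [5, 0, 0, 0, 0, 0],   # unanimous Anger
        [3, 2, 0, 0, 0, 0],   # Anger vs. Sadness split
        [0, 0, 0, 4, 1, 0],   # mostly Happiness
        [1, 1, 1, 1, 1, 0],   # maximal disagreement
    ])
    print(f"Fleiss' kappa = {fleiss_kappa(counts):.3f}")

With the toy counts above the sketch prints a kappa of about 0.29, i.e. modest agreement beyond chance; values near 1 would indicate near-unanimous labeling across annotators.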
ISBN: 3319995782; 9783319995786
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-319-99579-3_52