Automatic Musical Composition System Based on Emotion Recognition by Face Images
Published in | Journal of Japan Society for Fuzzy Theory and Intelligent Informatics Vol. 32; no. 6; pp. 975 - 986 |
Main Authors | MAEDA, Yoichiro; FUJITA, Hibiki; KAMEI, Katsuari; COOPER, Eric W. |
Format | Journal Article |
Language | English |
Published | Iizuka: Japan Society for Fuzzy Theory and Intelligent Informatics / Japan Science and Technology Agency, 15.12.2020 |
Subjects | automatic musical composition system; BGM; Emotion recognition; Emotions; face images; Kansei evaluation; Laughing; Music; Object recognition |
Abstract | The effect of music on human emotion has been studied for a long time. Research on the emotions evoked by music, such as the feelings and impressions experienced while listening, is an established research field. However, while many studies have examined how music gives rise to emotion, few have addressed generating music from an emotion. In this study, we therefore focus on facial expressions as a representation of emotion and aim to create music that matches the emotion recognized from a facial image: for example, a system that automatically generates bright, pleasant music from a laughing face image, or dark, sad music from a crying face image. Russell’s circumplex model is used for emotion recognition, and Hevner’s circular scale is used to generate music corresponding to the recognized emotion. With such a system it becomes possible, for example, to create suitable background music (BGM) for a movie scene from only the actor’s face image. The proposed system was constructed, and its effectiveness was confirmed through a Kansei evaluation experiment. |
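The abstract describes a two-stage pipeline: recognize an emotion from a face image as a point in Russell’s circumplex (valence-arousal) plane, then select music characteristics from Hevner’s circular scale. As a minimal, hypothetical sketch of the mapping step only (the cluster ordering, the mode rule, and the tempo formula are all assumptions for illustration, not the authors’ published method), a valence-arousal point can be binned into one of Hevner’s eight mood clusters by its angle in the plane:

```python
import math

# Assumed correspondence between 45-degree sectors of the valence-arousal
# plane and Hevner's eight mood clusters (illustrative, not the paper's).
HEVNER_CLUSTERS = [
    "happy/bright",       #   0- 45 deg: positive valence, mild arousal
    "exciting/elated",    #  45- 90 deg: high arousal, positive valence
    "vigorous/agitated",  #  90-135 deg: high arousal, negative valence
    "dignified/solemn",   # 135-180 deg: negative valence
    "sad/doleful",        # 180-225 deg: negative valence, low arousal
    "dreamy/sentimental", # 225-270 deg: low arousal
    "serene/tranquil",    # 270-315 deg: low arousal, positive valence
    "playful/humorous",   # 315-360 deg: positive valence
]

def hevner_cluster(valence: float, arousal: float) -> str:
    """Bin a point in [-1, 1]^2 into one of eight mood clusters by angle."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360.0
    return HEVNER_CLUSTERS[int(angle // 45.0)]

def music_parameters(valence: float, arousal: float) -> dict:
    """Toy mapping from emotion coordinates to mode and tempo (assumed rules)."""
    return {
        "cluster": hevner_cluster(valence, arousal),
        "mode": "major" if valence >= 0 else "minor",  # bright vs. dark tonality
        "tempo_bpm": int(80 + 60 * arousal),           # faster when more aroused
    }
```

For instance, a laughing face recognized as high valence with moderate arousal would fall in the happy/bright cluster and receive a major mode at a faster tempo, matching the record’s example of bright, pleasant music; a real system would drive many more composition parameters (key, rhythm, melody) from the selected cluster.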
Author | MAEDA, Yoichiro (College of Information Science and Engineering, Ritsumeikan University); FUJITA, Hibiki (Department of Sound Director and Visual Art Production, Institute of Sound Arts); KAMEI, Katsuari (College of Information Science and Engineering, Ritsumeikan University); COOPER, Eric W. (College of Information Science and Engineering, Ritsumeikan University) |
Copyright | 2020 Japan Society for Fuzzy Theory and Intelligent Informatics Copyright Japan Science and Technology Agency 2020 |
DOI | 10.3156/jsoft.32.6_975 |
Discipline | Engineering; Mathematics; Music |
EISSN | 1881-7203 |
EndPage | 986 |
ISSN | 1347-7986 |
Issue | 6 |
OpenAccessLink | https://www.jstage.jst.go.jp/article/jsoft/32/6/32_975/_article/-char/en |
PageCount | 12 |
PublicationDate | 2020/12/15 |
PublicationPlace | Iizuka |
PublicationTitleAlternate | J. SOFT |
Publisher | Japan Society for Fuzzy Theory and Intelligent Informatics Japan Science and Technology Agency |
References | [1] K. Hevner: “The affective character of the major and minor modes in music,” American J. of Psychology, Vol.47, No.1, pp. 103-118, 1935. [2] P. N. Juslin and J. A. Sloboda: Music and Emotion: Theory and Research, Oxford University Press, pp. 309-337, 2001. [3] H. J. Schlosberg: “The description of facial expressions in terms of two dimensions,” J. of Experimental Psychology, Vol.44, No.4, pp. 229-237, 1952. [4] A. Gabrielsson and P. N. Juslin: “Emotional expression in music performance: Between the performer’s intention and the listener’s experience,” Psychology of Music, Vol.24, No.1, pp. 68-91, 1996. [5] P. N. Juslin and R. Timmers: “Expression and communication of emotion in music performance,” in Handbook of Music and Emotion: Theory, Research, and Applications, P. N. Juslin ed., Oxford University Press, pp. 453-489, 2010. [6] 谷口高士: Music and Emotion, Kitaoji Shobo, 2003 (in Japanese). [7] 古根川円, 中島祥好, 上田和夫: “Collecting words that describe emotions expressible by music,” Materials of the 2009 Spring Meeting of the Japanese Society for Music Perception and Cognition, Jun. 6-7, Chofu, pp. 1-6, 2009 (in Japanese). [8] K. Zhao, S. Li, J. Cai, H. Wang, and J. Wang: “An Emotional Symbolic Music Generation System based on LSTM Networks,” 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conf. (ITNEC), Mar. 15-17, Chengdu, pp. 2039-2043, 2019. [9] H. Zhu, S. Wang, and Z. Wang: “Emotional Music Generation Using Interactive Genetic Algorithm,” 2008 Int. Conf. on Computer Science and Software Engineering, pp. 345-348, 2008. [10] M. Scirea, J. Togelius, P. Eklund, and S. Risi: “Affective evolutionary music composition with MetaCompose,” Genetic Programming and Evolvable Machines, Vol.18, pp. 433-465, 2017. [11] 清水柚里奈, 菅野沙也, 伊藤貴之, 嵯峨山茂樹: “Automatic generation of video BGM by video analysis and impression estimation,” Proc. of the 7th Forum on Data Engineering and Information Management (DEIM), Mar. 2-4, Koriyama, F2-3, 2015 (in Japanese). [12] R. Valenti, A. Jaimes, and N. Sebe: “Sonify your face: facial expressions for sound generation,” Proc. of the 18th ACM Int. Conf. on Multimedia, Oct. 25-29, Firenze, pp. 1363-1372, 2010. [13] 濱治世, 鈴木直人, 濱保久: An Invitation to the Psychology of Emotion: Approaches to Feelings and Affect, Saiensu-sha, 2001 (in Japanese). [14] C. E. Seashore: Psychology of Music, Reprinted version, Dover Publications, 1967. [15] J. A. Russell: “A circumplex model of affect,” J. of Personality and Social Psychology, Vol.39, No.6, pp. 1161-1178, 1980. [16] C. Witvliet and S. Vrana: “Psychophysiological responses as indices of affective dimensions,” Psychophysiology, Vol.32, No.5, pp. 436-443, 1995. [17] 浅野雅子, 古根川円, 中島祥好: “Trends in music psychology: focusing on music perception, music and emotion, and music therapy,” 芸術工学研究, Vol.12, pp. 83-95, 2010 (in Japanese). [18] C. D. Schubart: Ideen zu einer Aesthetik der Tonkunst, Wentworth Press, 1806. [19] OMRON: Human Vision Components HVC series (HVC-P2) B5T-007001 product page, https://plus-sensing.omron.co.jp/product/hvc-p2.html [accessed Nov. 12, 2020] (in Japanese). [20] 前田陽一郎, 丹羽俊明, 山本昌幸: “Interactive chaotic sound generation system using globally coupled maps and the introduction of musical elements,” J. of Japan Society for Fuzzy Theory and Intelligent Informatics, Vol.18, No.4, pp. 507-518, 2006 (in Japanese). [21] Free stock material site “Pakutaso”: https://www.pakutaso.com/ [accessed Nov. 12, 2020] [22] Free stock photo site “Photo AC”: https://www.photo-ac.com/ [accessed Nov. 12, 2020] [23] Public Domain Q: copyright-free image collection, https://publicdomainq.net/ [accessed Nov. 12, 2020] [24] 北川祐 (ed.): Popular Music Theory, Rittor Music, 2004 (in Japanese). |
StartPage | 975 |
SubjectTerms | automatic musical composition system; BGM; Emotion recognition; Emotions; face images; Kansei evaluation; Laughing; Music; Object recognition |
URI | https://www.jstage.jst.go.jp/article/jsoft/32/6/32_975/_article/-char/en https://www.proquest.com/docview/2474565852/abstract/ |
Volume | 32 |