Estimation of the Quality of Experience During Video Streaming From Facial Expression and Gaze Direction
Published in | IEEE Transactions on Network and Service Management, Vol. 17, no. 4, pp. 2702-2716 |
---|---|
Main Authors | Porcu, Simone; Floris, Alessandro; Voigt-Antons, Jan-Niklas; Atzori, Luigi; Möller, Sebastian |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2020 |
ISSN | 1932-4537 |
DOI | 10.1109/TNSM.2020.3018303 |
Abstract | This article investigates the possibility to estimate the perceived Quality of Experience (QoE) automatically and unobtrusively by analyzing the face of the consumer of video streaming services, from which facial expression and gaze direction are extracted. If effective, this would be a valuable tool for the monitoring of personal QoE during video streaming services without asking the user to provide feedback, with great advantages for service management. Additionally, this would eliminate the bias of subjective tests and would avoid bothering the viewers with questions to collect opinions and feedback. The performed analysis relies on two different experiments: i) a crowdsourcing test, where the videos are subject to impairments caused by long initial delays and re-buffering events; ii) a laboratory test, where the videos are affected by blurring effects. The facial Action Units (AU) that represent the contractions of specific facial muscles together with the position of the eyes' pupils are extracted to identify the correlation between perceived quality and facial expressions. An SVM with a quadratic kernel and a k-NN classifier have been tested to predict the QoE from these features. These have also been combined with measured application-level parameters to improve the quality prediction. From the performed experiments, it results that the best performance is obtained with the k-NN classifier by combining all the described features and after training it with both the datasets, with a prediction accuracy as high as 93.9% outperforming the state of the art achievements. |
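The abstract describes a feature-plus-classifier pipeline: facial Action Unit (AU) intensities and gaze direction are extracted per viewing session, then fed to an SVM with a quadratic kernel and to a k-NN classifier to predict perceived QoE. A minimal illustrative sketch of that setup (not the authors' code) is below; the feature dimensions, the synthetic data, and the 3-level QoE labels are assumptions made only for the example.

```python
# Illustrative sketch, NOT the paper's implementation: predict a QoE level
# from a feature vector of facial Action Unit intensities plus gaze angles,
# using the two classifier types named in the abstract (quadratic-kernel SVM
# and k-NN). All data here is synthetic and the 19-feature layout is assumed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 17 AU intensities + 2 gaze angles per sample,
# with a hypothetical 3-level QoE label (0 = low, 1 = medium, 2 = high).
n = 300
X = rng.normal(size=(n, 19))
y = (X[:, 0] + 0.5 * X[:, 18] > 0).astype(int) + (X[:, 1] > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="poly", degree=2)          # quadratic kernel, as in the abstract
knn = KNeighborsClassifier(n_neighbors=5)   # k-NN counterpart

for name, clf in [("SVM", svm), ("k-NN", knn)]:
    clf.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(clf.score(X_te, y_te), 3))
```

In the paper these facial features are further combined with measured application-level parameters (initial delay, re-buffering), which is what yields the reported best accuracy of 93.9% with the k-NN classifier.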
Author |
– Porcu, Simone (ORCID 0000-0003-0792-1200), DIEE, University of Cagliari, UdR CNIT of Cagliari, Cagliari, Italy, simone.porcu@unica.it
– Floris, Alessandro (ORCID 0000-0002-8745-1327), DIEE, University of Cagliari, UdR CNIT of Cagliari, Cagliari, Italy, alessandro.floris84@unica.it
– Voigt-Antons, Jan-Niklas (ORCID 0000-0002-2786-9262), Quality and Usability Lab, Technische Universität Berlin, Berlin, Germany, jan-niklas.voigt-antons@tu-berlin.de
– Atzori, Luigi (ORCID 0000-0003-1350-3574), DIEE, University of Cagliari, UdR CNIT of Cagliari, Cagliari, Italy, l.atzori@ieee.org
– Möller, Sebastian (ORCID 0000-0003-3057-0760), Quality and Usability Lab, Technische Universität Berlin, Berlin, Germany, sebastian.moeller@tu-berlin.de |
CODEN | ITNSC4 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
Discipline | Engineering |
EISSN | 1932-4537 |
EndPage | 2716 |
Genre | orig-research |
GrantInformation | Italian Ministry of University and Research (MIUR), Smart Cities framework, Project Netergit (grant ID: PON04a200490) |
ISSN | 1932-4537 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
OpenAccessLink | https://ieeexplore.ieee.org/document/9171870 |
PageCount | 15 |
PublicationDate | 2020-12-01 |
PublicationPlace | New York |
PublicationTitle | IEEE Transactions on Network and Service Management |
PublicationTitleAbbrev | T-NSM |
PublicationYear | 2020 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 2702 |
SubjectTerms | Blurring; Brain modeling; Classifiers; Electroencephalography; Face; Face recognition; facial expressions; Feature extraction; Feedback; gaze direction; Laboratory tests; machine learning; Muscles; QoE estimation; Quality of experience; Streaming media; Two dimensional displays; User experience; video key quality indicators; Video streaming; Video transmission |
Title | Estimation of the Quality of Experience During Video Streaming From Facial Expression and Gaze Direction |
URI | https://ieeexplore.ieee.org/document/9171870 https://www.proquest.com/docview/2468772271 |
Volume | 17 |