RhythmNet: End-to-End Heart Rate Estimation From Face via Spatial-Temporal Representation
Published in | IEEE Transactions on Image Processing, Vol. 29, pp. 2409-2423 |
---|---|
Main Authors | Niu, Xuesong; Shan, Shiguang; Han, Hu; Chen, Xilin |
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2020 |
Subjects | Remote heart rate estimation; rPPG; spatial-temporal representation; end-to-end learning |
Abstract | Heart rate (HR) is an important physiological signal that reflects the physical and emotional status of a person. Traditional HR measurements usually rely on contact monitors, which may cause inconvenience and discomfort. Recently, some methods have been proposed for remote HR estimation from face videos; however, most of them focus on well-controlled scenarios, and their generalization ability to less-constrained scenarios (e.g., with head movement and bad illumination) is unknown. At the same time, the lack of large-scale HR databases has limited the use of deep models for remote HR estimation. In this paper, we propose an end-to-end RhythmNet for remote HR estimation from the face. In RhythmNet, we use a spatial-temporal representation encoding the HR signals from multiple ROI volumes as its input. The spatial-temporal representations are then fed into a convolutional network for HR estimation. We also take into account the relationship of adjacent HR measurements from a video sequence via a Gated Recurrent Unit (GRU) to achieve efficient HR measurement. In addition, we build a large-scale multi-modal HR database (named VIPL-HR¹), which contains 2,378 visible light (VIS) videos and 752 near-infrared (NIR) videos of 107 subjects. Our VIPL-HR database contains variations such as head movements, illumination variations, and acquisition device changes, replicating less-constrained scenarios for HR estimation. The proposed approach outperforms the state-of-the-art methods on both the public-domain and our VIPL-HR databases. ¹ VIPL-HR is available at: http://vipl.ict.ac.cn/view_database.php?id=15. |
---|---|
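The abstract outlines a three-stage pipeline: encode each face clip as a spatial-temporal map built from ROI volumes, regress HR from that map with a convolutional network, and relate adjacent per-clip estimates with a GRU. The sketch below illustrates that idea only; it is not the authors' released code, and the 5x5 ROI grid, min-max scaling, ResNet-18 backbone, and all tensor shapes and names are illustrative assumptions.

```python
# Minimal sketch of a RhythmNet-style pipeline (assumed shapes/names).
import torch
import torch.nn as nn
import torchvision.models as models


def spatial_temporal_map(frames: torch.Tensor, grid: int = 5) -> torch.Tensor:
    """frames: (T, 3, H, W) aligned face crops -> (3, grid*grid, T) map.

    Each row of the map is the per-frame mean color of one ROI cell,
    min-max scaled over time so pulse-induced color changes dominate.
    """
    t, c, h, w = frames.shape
    gh, gw = h // grid, w // grid
    rows = []
    for i in range(grid):
        for j in range(grid):
            roi = frames[:, :, i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            rows.append(roi.mean(dim=(2, 3)))      # (T, 3) mean color per frame
    m = torch.stack(rows, dim=1)                   # (T, cells, 3)
    lo, hi = m.amin(dim=0, keepdim=True), m.amax(dim=0, keepdim=True)
    m = (m - lo) / (hi - lo + 1e-8)                # scale each trace to [0, 1]
    return m.permute(2, 1, 0)                      # (3, cells, T)


class RhythmNetSketch(nn.Module):
    """CNN over per-clip maps, GRU across adjacent clips, linear HR head."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                # keep the 512-d pooled feature
        self.cnn = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # per-clip HR regression

    def forward(self, maps: torch.Tensor) -> torch.Tensor:
        # maps: (batch, clips, 3, cells, T), one spatial-temporal map per clip
        b, t = maps.shape[:2]
        feats = self.cnn(maps.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)                   # adjacent clips inform each other
        return self.head(out).squeeze(-1)          # (batch, clips) HR estimates


if __name__ == "__main__":
    clip = torch.rand(300, 3, 125, 125)            # ~10 s of face crops at 30 fps
    st_map = spatial_temporal_map(clip)            # (3, 25, 300)
    video = st_map.expand(1, 6, -1, -1, -1)        # stand-in for 6 adjacent clips
    print(RhythmNetSketch()(video).shape)          # torch.Size([1, 6])
```

In the paper, the GRU stage exploits the smoothness of HR across neighboring clips of one video; here it is reduced to a single-layer GRU followed by a per-clip linear regression head.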
Author | Niu, Xuesong; Shan, Shiguang; Han, Hu; Chen, Xilin |
Author_xml |
– sequence: 1 givenname: Xuesong orcidid: 0000-0001-7737-4287 surname: Niu fullname: Niu, Xuesong email: xuesong.niu@vipl.ict.ac.cn organization: Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
– sequence: 2 givenname: Shiguang orcidid: 0000-0002-8348-392X surname: Shan fullname: Shan, Shiguang email: sgshan@ict.ac.cn organization: School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, China
– sequence: 3 givenname: Hu orcidid: 0000-0001-6010-1792 surname: Han fullname: Han, Hu email: hanhu@ict.ac.cn organization: Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (CAS), Beijing, China
– sequence: 4 givenname: Xilin orcidid: 0000-0003-3024-4404 surname: Chen fullname: Chen, Xilin email: xlchen@ict.ac.cn organization: Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/31647433 (View this record in MEDLINE/PubMed) |
CODEN | IIPRE4 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
DOI | 10.1109/TIP.2019.2947204 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005-present IEEE All-Society Periodicals Package (ASPP) 1998-Present IEEE Electronic Library (IEL) CrossRef PubMed Computer and Information Systems Abstracts Electronics & Communications Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional MEDLINE - Academic |
DatabaseTitle | CrossRef PubMed Technology Research Database Computer and Information Systems Abstracts – Academic Electronics & Communications Abstracts ProQuest Computer Science Collection Computer and Information Systems Abstracts Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Professional MEDLINE - Academic |
DatabaseTitleList | Technology Research Database PubMed MEDLINE - Academic |
Database_xml | – sequence: 1 dbid: NPM name: PubMed url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed sourceTypes: Index Database – sequence: 2 dbid: RIE name: IEEE Electronic Library (IEL) url: https://ieeexplore.ieee.org/ sourceTypes: Publisher |
Discipline | Applied Sciences Engineering |
EISSN | 1941-0042 |
EndPage | 2423 |
ExternalDocumentID | 31647433 10_1109_TIP_2019_2947204 8879658 |
Genre | orig-research Journal Article |
GrantInformation_xml |
– fundername: National Natural Science Foundation of China grantid: 61672496; 61702486 funderid: 10.13039/501100001809
– fundername: Chinese Academy of Sciences grantid: GJHZ1843 funderid: 10.13039/501100002367
– fundername: National Key R&D Program of China grantid: 2017YFA0700800 |
ISSN | 1057-7149 1941-0042 |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-8348-392X 0000-0001-6010-1792 0000-0003-3024-4404 0000-0001-7737-4287 |
PMID | 31647433 |
PQID | 2338634791 |
PQPubID | 85429 |
PageCount | 15 |
ParticipantIDs | ieee_primary_8879658 crossref_primary_10_1109_TIP_2019_2947204 crossref_citationtrail_10_1109_TIP_2019_2947204 proquest_miscellaneous_2308521486 pubmed_primary_31647433 proquest_journals_2338634791 |
PublicationCentury | 2000 |
PublicationDate | 2020-01-01 |
PublicationDateYYYYMMDD | 2020-01-01 |
PublicationDate_xml | – month: 01 year: 2020 text: 2020-01-01 day: 01 |
PublicationDecade | 2020 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States – name: New York |
PublicationTitle | IEEE transactions on image processing |
PublicationTitleAbbrev | TIP |
PublicationTitleAlternate | IEEE Trans Image Process |
PublicationYear | 2020 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SourceID | proquest pubmed crossref ieee |
SourceType | Aggregation Database Index Database Enrichment Source Publisher |
StartPage | 2409 |
SubjectTerms | end-to-end learning; Estimation; Head; Head movement; Heart rate; Illumination; Image color analysis; Remote heart rate estimation; Replicating; Representations; rPPG; Skin; spatial-temporal representation; Webcams |
Title | RhythmNet: End-to-End Heart Rate Estimation From Face via Spatial-Temporal Representation |
URI | https://ieeexplore.ieee.org/document/8879658 https://www.ncbi.nlm.nih.gov/pubmed/31647433 https://www.proquest.com/docview/2338634791 https://www.proquest.com/docview/2308521486 |
Volume | 29 |