TSRN: two-stage refinement network for temporal action segmentation

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), Vol. 26, No. 3, pp. 1375-1393 (19 pages)
Main Authors: Tian, Xiaoyan; Jin, Ye (jinye@hit.edu.cn); Tang, Xianglong (all: Harbin Institute of Technology)
Format: Journal Article
Language: English
Published: London: Springer London, 01.08.2023 (Springer Nature B.V.)
DOI: 10.1007/s10044-023-01166-8
ISSN: 1433-7541; EISSN: 1433-755X

Abstract: In high-level video semantic understanding, continuous action segmentation is a challenging task that aims to segment an untrimmed video and label each segment with predefined labels over time. However, the accuracy of segment predictions is limited by confusing information in video sequences, such as ambiguous frames around action boundaries or over-segmentation errors caused by the lack of semantic relations. In this work, we present a two-stage refinement network (TSRN) to improve temporal action segmentation. We first capture global relations over an entire video sequence using a multi-head self-attention mechanism in the novel transformer temporal convolutional network and model temporal relations within each action segment. Then, we introduce a dual-attention spatial pyramid pooling network that fuses features from macroscale and microscale perspectives, refining the initial prediction into more accurate classification results. In addition, a joint loss function mitigates over-segmentation. Compared with state-of-the-art methods, the proposed TSRN substantially improves temporal action segmentation on three challenging datasets (50Salads, Georgia Tech Egocentric Activities, and Breakfast).
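
To make the first stage concrete, here is a minimal sketch of the mechanism the abstract describes: multi-head self-attention capturing global relations over the whole frame sequence, followed by dilated temporal convolutions for local temporal modelling and a per-frame classifier. All layer sizes, the residual wiring, and the class count are illustrative assumptions, not the authors' released implementation.

    import torch
    import torch.nn as nn

    class AttentionTCNStage(nn.Module):
        """Illustrative first-stage block: global multi-head self-attention
        over the frame sequence, then dilated temporal convolutions.
        Hypothetical layer sizes; not TSRN's published code."""

        def __init__(self, feat_dim=64, num_classes=19, num_heads=4, num_layers=5):
            super().__init__()
            self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
            self.convs = nn.ModuleList([
                nn.Conv1d(feat_dim, feat_dim, kernel_size=3,
                          padding=2 ** i, dilation=2 ** i)
                for i in range(num_layers)
            ])
            self.classifier = nn.Conv1d(feat_dim, num_classes, kernel_size=1)

        def forward(self, x):  # x: (batch, frames, feat_dim) frame features
            # Global relations over the entire video sequence.
            attn_out, _ = self.attn(x, x, x)
            h = (x + attn_out).transpose(1, 2)  # -> (batch, feat_dim, frames)
            # Local temporal modelling with an exponentially growing receptive field.
            for conv in self.convs:
                h = h + torch.relu(conv(h))
            return self.classifier(h)  # per-frame logits: (batch, num_classes, frames)

    logits = AttentionTCNStage()(torch.randn(2, 100, 64))
    print(logits.shape)  # torch.Size([2, 19, 100])

The dilated convolutions keep the sequence length fixed (padding equals dilation for a kernel of 3), so every layer stays frame-aligned with the labels.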
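
The record does not spell out the joint loss. Objectives of this kind commonly pair a per-frame cross-entropy with a truncated temporal smoothing term (as popularised by MS-TCN) to suppress over-segmentation; the sketch below follows that generic pattern, with the weight lam and threshold tau as assumed values.

    import torch
    import torch.nn.functional as F

    def joint_loss(logits, labels, lam=0.15, tau=4.0):
        # logits: (batch, num_classes, frames); labels: (batch, frames), long dtype.
        # Per-frame classification term.
        ce = F.cross_entropy(logits, labels)
        # Smoothing term: penalise frame-to-frame jumps in log-probabilities,
        # truncated at tau so genuine action boundaries are not over-penalised.
        logp = F.log_softmax(logits, dim=1)
        delta = (logp[:, :, 1:] - logp[:, :, :-1].detach()).clamp(-tau, tau)
        return ce + lam * (delta ** 2).mean()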
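For the second-stage fusion of macroscale and microscale perspectives, the record again gives no implementation details, so the sketch below shows only the generic temporal pyramid-pooling pattern: pool the initial prediction at several scales, upsample each summary back to full length, and fuse. Pool scales and the fusion layer are assumptions, and the dual-attention weighting named in the abstract is omitted for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalPyramidFusion(nn.Module):
        """Sketch of multi-scale fusion over an initial prediction; not the
        authors' dual-attention spatial pyramid pooling network."""

        def __init__(self, channels, scales=(1, 2, 4, 8)):
            super().__init__()
            self.scales = scales
            self.fuse = nn.Conv1d(channels * (len(scales) + 1), channels, kernel_size=1)

        def forward(self, x):  # x: (batch, channels, frames)
            frames = x.size(-1)
            feats = [x]  # microscale view: the untouched per-frame features
            for s in self.scales:
                pooled = F.adaptive_avg_pool1d(x, s)  # macroscale summary at scale s
                feats.append(F.interpolate(pooled, size=frames,
                                           mode="linear", align_corners=False))
            return self.fuse(torch.cat(feats, dim=1))
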
Copyright: The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Discipline: Applied Sciences; Computer Science
Funding: National Natural Science Foundation of China (grant 51935005); Natural Science Foundation of Heilongjiang Province of China (LH2021F023); Science & Technology Planned Project of Heilongjiang Province of China (GA21C031); Basic Research Key Project (JCKY20200603C010)
Peer Reviewed: Yes; Scholarly: Yes
Keywords: Temporal action segmentation; Refinement network; Over-segmentation; Video semantic understanding; Self-attention
Subject Terms: Computer Science; Labels; Pattern Recognition; Segmentation; Segments; Semantics; Theoretical Advances
Full Text Links:
https://link.springer.com/article/10.1007/s10044-023-01166-8
https://www.proquest.com/docview/2840787218