VSS-Net: Visual Semantic Self-Mining Network for Video Summarization

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 34, No. 4, pp. 2775-2788
Main Authors: Zhang, Yunzuo; Liu, Yameng; Kang, Weili; Tao, Ran
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.04.2024

Abstract: Video summarization, which aims to detect valuable segments in untrimmed videos, is a meaningful yet understudied topic. Previous methods primarily consider inter-frame and inter-shot temporal dependencies, which may be insufficient to pinpoint important content because of the limited amount of valuable information that can be learned. To address this limitation, we propose the Visual Semantic Self-mining Network (VSS-Net), a novel summarization framework motivated by the widespread success of cross-modality learning. VSS-Net first adopts a two-stream structure consisting of a Context Representation Graph (CRG) and a Video Semantics Encoder (VSE), which are jointly exploited to lay the groundwork for stronger content awareness. Specifically, the CRG is constructed with an edge-set strategy tailored to the hierarchical structure of videos, enriching visual features with local and non-local temporal cues from both temporal-order and visual-relationship perspectives. Meanwhile, by learning visual similarity across features, the VSE adaptively acquires an instructive video-level semantic representation of the input video in a coarse-to-fine manner. The two streams then converge in a Context-Semantics Interaction Layer (CSIL), which enables sophisticated information exchange between frame-level temporal cues and the video-level semantic representation, guaranteeing informative representations and boosting sensitivity to important segments. Finally, a prediction head predicts importance scores, followed by key shot selection. We evaluate the proposed framework on widely used benchmarks and demonstrate its effectiveness and superiority over state-of-the-art methods.
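For readers who want a concrete picture of the data flow the abstract describes (two parallel streams whose outputs meet in an interaction layer before a prediction head), the following is a minimal PyTorch sketch. It is an illustration only, not the authors' implementation: every module body, class name, and dimension below is an assumption, and the paper's actual CRG, VSE, and CSIL are considerably more sophisticated.

```python
# Minimal sketch of the VSS-Net data flow described in the abstract.
# All module internals and sizes are illustrative assumptions: the real CRG
# propagates messages over an edge set tied to the video's shot hierarchy,
# and the real VSE uses a coarse-to-fine similarity scheme, neither of
# which is reproduced here.
import torch
import torch.nn as nn

class ContextRepresentationGraph(nn.Module):
    """Stand-in for CRG: enriches frame features with temporal context.
    A 1-D convolution serves as a placeholder for graph-based local and
    non-local temporal aggregation."""
    def __init__(self, dim):
        super().__init__()
        self.temporal = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):            # x: (batch, frames, dim)
        h = self.temporal(x.transpose(1, 2)).transpose(1, 2)
        return x + h                 # residual frame-level temporal cues

class VideoSemanticsEncoder(nn.Module):
    """Stand-in for VSE: pools frames into one video-level semantic vector,
    weighting frames by similarity to a coarse mean representation."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):            # x: (batch, frames, dim)
        anchor = x.mean(dim=1, keepdim=True)               # coarse summary
        sim = torch.softmax((x * anchor).sum(-1), dim=-1)  # frame weights
        return self.proj((sim.unsqueeze(-1) * x).sum(dim=1))  # (batch, dim)

class ContextSemanticsInteraction(nn.Module):
    """Stand-in for CSIL: frame-level cues attend to the video-level
    semantic vector via cross-attention."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, frames, video_vec):
        v = video_vec.unsqueeze(1)                         # (batch, 1, dim)
        out, _ = self.attn(frames, v, v)
        return frames + out

class VSSNetSketch(nn.Module):
    def __init__(self, dim=1024):
        super().__init__()
        self.crg = ContextRepresentationGraph(dim)
        self.vse = VideoSemanticsEncoder(dim)
        self.csil = ContextSemanticsInteraction(dim)
        self.head = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, features):     # features: (batch, frames, dim)
        ctx = self.crg(features)     # stream 1: frame-level temporal cues
        sem = self.vse(features)     # stream 2: video-level semantics
        fused = self.csil(ctx, sem)  # streams converge in CSIL
        return self.head(fused).squeeze(-1)  # per-frame importance scores

scores = VSSNetSketch()(torch.randn(1, 120, 1024))  # e.g. 120 frames
```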
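The abstract ends with importance-score prediction "followed by key shot selection." A common procedure in this literature, though not necessarily the paper's exact one, is to average frame scores within each shot and then choose shots under a summary-length budget with 0/1 knapsack dynamic programming. The helper below is a hypothetical, self-contained sketch of that step; its name and the 15% default budget are assumptions.

```python
# Key-shot selection sketch: frame scores are averaged per shot, then shots
# are chosen under a frame budget (commonly ~15% of the video) via 0/1
# knapsack dynamic programming. Standard practice in video summarization
# benchmarks; the paper's exact procedure may differ.
def select_key_shots(frame_scores, shot_bounds, budget_ratio=0.15):
    # Score each shot by its mean frame importance.
    shots = [(sum(frame_scores[s:e]) / (e - s), e - s, i)
             for i, (s, e) in enumerate(shot_bounds)]
    budget = int(budget_ratio * len(frame_scores))
    # dp[c] = (best total score, chosen shot indices) using <= c frames.
    dp = [(0.0, [])] * (budget + 1)
    for score, length, idx in shots:
        for c in range(budget, length - 1, -1):
            cand = (dp[c - length][0] + score, dp[c - length][1] + [idx])
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return sorted(dp[budget][1])

# Example: a 10-frame video split into three shots; picks shots 0 and 1.
print(select_key_shots([0.9, 0.8, 0.1, 0.2, 0.3, 0.7, 0.9, 0.4, 0.2, 0.1],
                       [(0, 2), (2, 5), (5, 10)], budget_ratio=0.5))
```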
Authors:
1. Zhang, Yunzuo (ORCID: 0000-0001-7499-4835; zhangyunzuo888@sina.com), School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang, China
2. Liu, Yameng (ORCID: 0000-0002-5991-3889; liuym4647@sina.com), School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang, China
3. Kang, Weili (ORCID: 0000-0002-0062-7351; wayleek@sina.com), School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang, China
4. Tao, Ran (ORCID: 0000-0002-5243-7189; rantao@bit.edu.cn), School of Information and Electronics, Beijing Institute of Technology, Beijing, China
CODEN: ITCTEM
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024
DOI: 10.1109/TCSVT.2023.3312325
Discipline: Engineering
EISSN: 1558-2205
Genre: Original research
Funding:
- Science and Technology Project of Hebei Education Department (Grants ZD2022100 and QN2017132)
- Central Guidance on Local Science and Technology Development Fund (Grant 226Z0501G)
- Natural Science Foundation of Hebei Province (Grants F2022210007 and F2017210161)
- National Natural Science Foundation of China (Grants 61702347 and 62027801)
ISSN: 1051-8215
Peer Reviewed: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037
Subject Terms: Cognitive tasks; Computational modeling; Context; Context modeling; Feature extraction; Graphical representations; information exchange; Learning; Predictions; Segments; self-mining; semantic representation; Semantics; Streaming media; Target detection; Task analysis; temporal cues; Video data; Video summarization; Visualization
Online Access: https://ieeexplore.ieee.org/document/10239534; https://www.proquest.com/docview/3033619086