Beyond Scalar Neuron: Adopting Vector-Neuron Capsules for Long-Term Person Re-Identification

Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 30, No. 10, pp. 3459–3471
Main Authors Huang, Yan; Xu, Jingsong; Wu, Qiang; Zhong, Yi; Zhang, Peng; Zhang, Zhaoxiang
Format Journal Article
Language English
Published New York IEEE 01.10.2020
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Online Access Get full text

Abstract Current person re-identification (re-ID) works mainly focus on the short-term scenario, where a person is less likely to change clothes. However, in the long-term re-ID scenario, a person has a great chance to change clothes, and a sophisticated re-ID system should take such changes into account. To facilitate the study of long-term re-ID, this paper introduces a large-scale re-ID dataset called "Celeb-reID" to the community. Unlike previous datasets, the same person can change clothes in the proposed Celeb-reID dataset. Images of Celeb-reID are acquired from the Internet using street snapshots of celebrities. In total, there are 1,052 IDs with 34,186 images, making Celeb-reID the largest long-term re-ID dataset so far. To tackle the challenge of cloth changes, we propose to use vector-neuron (VN) capsules instead of traditional scalar neurons (SN) to design our network. Compared with SN, the extra dimension of information in VN can perceive cloth changes of the same person. We introduce a well-designed ReIDCaps network that integrates capsules to handle the person re-ID task. Soft Embedding Attention (SEA) and Feature Sparse Representation (FSR) mechanisms are adopted in our network to boost performance. Experiments are conducted on the proposed long-term re-ID dataset and two common short-term re-ID datasets. Comprehensive analyses are given to demonstrate the challenge exposed by our dataset. Experimental results show that our ReIDCaps can outperform existing state-of-the-art methods by a large margin in the long-term scenario. The new dataset and code will be released to facilitate future research.
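The abstract describes vector-neuron capsules only at a high level, and the authors' ReIDCaps code is not reproduced here. As a rough, hedged sketch of the generic capsule machinery the method builds on (the squash nonlinearity and routing-by-agreement of Sabour et al., 2017, which the paper adapts for identity capsules), the NumPy snippet below illustrates how a capsule's vector orientation and length play different roles; the capsule counts and dimensions (32 lower capsules, 10 identities, 16-dimensional vectors) are placeholder assumptions, not values from the paper.

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash nonlinearity: keeps a capsule's orientation but maps its
    # length into [0, 1) so the length can act as a confidence score.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * (s / np.sqrt(sq_norm + eps))

def dynamic_routing(u_hat, num_iters=3):
    # Routing-by-agreement between lower capsules and upper (identity) capsules.
    # u_hat: (num_lower, num_upper, dim_upper) prediction vectors.
    # Returns v: (num_upper, dim_upper) output capsule vectors.
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))                      # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over upper capsules
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum of predictions
        v = squash(s)
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
    return v

# Toy usage with hypothetical sizes: 32 lower capsules vote for 10 identity
# capsules of dimension 16; the length of each output capsule is read as an
# identity confidence, while its orientation is free to encode appearance
# variation such as clothing.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(32, 10, 16))
v = dynamic_routing(u_hat)
print(np.linalg.norm(v, axis=-1).round(3))

This is only meant to make the scalar-neuron versus vector-neuron contrast concrete: a scalar neuron outputs a single activation, whereas each capsule outputs a whole vector whose length signals the presence of an identity, which is the property the abstract attributes to the extra dimension of information in VN capsules.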
Author Huang, Yan
Xu, Jingsong
Zhang, Peng
Zhang, Zhaoxiang
Wu, Qiang
Zhong, Yi
Author_xml – sequence: 1
  givenname: Yan
  orcidid: 0000-0002-1363-5318
  surname: Huang
  fullname: Huang, Yan
  organization: Global Big Data Technologies Centre (GBDTC), School of Electrical and Data Engineering, University of Technology Sydney, Ultimo, NSW, Australia
– sequence: 2
  givenname: Jingsong
  orcidid: 0000-0002-9102-3616
  surname: Xu
  fullname: Xu, Jingsong
  organization: Global Big Data Technologies Centre (GBDTC), School of Electrical and Data Engineering, University of Technology Sydney, Ultimo, NSW, Australia
– sequence: 3
  givenname: Qiang
  orcidid: 0000-0001-5641-2483
  surname: Wu
  fullname: Wu, Qiang
  organization: Global Big Data Technologies Centre (GBDTC), School of Electrical and Data Engineering, University of Technology Sydney, Ultimo, NSW, Australia
– sequence: 4
  givenname: Yi
  surname: Zhong
  fullname: Zhong, Yi
  email: yi.zhong@bit.edu.cn
  organization: School of Information and Electronics, Beijing Institute of Technology, Beijing, China
– sequence: 5
  givenname: Peng
  orcidid: 0000-0001-6794-7352
  surname: Zhang
  fullname: Zhang, Peng
  organization: Global Big Data Technologies Centre (GBDTC), School of Electrical and Data Engineering, University of Technology Sydney, Ultimo, NSW, Australia
– sequence: 6
  givenname: Zhaoxiang
  orcidid: 0000-0003-2648-3875
  surname: Zhang
  fullname: Zhang, Zhaoxiang
  organization: Research Center for Brain-Inspired Intelligence, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China
CODEN ITCTEM
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DOI 10.1109/TCSVT.2019.2948093
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Discipline Engineering
EISSN 1558-2205
EndPage 3471
ExternalDocumentID 10_1109_TCSVT_2019_2948093
8873614
Genre orig-research
GrantInformation_xml – fundername: Australian Government Research Training Program Scholarship
  funderid: 10.13039/100015539
– fundername: Beijing Institute of Technology Research Fund Program for Young Scholars
  funderid: 10.13039/501100012236
ISSN 1051-8215
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-9102-3616
0000-0001-6794-7352
0000-0003-2648-3875
0000-0001-5641-2483
0000-0002-1363-5318
PQID 2448453320
PQPubID 85433
PageCount 13
PublicationCentury 2000
PublicationDate 2020-10-01
PublicationDateYYYYMMDD 2020-10-01
PublicationDate_xml – month: 10
  year: 2020
  text: 2020-10-01
  day: 01
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2020
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 3459
SubjectTerms Cameras
Cloth
cloth change
Datasets
Face
Image acquisition
Internet
Lighting
long-term scenario
Neurons
Person re-identification
Security
Surveillance
vector-neuron capsules
Title Beyond Scalar Neuron: Adopting Vector-Neuron Capsules for Long-Term Person Re-Identification
URI https://ieeexplore.ieee.org/document/8873614
https://www.proquest.com/docview/2448453320
Volume 30