BinVPR: Binary Neural Networks towards Real-Valued for Visual Place Recognition

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 24, no. 13, p. 4130
Main Authors: Wang, Junshuai; Han, Junyu; Dong, Ruifang; Kan, Jiangming
Format: Journal Article
Language: English
Published: Switzerland: MDPI AG, 25.06.2024
Abstract Visual Place Recognition (VPR) aims to determine whether a robot or visual navigation system is located in a previously visited place using visual information. It is an essential technology, and a challenging problem, in the computer vision and robotics communities. Recently, numerous works have demonstrated that Convolutional Neural Network (CNN)-based VPR outperforms traditional methods. However, these CNN models carry huge numbers of parameters, so large memory storage is necessary for them, which is a great challenge for mobile robot platforms equipped with limited resources. Fortunately, Binary Neural Networks (BNNs) can reduce memory consumption by converting weights and activation values from 32 bits to 1 bit. However, current BNNs often suffer from vanishing gradients and a marked drop in accuracy. This work therefore proposes a BinVPR model to handle these issues. The solution is twofold. Firstly, a feature restoration strategy adds features into the latter convolutional layers to further mitigate the gradient-vanishing problem during training; we identified two principles for it: restore basic features, and restore them from higher to lower layers. Secondly, because the marked drop in accuracy results from gradient mismatch during backpropagation, this work optimized the combination of binarized activation and binarized weight functions in the Larq framework, and the best combination was obtained. The performance of BinVPR was validated on public datasets. The experimental results show that it outperforms state-of-the-art BNN-based approaches and the full-precision AlexNet and ResNet networks in terms of both recognition accuracy and model size. Notably, BinVPR achieves the same accuracy with only 1% and 4.6% of the model sizes of AlexNet and ResNet, respectively.
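The two mechanisms the abstract leans on, sign binarization and the gradient mismatch it causes during backpropagation, can be illustrated in a few lines. This is a minimal stdlib sketch under stated assumptions, not code from the paper or from the Larq library; the names `binarize` and `ste_grad` are illustrative. The forward pass replaces each 32-bit weight with ±1 (a 32x memory reduction once values are bit-packed); because the sign function has zero gradient almost everywhere, training commonly uses a straight-through estimator (STE) that passes the upstream gradient through wherever the input lies in [-1, 1] and zeroes it elsewhere:

```python
def binarize(weights):
    """Forward pass: map each real-valued weight to +1 or -1 (sign binarization).
    A 32-bit float becomes 1 bit of information, a 32x memory reduction."""
    return [1.0 if w >= 0 else -1.0 for w in weights]


def ste_grad(upstream, inputs, clip=1.0):
    """Backward pass: straight-through estimator. The true gradient of sign()
    is zero almost everywhere, so the upstream gradient is passed through
    unchanged where |x| <= clip and zeroed outside that range. The gap between
    this surrogate and the true gradient is the 'gradient mismatch'."""
    return [g if abs(x) <= clip else 0.0 for g, x in zip(upstream, inputs)]


weights = [0.7, -0.2, 0.0, -1.5]
print(binarize(weights))                          # [1.0, -1.0, 1.0, -1.0]
print(ste_grad([0.5, 0.5, 0.5, 0.5], weights))    # [0.5, 0.5, 0.5, 0.0]
```

Larq exposes this clipped pass-through as its `ste_sign` quantizer, applicable to both activations and weights; the combination search described in the abstract amounts to choosing which quantizer pair of this kind to use for inputs and kernels.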
Copyright: 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
DOI 10.3390/s24134130
Discipline Engineering
EISSN 1424-8220
GrantInformation National Natural Science Foundation of China (grants 62203059 and 32071680)
ISSN 1424-8220
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 13
Keywords model compression; binary neural networks; gradient mismatch; gradient vanishing; visual place recognition
License https://creativecommons.org/licenses/by/4.0
ORCID 0000-0001-7247-4131 (Dong, Ruifang)
PMID 39000909
SubjectTerms Accuracy; binary neural networks; Computer vision; Datasets; Deep learning; Efficiency; Energy consumption; gradient mismatch; gradient vanishing; Hypotheses; Image retrieval; Investigations; Localization; model compression; Neural networks; Principles; Robots; visual place recognition