Pseudo-Mono for Monocular 3D Object Detection in Autonomous Driving

Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 8, pp. 3962-3975
Main Authors Tao, Chongben; Cao, Jiecheng; Wang, Chen; Zhang, Zufeng; Gao, Zhen
Format Journal Article
Language English
Published New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2023
Abstract Current monocular 3D object detection algorithms generally suffer from inaccurate depth estimation, which reduces detection accuracy. The depth error of image-to-image (stereo-view) generation is small compared with that of single-image generation. Therefore, a novel pseudo-monocular 3D object detection framework, called Pseudo-Mono, is proposed, which brings stereo images into monocular 3D detection. Firstly, stereo images are taken as input, and a lightweight depth predictor generates a depth map of the input images. Secondly, the left image of the stereo pair is used as the subject, from which enhanced visual features and multi-scale depth features are generated by depth indexing and feature-matching probabilities, respectively. Finally, sparse anchors, set by the foreground probability maps and the multi-scale feature maps, are used as reference points to find a suitable initialization of the object queries. The encoded visual features are adopted to enhance the object queries, enabling deep interaction between visual and depth features. Compared with popular monocular 3D object detection methods, Pseudo-Mono achieves richer fine-grained information without additional data input. Extensive experimental results on the KITTI, NuScenes, and MS-COCO datasets demonstrate the generalizability and portability of the proposed method, and extensive ablation experiments demonstrate its effectiveness and efficiency. Experiments on a real vehicle platform show that the proposed method maintains high performance in complex real-world environments.
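The abstract outlines a pipeline: a lightweight depth predictor runs on the stereo pair, the left image provides the visual features, depth indexing and feature-matching probabilities produce depth features, foreground probability maps select sparse anchors, and those anchors initialize object queries that interact with the visual features. A minimal PyTorch-style sketch of that flow is shown below; the module names, layer choices, feature sizes, and the plain transformer decoder are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LightweightDepthPredictor(nn.Module):
    """Stand-in for the lightweight depth predictor: stereo pair -> per-pixel depth-bin logits."""

    def __init__(self, depth_bins=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, depth_bins, 3, stride=2, padding=1),
        )

    def forward(self, left, right):
        # Concatenate the stereo views along channels and predict depth-bin logits.
        return self.net(torch.cat([left, right], dim=1))


class PseudoMonoSketch(nn.Module):
    """Toy version of the pipeline described in the abstract (shapes and modules are assumptions)."""

    def __init__(self, depth_bins=64, feat_dim=128, num_queries=50):
        super().__init__()
        self.depth_predictor = LightweightDepthPredictor(depth_bins)
        # Monocular visual backbone applied to the left image only.
        self.backbone = nn.Sequential(nn.Conv2d(3, feat_dim, 3, stride=4, padding=1), nn.ReLU())
        self.depth_embed = nn.Embedding(depth_bins, feat_dim)  # depth-bin embeddings
        self.foreground_head = nn.Conv2d(feat_dim, 1, 1)       # foreground probability map
        self.query_proj = nn.Linear(feat_dim * 2, feat_dim)    # object-query initialization
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(feat_dim, nhead=8, batch_first=True), num_layers=2)
        self.num_queries = num_queries

    def forward(self, left, right):
        # 1) Depth map predicted from the stereo pair.
        depth_prob = self.depth_predictor(left, right).softmax(dim=1)   # (B, D, H/4, W/4)

        # 2) Visual features from the left image (the "monocular" branch).
        vis = self.backbone(left)                                        # (B, C, H/4, W/4)

        # 3) Depth features: expectation over depth-bin embeddings ("depth indexing"
        #    weighted by the feature-matching probabilities).
        depth_feat = torch.einsum('bdhw,dc->bchw', depth_prob, self.depth_embed.weight)

        # 4) Sparse anchors: keep the top-K most foreground-like locations.
        fg = self.foreground_head(vis).flatten(2).squeeze(1)             # (B, H*W)
        topk = fg.topk(self.num_queries, dim=1).indices                  # (B, K)

        vis_flat = vis.flatten(2).transpose(1, 2)                        # (B, H*W, C)
        depth_flat = depth_feat.flatten(2).transpose(1, 2)               # (B, H*W, C)

        def gather_at(x):  # pick the K anchor locations from a (B, H*W, C) tensor
            return torch.gather(x, 1, topk.unsqueeze(-1).expand(-1, -1, x.size(-1)))

        # 5) Initialize object queries from visual + depth features at the anchors.
        queries = self.query_proj(torch.cat([gather_at(vis_flat), gather_at(depth_flat)], dim=-1))

        # 6) Queries interact with the depth-enhanced visual memory in a transformer decoder.
        return self.decoder(queries, vis_flat + depth_flat)              # (B, K, C)


if __name__ == "__main__":
    left = torch.randn(1, 3, 256, 512)    # left view of a stereo pair
    right = torch.randn(1, 3, 256, 512)   # right view of a stereo pair
    out = PseudoMonoSketch()(left, right)
    print(out.shape)                       # torch.Size([1, 50, 128])
```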
Authors
– Tao, Chongben (ORCID 0000-0002-8196-9280; tom1tao@163.com), School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
– Cao, Jiecheng (ORCID 0000-0001-5236-789X; caojc9527@163.com), School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
– Wang, Chen (ORCID 0000-0002-5340-9737; chenwang@usts.edu.cn), School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
– Zhang, Zufeng (zhangzufeng@tsari.tsinghua.edu.cn), Department of Automation, Tsinghua University, Beijing, China
– Gao, Zhen (ORCID 0000-0003-3412-280X; gaozhen@mcmaster.ca), Faculty of Engineering, McMaster University, Hamilton, ON, Canada
CODEN ITCTEM
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023
DOI 10.1109/TCSVT.2023.3237579
Discipline Engineering
EISSN 1558-2205
EndPage 3975
Genre orig-research
GrantInformation
– China Postdoctoral Science Foundation, grant 2021M691848 (funder ID 10.13039/501100002858)
– Science and Technology Projects Fund of Suzhou, grant SYG202142
– National Natural Science Foundation of China, grants 61801323 and 62201375 (funder ID 10.13039/501100001809)
– Natural Science Foundation of Jiangsu Province, grant BK20220635 (funder ID 10.13039/501100004608)
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 8
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PageCount 14
PublicationDate 2023-08-01
PublicationPlace New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2023
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 3962
SubjectTerms 3D object detection
Ablation
Algorithms
Decoding
Feature extraction
Feature maps
Heuristic algorithms
Image enhancement
Image processing
multi-scale feature
Object detection
Object recognition
sparse anchor point
stereo images
Three-dimensional displays
transformer
Transformers
Visualization
Title Pseudo-Mono for Monocular 3D Object Detection in Autonomous Driving
URI https://ieeexplore.ieee.org/document/10018400
https://www.proquest.com/docview/2845762101
Volume 33