Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review


Bibliographic Details
Published in IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, No. 8, pp. 3412–3432
Main Authors Li, Ying, Ma, Lingfei, Zhong, Zilong, Liu, Fei, Chapman, Michael A., Cao, Dongpu, Li, Jonathan
Format Journal Article
Language English
Published Piscataway: IEEE, 01.08.2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
ISSN 2162-237X
2162-2388
DOI 10.1109/TNNLS.2020.3015992

Abstract Recently, the advancement of deep learning (DL) in discriminative feature learning from 3-D LiDAR data has led to rapid development in the field of autonomous driving. However, the automated processing of uneven, unstructured, noisy, and massive 3-D point clouds is a challenging and tedious task. In this article, we provide a systematic review of existing compelling DL architectures applied to LiDAR point clouds, detailing their use in specific tasks in autonomous driving, such as segmentation, detection, and classification. Although several published research articles focus on specific topics in computer vision for autonomous vehicles, to date, no general survey on DL applied to LiDAR point clouds for autonomous vehicles exists. Thus, the goal of this article is to narrow that gap. More than 140 key contributions from the recent five years are summarized in this survey, including the milestone 3-D deep architectures; the remarkable DL applications in 3-D semantic segmentation, object detection, and classification; and specific data sets, evaluation metrics, and the state-of-the-art performance. Finally, we conclude with the remaining challenges and future research directions.
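The tasks the abstract names (segmentation, detection, classification) all rest on learning order-invariant features directly from raw, unordered point sets. As an illustrative sketch only — the function name, layer widths, and random weights below are invented for demonstration and are not taken from the surveyed paper — the shared-MLP-plus-max-pooling idea popularized by PointNet can be written in a few lines of NumPy:

```python
import numpy as np

def pointnet_like_features(points):
    """Toy permutation-invariant feature extractor for a point cloud.

    points: (N, 3) array of XYZ coordinates.
    Returns a fixed-size global feature vector regardless of N or point
    order, mirroring the shared-MLP + symmetric-pooling design.
    """
    rng = np.random.default_rng(0)          # fixed weights for reproducibility
    w1 = rng.standard_normal((3, 16))       # "shared MLP": same weights applied
    w2 = rng.standard_normal((16, 64))      # independently to every point
    h = np.maximum(points @ w1, 0.0)        # per-point hidden features (ReLU)
    h = np.maximum(h @ w2, 0.0)             # per-point 64-D features
    return h.max(axis=0)                    # symmetric max-pool over points

cloud = np.random.default_rng(42).standard_normal((100, 3))
f1 = pointnet_like_features(cloud)
f2 = pointnet_like_features(cloud[::-1])    # same points, reversed order
assert np.allclose(f1, f2)                  # order of points does not matter
```

Because the aggregation is a symmetric max over points, shuffling the input order leaves the global feature unchanged — the property that makes such architectures suitable for unordered LiDAR point clouds.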
Author Cao, Dongpu
Li, Ying
Liu, Fei
Chapman, Michael A.
Ma, Lingfei
Zhong, Zilong
Li, Jonathan
Author_xml – sequence: 1
  givenname: Ying
  orcidid: 0000-0003-0608-9619
  surname: Li
  fullname: Li, Ying
  email: y2424li@uwaterloo.ca
  organization: Department of Geography and Environmental Management, University of Waterloo, Waterloo, ON, Canada
– sequence: 2
  givenname: Lingfei
  orcidid: 0000-0001-8893-9693
  surname: Ma
  fullname: Ma, Lingfei
  email: l53ma@uwaterloo.ca
  organization: Department of Geography and Environmental Management, University of Waterloo, Waterloo, ON, Canada
– sequence: 3
  givenname: Zilong
  orcidid: 0000-0003-0104-9116
  surname: Zhong
  fullname: Zhong, Zilong
  email: z26zhong@uwaterloo.ca
  organization: Department of Geography and Environmental Management, University of Waterloo, Waterloo, ON, Canada
– sequence: 4
  givenname: Fei
  surname: Liu
  fullname: Liu, Fei
  email: fledaliu@xilinx.com
  organization: Xilinx Technology Beijing, Ltd., Beijing, China
– sequence: 5
  givenname: Michael A.
  surname: Chapman
  fullname: Chapman, Michael A.
  email: mchapman@ryerson.ca
  organization: Department of Civil Engineering, Ryerson University, Toronto, ON, Canada
– sequence: 6
  givenname: Dongpu
  orcidid: 0000-0001-7929-4336
  surname: Cao
  fullname: Cao, Dongpu
  email: dongpu.cao@uwaterloo.ca
  organization: Waterloo Cognitive Autonomous Driving Laboratory, University of Waterloo, Waterloo, ON, Canada
– sequence: 7
  givenname: Jonathan
  orcidid: 0000-0001-7899-0049
  surname: Li
  fullname: Li, Jonathan
  email: junli@uwaterloo.ca
  organization: Department of Geography and Environmental Management, University of Waterloo, Waterloo, ON, Canada
CODEN ITNNAL
CitedBy_id crossref_primary_10_3390_rs14174393
crossref_primary_10_3390_rs15153787
crossref_primary_10_1109_LSP_2024_3449226
crossref_primary_10_3390_electronics13152915
crossref_primary_10_3390_s24010283
crossref_primary_10_1016_j_precisioneng_2025_03_016
crossref_primary_10_3390_rs17030443
crossref_primary_10_1109_TNNLS_2023_3326606
crossref_primary_10_3390_sci4040046
crossref_primary_10_1007_s00521_024_10259_2
crossref_primary_10_1109_TITS_2022_3195213
crossref_primary_10_1016_j_autcon_2022_104456
crossref_primary_10_3390_rs13204123
crossref_primary_10_1007_s13042_023_01863_0
crossref_primary_10_1109_TMM_2022_3206664
crossref_primary_10_3390_s22249577
crossref_primary_10_1016_j_isprsjprs_2022_01_001
crossref_primary_10_1080_01431161_2023_2297177
crossref_primary_10_1109_TVT_2024_3372940
crossref_primary_10_1016_j_ultras_2024_107381
crossref_primary_10_1093_jcde_qwae013
crossref_primary_10_1080_14498596_2023_2236996
crossref_primary_10_3390_s22072576
crossref_primary_10_1364_OE_483522
crossref_primary_10_1016_j_knosys_2022_109770
crossref_primary_10_1109_TGRS_2023_3313876
crossref_primary_10_1109_TCSI_2024_3387998
crossref_primary_10_1109_TRO_2023_3323936
crossref_primary_10_1088_2632_2153_aca1f8
crossref_primary_10_1016_j_jag_2022_102836
crossref_primary_10_1016_j_isprsjprs_2023_11_005
crossref_primary_10_1145_3715851
crossref_primary_10_1109_LGRS_2022_3190558
crossref_primary_10_1109_JSEN_2024_3379990
crossref_primary_10_1109_TNNLS_2024_3352974
crossref_primary_10_3390_s24196446
crossref_primary_10_3390_s25051581
crossref_primary_10_3390_heritage7100261
crossref_primary_10_1016_j_ijepes_2023_109348
crossref_primary_10_1016_j_vlsi_2023_102111
crossref_primary_10_1016_j_engappai_2023_105817
crossref_primary_10_1109_TCSII_2024_3364514
crossref_primary_10_1038_s41598_024_62342_2
crossref_primary_10_3390_wevj15010020
crossref_primary_10_1109_TIV_2023_3343878
crossref_primary_10_3390_rs16030453
crossref_primary_10_12688_cobot_17590_1
crossref_primary_10_3390_s22228858
crossref_primary_10_1109_TNNLS_2023_3281871
crossref_primary_10_1016_j_aei_2024_102850
crossref_primary_10_1016_j_cag_2024_104053
crossref_primary_10_1016_j_jfranklin_2024_01_033
crossref_primary_10_3390_rs15010061
crossref_primary_10_1007_s10489_023_04646_w
crossref_primary_10_1007_s42235_025_00654_3
crossref_primary_10_1016_j_cose_2023_103471
crossref_primary_10_3389_fnbot_2022_891158
crossref_primary_10_3390_info15070376
crossref_primary_10_1002_smr_2644
crossref_primary_10_1038_s41598_024_62629_4
crossref_primary_10_1016_j_geits_2023_100125
crossref_primary_10_1109_TGRS_2024_3385667
crossref_primary_10_1016_j_isprsjprs_2021_03_003
crossref_primary_10_1109_ACCESS_2023_3315741
crossref_primary_10_1109_TITS_2022_3220025
crossref_primary_10_1007_s11263_021_01504_5
crossref_primary_10_1080_10095020_2023_2175478
crossref_primary_10_3390_s21237860
crossref_primary_10_1016_j_eswa_2024_125039
crossref_primary_10_1109_TGRS_2022_3226956
crossref_primary_10_3390_s23063085
crossref_primary_10_1109_ACCESS_2024_3486603
crossref_primary_10_1016_j_compeleceng_2024_109555
crossref_primary_10_1016_j_ophoto_2024_100061
crossref_primary_10_1016_j_mfglet_2024_09_175
crossref_primary_10_1016_j_neucom_2024_129277
crossref_primary_10_1109_JSEN_2023_3240295
crossref_primary_10_3390_drones8100561
crossref_primary_10_3390_electronics12245017
crossref_primary_10_1109_TNNLS_2023_3267333
crossref_primary_10_1109_TMM_2022_3216951
crossref_primary_10_1109_TGRS_2022_3165746
crossref_primary_10_3390_machines11010054
crossref_primary_10_1016_j_neucom_2023_126822
crossref_primary_10_1007_s10489_023_05263_3
crossref_primary_10_3390_rs14225760
crossref_primary_10_3390_agriengineering7030068
crossref_primary_10_3390_infrastructures10040070
crossref_primary_10_1016_j_eswa_2023_122140
crossref_primary_10_1109_ACCESS_2024_3456893
crossref_primary_10_1109_TNNLS_2022_3141821
crossref_primary_10_3390_s22239316
crossref_primary_10_3390_wevj16020080
crossref_primary_10_1109_TITS_2023_3347150
crossref_primary_10_32604_cmc_2024_053204
crossref_primary_10_3233_JIFS_223584
crossref_primary_10_3390_make5030041
crossref_primary_10_1007_s10915_021_01699_2
crossref_primary_10_3390_s22114051
crossref_primary_10_1016_j_imavis_2024_105409
crossref_primary_10_1109_TAI_2023_3237787
crossref_primary_10_1109_TMM_2024_3387729
crossref_primary_10_1016_j_jag_2023_103180
crossref_primary_10_1109_TETCI_2023_3259441
crossref_primary_10_1109_ACCESS_2024_3452160
crossref_primary_10_1109_TIP_2024_3437234
crossref_primary_10_1109_ACCESS_2022_3186476
crossref_primary_10_1145_3627160
crossref_primary_10_1109_TIM_2023_3288256
crossref_primary_10_3390_rs16050743
crossref_primary_10_1016_j_neucom_2022_01_005
crossref_primary_10_1109_JSEN_2023_3235830
crossref_primary_10_3390_rs14153583
crossref_primary_10_1016_j_neucom_2023_126732
crossref_primary_10_1109_TSMC_2023_3311446
crossref_primary_10_3390_s22207868
crossref_primary_10_3390_app12041975
crossref_primary_10_1109_TVT_2024_3390414
crossref_primary_10_3390_rs15010253
crossref_primary_10_3390_s20143964
crossref_primary_10_1016_j_patcog_2022_109267
crossref_primary_10_1109_JSEN_2023_3347585
crossref_primary_10_1109_TVT_2024_3351053
crossref_primary_10_1016_j_aei_2024_102911
crossref_primary_10_1016_j_isprsjprs_2023_01_009
crossref_primary_10_1109_JSEN_2023_3320099
crossref_primary_10_1016_j_isprsjprs_2023_08_008
crossref_primary_10_1109_ACCESS_2023_3337049
crossref_primary_10_1145_3639261
crossref_primary_10_1016_j_procs_2022_10_123
crossref_primary_10_1021_acsomega_1c05473
crossref_primary_10_1109_TNNLS_2022_3146306
crossref_primary_10_1109_ACCESS_2022_3181131
crossref_primary_10_1364_OL_545310
crossref_primary_10_1061_JCCEE5_CPENG_4979
crossref_primary_10_1016_j_eswa_2024_123564
crossref_primary_10_3390_s23020601
crossref_primary_10_3390_machines10070507
crossref_primary_10_1016_j_neunet_2023_06_025
crossref_primary_10_1109_MITS_2021_3109041
crossref_primary_10_1007_s12652_022_03874_1
crossref_primary_10_1016_j_jag_2022_102684
crossref_primary_10_1063_5_0154871
crossref_primary_10_1109_TNNLS_2024_3363244
crossref_primary_10_1177_17298806241297424
crossref_primary_10_1109_TCAD_2022_3172031
crossref_primary_10_3390_s23136119
crossref_primary_10_1016_j_compag_2025_109944
crossref_primary_10_1016_j_inffus_2024_102299
crossref_primary_10_1109_TGRS_2022_3162582
crossref_primary_10_1109_TAI_2023_3342104
crossref_primary_10_1109_TITS_2024_3394481
crossref_primary_10_1109_ACCESS_2025_3541023
crossref_primary_10_1016_j_imavis_2024_104916
crossref_primary_10_1109_TGRS_2023_3234542
crossref_primary_10_1007_s44196_023_00303_9
crossref_primary_10_1109_TITS_2022_3183889
crossref_primary_10_3390_s23062936
crossref_primary_10_3390_pr11020501
crossref_primary_10_1016_j_compag_2023_108264
crossref_primary_10_1007_s11263_024_02027_5
crossref_primary_10_7467_KSAE_2022_30_8_635
crossref_primary_10_1016_j_mfglet_2024_09_159
crossref_primary_10_3390_en15134681
crossref_primary_10_1109_TMC_2022_3198089
crossref_primary_10_1109_ACCESS_2024_3351868
crossref_primary_10_3390_rs15143578
crossref_primary_10_1016_j_autcon_2024_105850
crossref_primary_10_1109_TCSVT_2024_3382322
crossref_primary_10_1109_ACCESS_2021_3131389
crossref_primary_10_1109_TIFS_2023_3333687
crossref_primary_10_1109_ACCESS_2023_3337995
crossref_primary_10_3390_s22166210
crossref_primary_10_1038_s41598_021_98697_z
crossref_primary_10_1109_TNNLS_2021_3128968
crossref_primary_10_1002_cta_4465
crossref_primary_10_1155_abb_2451501
crossref_primary_10_1007_s11831_024_10108_4
crossref_primary_10_11834_jig_220942
crossref_primary_10_3390_rs14132955
crossref_primary_10_3390_s22062184
crossref_primary_10_1109_LRA_2023_3301278
crossref_primary_10_1016_j_isprsjprs_2022_04_023
crossref_primary_10_1145_3674979
crossref_primary_10_1007_s00371_024_03600_2
crossref_primary_10_3390_s23083892
crossref_primary_10_1109_TIM_2024_3381260
crossref_primary_10_1080_01431161_2024_2424509
crossref_primary_10_5194_gi_11_195_2022
crossref_primary_10_1016_j_neucom_2024_129313
crossref_primary_10_1155_2022_6430120
crossref_primary_10_1109_JSEN_2023_3344947
crossref_primary_10_1016_j_jag_2021_102580
crossref_primary_10_1109_TIM_2022_3224525
crossref_primary_10_1109_TITS_2022_3204068
crossref_primary_10_1016_j_measurement_2023_113620
crossref_primary_10_3390_s23042019
crossref_primary_10_1109_TCYB_2023_3312647
crossref_primary_10_1109_TNNLS_2022_3233562
crossref_primary_10_1142_S0218271825400103
crossref_primary_10_1109_TNSE_2023_3306202
crossref_primary_10_1016_j_measurement_2025_116860
crossref_primary_10_3390_s21217177
crossref_primary_10_1109_TNNLS_2022_3155282
crossref_primary_10_1109_JSTARS_2023_3324483
crossref_primary_10_1145_3690768
crossref_primary_10_1109_TNNLS_2022_3201534
crossref_primary_10_1109_JIOT_2022_3194716
crossref_primary_10_3390_photonics10101118
crossref_primary_10_3390_rs15041093
crossref_primary_10_1016_j_cag_2022_07_008
crossref_primary_10_1109_TPAMI_2022_3204713
crossref_primary_10_3390_s22155510
crossref_primary_10_3390_machines11110982
crossref_primary_10_1109_TITS_2024_3456293
crossref_primary_10_1088_2631_8695_adaabf
crossref_primary_10_1007_s11760_024_03728_7
crossref_primary_10_1007_s10462_024_10754_x
crossref_primary_10_1007_s00521_024_10733_x
crossref_primary_10_1109_TITS_2024_3391286
crossref_primary_10_1177_20539517231203669
crossref_primary_10_1109_TCYB_2022_3219142
crossref_primary_10_3390_electronics12112424
crossref_primary_10_3390_s24227268
crossref_primary_10_1007_s10489_024_05822_2
crossref_primary_10_1364_AO_538498
crossref_primary_10_1109_TVT_2023_3287687
crossref_primary_10_1016_j_comcom_2024_06_015
crossref_primary_10_1109_TIM_2024_3450071
crossref_primary_10_1109_TIV_2022_3223131
crossref_primary_10_3390_s24196494
crossref_primary_10_21653_tjpr_1421321
crossref_primary_10_1109_TNNLS_2023_3265533
crossref_primary_10_1016_j_asoc_2024_112466
crossref_primary_10_1109_TPAMI_2023_3312592
crossref_primary_10_1109_TSMC_2023_3321881
crossref_primary_10_3390_pr11020435
crossref_primary_10_1016_j_aej_2024_10_037
crossref_primary_10_1016_j_isprsjprs_2024_03_024
crossref_primary_10_1109_ACCESS_2023_3238824
crossref_primary_10_3390_s22218115
crossref_primary_10_1109_TSMC_2023_3276218
crossref_primary_10_34133_research_0467
crossref_primary_10_1088_2040_8986_ac4870
Cites_doi 10.1109/ICPR.2016.7900038
10.1155/2018/7068349
10.1109/ICCV.2017.99
10.1007/978-3-030-01225-0_37
10.1109/CVPR.2018.00907
10.1177/0278364913491297
10.1109/CVPR.2018.00526
10.1109/CVPR.2018.00028
10.1016/j.cag.2015.03.004
10.1109/IROS.2015.7353481
10.3390/s19194188
10.1007/978-3-030-01237-3_6
10.1016/j.isprsjprs.2015.01.011
10.1109/CVPR.2018.00798
10.1080/19479832.2016.1188860
10.1109/CVPR.2018.00979
10.1109/TIM.2018.2840598
10.1038/nature14539
10.1016/j.isprsjprs.2017.05.012
10.1109/CVPR.2018.00272
10.15607/RSS.2016.XII.042
10.1109/3DV.2018.00052
10.1109/TITS.2019.2892405
10.1109/CVPR.2018.00959
10.1177/0278364916679498
10.1109/TITS.2016.2639582
10.3390/s19040810
10.1109/CVPR.2016.90
10.1109/ICCV.2017.230
10.1016/j.isprsjprs.2018.10.007
10.1109/CVPR.2019.00705
10.1109/CVPR.2018.00268
10.1109/MGRS.2016.2540798
10.1109/CVPR.2009.5206590
10.1016/j.neucom.2015.09.116
10.1109/ICCV.2015.114
10.1145/1964179.1964190
10.1007/s11263-014-0733-5
10.1109/ITSC.2018.8569311
10.1109/TIE.2015.2410258
10.1109/ICPR.2016.7899697
10.1016/j.isprsjprs.2018.02.008
10.1145/3042064
10.1109/CVPR.2017.691
10.1109/TNNLS.2018.2876865
10.1109/CVPR.2017.701
10.1109/CVPR.2019.00910
10.1109/MGRS.2017.2762307
10.3390/rs10040612
10.1109/CVPRW.2018.00141
10.1109/CVPR.2018.00479
10.1109/TITS.2017.2752461
10.1109/ICRA.2018.8462926
10.1109/IVS.2011.5940562
10.1016/j.isprsjprs.2017.05.006
10.1109/CVPR.2018.00472
10.1016/j.isprsjprs.2018.06.018
10.1109/3DV.2017.00067
10.1016/j.neucom.2018.09.075
10.1109/TGRS.2014.2359951
10.1007/978-3-030-01225-0_4
10.1016/j.isprsjprs.2019.01.024
10.1109/IROS.2017.8206198
10.1109/CVPR.2018.00278
10.1109/CVPR.2012.6248074
10.1016/j.cag.2017.10.007
10.1109/JSTARS.2017.2781132
10.1109/CVPR.2018.00375
10.1007/978-3-030-01249-6_28
10.1109/ICRA.2017.7989161
10.1007/978-3-030-01228-1_35
10.1109/CVPR.2017.16
10.1109/CVPR.2015.7298965
10.1007/978-3-319-64689-3_8
10.1109/TGRS.2018.2829625
10.1007/s11263-015-0816-y
10.1016/j.trc.2018.02.012
10.1109/TGRS.2017.2769120
10.1016/j.optlastec.2017.06.015
10.1109/CVPR.2017.11
10.1109/IROS.2017.8205955
10.1109/CVPR.2015.7298594
10.1109/CVPR.2018.00033
10.3390/rs3061104
10.1145/1531326.1531377
10.1016/j.isprsjprs.2018.11.006
10.1109/CVPR.2018.00484
10.3390/rs10101531
10.1109/CVPR.2019.01054
10.1109/ICCV.2019.00937
10.1016/j.isprsjprs.2018.04.022
10.1007/978-3-030-01267-0_37
10.1109/CVPR.2018.00102
10.1007/978-3-030-01270-0_39
10.1109/MSP.2017.2693418
10.1109/ICRA.2019.8793983
10.1109/CVPR.2016.609
10.15607/RSS.2015.XI.035
10.1109/JPROC.2017.2761740
10.5194/isprs-annals-IV-1-W1-91-2017
10.1016/j.neucom.2016.12.038
10.1145/3240508.3240702
10.1145/1618452.1618522
10.1109/CVPR.2017.697
10.1109/IROS.2018.8594049
10.1007/978-3-642-33715-4_54
10.1109/LRA.2018.2852843
10.1109/CVPR.2018.00481
10.1016/j.isprsjprs.2015.01.016
10.1109/TPAMI.2017.2706685
10.1109/RAM.2013.6758588
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DBID 97E
RIA
RIE
AAYXX
CITATION
7QF
7QO
7QP
7QQ
7QR
7SC
7SE
7SP
7SR
7TA
7TB
7TK
7U5
8BQ
8FD
F28
FR3
H8D
JG9
JQ2
KR7
L7M
L~C
L~D
P64
7X8
DOI 10.1109/TNNLS.2020.3015992
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Aluminium Industry Abstracts
Biotechnology Research Abstracts
Calcium & Calcified Tissue Abstracts
Ceramic Abstracts
Chemoreception Abstracts
Computer and Information Systems Abstracts
Corrosion Abstracts
Electronics & Communications Abstracts
Engineered Materials Abstracts
Materials Business File
Mechanical & Transportation Engineering Abstracts
Neurosciences Abstracts
Solid State and Superconductivity Abstracts
METADEX
Technology Research Database
ANTE: Abstracts in New Technology & Engineering
Engineering Research Database
Aerospace Database
Materials Research Database
ProQuest Computer Science Collection
Civil Engineering Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Biotechnology and BioEngineering Abstracts
MEDLINE - Academic
DatabaseTitle CrossRef
Materials Research Database
Technology Research Database
Computer and Information Systems Abstracts – Academic
Mechanical & Transportation Engineering Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Materials Business File
Aerospace Database
Engineered Materials Abstracts
Biotechnology Research Abstracts
Chemoreception Abstracts
Advanced Technologies Database with Aerospace
ANTE: Abstracts in New Technology & Engineering
Civil Engineering Abstracts
Aluminium Industry Abstracts
Electronics & Communications Abstracts
Ceramic Abstracts
Neurosciences Abstracts
METADEX
Biotechnology and BioEngineering Abstracts
Computer and Information Systems Abstracts Professional
Solid State and Superconductivity Abstracts
Engineering Research Database
Calcium & Calcified Tissue Abstracts
Corrosion Abstracts
MEDLINE - Academic
DatabaseTitleList
MEDLINE - Academic
Materials Research Database
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 2162-2388
EndPage 3432
ExternalDocumentID 10_1109_TNNLS_2020_3015992
9173706
Genre orig-research
GrantInformation_xml – fundername: Natural Sciences and Engineering Research Council of Canada (NSERC)
  grantid: 50503-10284
  funderid: 10.13039/501100000038
GroupedDBID 0R~
4.4
5VS
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACIWK
ACPRK
AENEX
AFRAH
AGQYO
AGSQL
AHBIQ
AKJIK
AKQYR
ALMA_UNASSIGNED_HOLDINGS
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
EBS
EJD
IFIPE
IPLJI
JAVBF
M43
MS~
O9-
OCL
PQQKQ
RIA
RIE
RNS
AAYXX
CITATION
RIG
7QF
7QO
7QP
7QQ
7QR
7SC
7SE
7SP
7SR
7TA
7TB
7TK
7U5
8BQ
8FD
F28
FR3
H8D
JG9
JQ2
KR7
L7M
L~C
L~D
P64
7X8
ID FETCH-LOGICAL-c394t-d79e4d3d4eaecb179790dd4ad1986245e43347bfe526ca39c344a6ab723149f63
IEDL.DBID RIE
ISSN 2162-237X
2162-2388
IngestDate Fri Jul 11 12:28:46 EDT 2025
Mon Jun 30 07:16:15 EDT 2025
Thu Apr 24 23:07:43 EDT 2025
Tue Jul 01 00:27:35 EDT 2025
Wed Aug 27 02:39:34 EDT 2025
IsPeerReviewed false
IsScholarly true
Issue 8
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c394t-d79e4d3d4eaecb179790dd4ad1986245e43347bfe526ca39c344a6ab723149f63
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
content type line 23
ORCID 0000-0003-0104-9116
0000-0001-7929-4336
0000-0003-0608-9619
0000-0001-8893-9693
0000-0001-7899-0049
PMID 32822311
PQID 2557979368
PQPubID 85436
PageCount 21
ParticipantIDs proquest_journals_2557979368
proquest_miscellaneous_2436403785
crossref_citationtrail_10_1109_TNNLS_2020_3015992
crossref_primary_10_1109_TNNLS_2020_3015992
ieee_primary_9173706
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2021-08-01
PublicationDateYYYYMMDD 2021-08-01
PublicationDate_xml – month: 08
  year: 2021
  text: 2021-08-01
  day: 01
PublicationDecade 2020
PublicationPlace Piscataway
PublicationPlace_xml – name: Piscataway
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref57
mescheder (ref136) 2018
ref56
ref58
ref53
ref52
ref55
ref54
li (ref71) 2018
veličković (ref75) 2017
simonyan (ref87) 2014
engelmann (ref99) 2018
ref51
ref50
krizhevsky (ref86) 2012
roynard (ref59) 2018
ref46
ref45
ref48
ref47
ref42
ref41
ref44
sauder (ref145) 2019
ref43
ref49
ref8
ref7
ref9
ref4
yu (ref125) 2017
lafferty (ref114) 2001
ref3
ahmed (ref29) 2018
ref6
liu (ref23) 2018
ref5
ref100
ref101
ref40
ref34
ref37
ref36
ref31
ref33
ref32
zhi (ref94) 2017
ref39
lei (ref98) 2018
rusu (ref103) 2011
shoef (ref146) 2019
ref24
ref26
ref20
de deuge (ref61) 2013; 2
ref22
ref21
wu (ref68) 2016
liaw (ref117) 2002; 2
garcia-garcia (ref25) 2017
ref28
ref27
boulch (ref11) 2017
fujiwara (ref140) 2018
ref13
ref128
ref15
ref129
ref14
ref97
ref127
ref96
ref124
vosselman (ref80) 2004; 46
ref10
ref17
ref16
ref19
ref18
ref133
ref93
ref134
ref92
ref131
ref95
ref132
ref91
treml (ref38) 2016; 1
ref89
ref139
ren (ref126) 2015
ref137
qi (ref12) 2017
ref138
ref85
ref135
ref88
goodfellow (ref81) 2014
ref144
ref82
ref142
ref84
ref143
ref83
ref141
ref79
ref108
ref78
ref109
ref106
ref107
ref104
ref105
ref77
ref102
ref76
ref2
sedaghat (ref130) 2016
ref111
ref70
ref112
hana (ref35) 2018
ref73
ref72
ref110
ref67
ref69
ref118
ref64
ref115
ref63
ref66
ref113
ref65
wu (ref30) 2015
janai (ref1) 2017
wang (ref74) 2018
yang (ref119) 2018
simony (ref116) 2018
ref60
ref122
su (ref90) 2018
ref123
ref62
ref120
ref121
References_xml – year: 2014
  ident: ref87
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: arXiv:1409.1556
– ident: ref112
  doi: 10.1109/ICPR.2016.7900038
– ident: ref20
  doi: 10.1155/2018/7068349
– ident: ref70
  doi: 10.1109/ICCV.2017.99
– ident: ref108
  doi: 10.1007/978-3-030-01225-0_37
– year: 2016
  ident: ref130
  article-title: Orientation-boosted voxel nets for 3D object recognition
  publication-title: arXiv:1604.03351
– start-page: 9
  year: 2017
  ident: ref94
  article-title: Lightnet: A lightweight 3D convolutional neural network for real-time 3D object recognition
  publication-title: Proc 3DOR
– ident: ref120
  doi: 10.1109/CVPR.2018.00907
– ident: ref54
  doi: 10.1177/0278364913491297
– ident: ref78
  doi: 10.1109/CVPR.2018.00526
– ident: ref142
  doi: 10.1109/CVPR.2018.00028
– ident: ref58
  doi: 10.1016/j.cag.2015.03.004
– ident: ref67
  doi: 10.1109/IROS.2015.7353481
– year: 2018
  ident: ref35
  article-title: A comprehensive review of 3D point cloud descriptors
  publication-title: arXiv:1802.02297
– ident: ref37
  doi: 10.3390/s19194188
– ident: ref129
  doi: 10.1007/978-3-030-01237-3_6
– ident: ref102
  doi: 10.1016/j.isprsjprs.2015.01.011
– ident: ref8
  doi: 10.1109/CVPR.2018.00798
– ident: ref32
  doi: 10.1080/19479832.2016.1188860
– ident: ref96
  doi: 10.1109/CVPR.2018.00979
– ident: ref131
  doi: 10.1109/TIM.2018.2840598
– ident: ref17
  doi: 10.1038/nature14539
– year: 2017
  ident: ref75
  article-title: Graph attention networks
  publication-title: arXiv:1710.10903
– ident: ref48
  doi: 10.1016/j.isprsjprs.2017.05.012
– year: 2019
  ident: ref145
  article-title: Context prediction for unsupervised deep learning on point clouds
  publication-title: arXiv:1901.08396
– ident: ref109
  doi: 10.1109/CVPR.2018.00272
– ident: ref9
  doi: 10.15607/RSS.2016.XII.042
– ident: ref104
  doi: 10.1109/3DV.2018.00052
– ident: ref36
  doi: 10.1109/TITS.2019.2892405
– ident: ref137
  doi: 10.1109/CVPR.2018.00959
– ident: ref64
  doi: 10.1177/0278364916679498
– ident: ref97
  doi: 10.1109/TITS.2016.2639582
– ident: ref33
  doi: 10.3390/s19040810
– ident: ref89
  doi: 10.1109/CVPR.2016.90
– ident: ref82
  doi: 10.1109/ICCV.2017.230
– ident: ref47
  doi: 10.1016/j.isprsjprs.2018.10.007
– ident: ref135
  doi: 10.1109/CVPR.2019.00705
– ident: ref110
  doi: 10.1109/CVPR.2018.00268
– ident: ref21
  doi: 10.1109/MGRS.2016.2540798
– ident: ref57
  doi: 10.1109/CVPR.2009.5206590
– year: 2018
  ident: ref119
  article-title: IPOD: Intensive point-based object detector for point cloud
  publication-title: arXiv:1812.05276
– ident: ref19
  doi: 10.1016/j.neucom.2015.09.116
– ident: ref76
  doi: 10.1109/ICCV.2015.114
– ident: ref83
  doi: 10.1145/1964179.1964190
– year: 2018
  ident: ref29
  article-title: A survey on deep learning advances on different 3D data representations: A survey
  publication-title: arXiv:1808.01462
– ident: ref62
  doi: 10.1007/s11263-014-0733-5
– volume: 1
  start-page: 5
  year: 2016
  ident: ref38
  article-title: Speeding up semantic segmentation for autonomous driving
  publication-title: Proc MLITS NIPS Workshop
– ident: ref43
  doi: 10.1109/ITSC.2018.8569311
– ident: ref55
  doi: 10.1109/TIE.2015.2410258
– ident: ref50
  doi: 10.1109/ICPR.2016.7899697
– ident: ref65
  doi: 10.1016/j.isprsjprs.2018.02.008
– volume: 2
  start-page: 1
  year: 2013
  ident: ref61
  article-title: Unsupervised feature learning for classification of outdoor 3D scans
  publication-title: Proc ACRA
– ident: ref28
  doi: 10.1145/3042064
– year: 2017
  ident: ref1
  article-title: Computer vision for autonomous vehicles: Problems, datasets and state of the art
  publication-title: arXiv:1704.05519
– ident: ref123
  doi: 10.1109/CVPR.2017.691
– ident: ref24
  doi: 10.1109/TNNLS.2018.2876865
– ident: ref69
  doi: 10.1109/CVPR.2017.701
– ident: ref52
  doi: 10.1109/CVPR.2019.00910
– start-page: 395
  year: 2018
  ident: ref99
  article-title: Know what your neighbors do: 3D semantic segmentation of point clouds
  publication-title: Proc ECCV
– start-page: 2672
  year: 2014
  ident: ref81
  article-title: Generative adversarial nets
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref22
  doi: 10.1109/MGRS.2017.2762307
– ident: ref106
  doi: 10.3390/rs10040612
– start-page: 5099
  year: 2017
  ident: ref12
  article-title: Pointnet++: Deep hierarchical feature learning on point sets in a metric space
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref4
  doi: 10.1109/CVPRW.2018.00141
– ident: ref105
  doi: 10.1109/CVPR.2018.00479
– ident: ref5
  doi: 10.1109/TITS.2017.2752461
– ident: ref7
  doi: 10.1109/ICRA.2018.8462926
– ident: ref2
  doi: 10.1109/IVS.2011.5940562
– year: 2018
  ident: ref74
  article-title: Dynamic graph CNN for learning on point clouds
  publication-title: arXiv:1801.07829
– ident: ref101
  doi: 10.1016/j.isprsjprs.2017.05.006
– ident: ref13
  doi: 10.1109/CVPR.2018.00472
– ident: ref141
  doi: 10.1016/j.isprsjprs.2018.06.018
– ident: ref113
  doi: 10.1109/3DV.2017.00067
– ident: ref132
  doi: 10.1016/j.neucom.2018.09.075
– ident: ref46
  doi: 10.1109/TGRS.2014.2359951
– ident: ref84
  doi: 10.1007/978-3-030-01225-0_4
– ident: ref45
  doi: 10.1016/j.isprsjprs.2019.01.024
– ident: ref16
  doi: 10.1109/IROS.2017.8206198
– volume: 2
  start-page: 18
  year: 2002
  ident: ref117
  article-title: Classification and regression by randomForest
  publication-title: R News
– ident: ref40
  doi: 10.1109/CVPR.2018.00278
– ident: ref60
  doi: 10.1109/CVPR.2012.6248074
– ident: ref93
  doi: 10.1016/j.cag.2017.10.007
– start-page: 1
  year: 2011
  ident: ref103
  article-title: Point cloud library (PCL)
  publication-title: Proc IEEE ICRA
– start-page: 1097
  year: 2012
  ident: ref86
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref34
  doi: 10.1109/JSTARS.2017.2781132
– year: 2017
  ident: ref25
  article-title: A review on deep learning techniques applied to semantic segmentation
  publication-title: arXiv:1704.06857
– year: 2018
  ident: ref59
  article-title: Classification of point cloud scenes with multiscale voxel deep network
  publication-title: arXiv:1804.03583
– start-page: 1912
  year: 2015
  ident: ref30
  article-title: 3D ShapeNets: A deep representation for volumetric shapes
  publication-title: Proc IEEE CVPR
– ident: ref44
  doi: 10.1109/CVPR.2018.00375
– ident: ref77
  doi: 10.1007/978-3-030-01249-6_28
– ident: ref122
  doi: 10.1109/ICRA.2017.7989161
– ident: ref139
  doi: 10.1007/978-3-030-01228-1_35
– ident: ref10
  doi: 10.1109/CVPR.2017.16
– ident: ref79
  doi: 10.1109/CVPR.2015.7298965
– year: 2019
  ident: ref146
  article-title: PointWise: An unsupervised point-wise feature learning network
  publication-title: arXiv:1901.04544
– ident: ref107
  doi: 10.1007/978-3-319-64689-3_8
– ident: ref111
  doi: 10.1109/TGRS.2018.2829625
– ident: ref92
  doi: 10.1007/s11263-015-0816-y
– ident: ref3
  doi: 10.1016/j.trc.2018.02.012
– ident: ref118
  doi: 10.1109/TGRS.2017.2769120
– ident: ref63
  doi: 10.1016/j.optlastec.2017.06.015
– ident: ref73
  doi: 10.1109/CVPR.2017.11
– start-page: 7
  year: 2017
  ident: ref11
  article-title: Unstructured point cloud semantic labeling using deep segmentation networks
  publication-title: Proc 3DOR
– ident: ref14
  doi: 10.1109/IROS.2017.8205955
– ident: ref88
  doi: 10.1109/CVPR.2015.7298594
– ident: ref134
  doi: 10.1109/CVPR.2018.00033
– year: 2018
  ident: ref136
  article-title: Occupancy networks: Learning 3D reconstruction in function space
  publication-title: arXiv:1812.03828
– ident: ref6
  doi: 10.3390/rs3061104
– start-page: 82
  year: 2016
  ident: ref68
  article-title: Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref51
  doi: 10.1145/1531326.1531377
– ident: ref49
  doi: 10.1016/j.isprsjprs.2018.11.006
– ident: ref143
  doi: 10.1109/CVPR.2018.00484
– ident: ref31
  doi: 10.3390/rs10101531
– ident: ref85
  doi: 10.1109/CVPR.2019.01054
– ident: ref42
  doi: 10.1109/ICCV.2019.00937
– start-page: 91
  year: 2015
  ident: ref126
  article-title: Faster R-CNN: Towards real-time object detection with region proposal networks
  publication-title: Proc Adv Neural Inf Process Syst
– ident: ref115
  doi: 10.1016/j.isprsjprs.2018.04.022
– start-page: 197
  year: 2018
  ident: ref116
  article-title: Complex-YOLO: An Euler-region-proposal for real-time 3D object detection on point clouds
  publication-title: Proc ECCV
– ident: ref144
  doi: 10.1007/978-3-030-01267-0_37
– ident: ref41
  doi: 10.1109/CVPR.2018.00102
– ident: ref133
  doi: 10.1007/978-3-030-01270-0_39
– volume: 46
  start-page: 33
  year: 2004
  ident: ref80
  article-title: Recognising structure in laser scanner point clouds
  publication-title: ISPRS Int Arch Photogramm Remote Sens Spatial Inf Sci
– ident: ref27
  doi: 10.1109/MSP.2017.2693418
– ident: ref138
  doi: 10.1109/ICRA.2019.8793983
– start-page: 645
  year: 2018
  ident: ref90
  article-title: A deeper look at 3D shape classifiers
  publication-title: Proc ECCV
– ident: ref15
  doi: 10.1109/CVPR.2016.609
– ident: ref121
  doi: 10.15607/RSS.2015.XI.035
– ident: ref18
  doi: 10.1109/JPROC.2017.2761740
– ident: ref56
  doi: 10.5194/isprs-annals-IV-1-W1-91-2017
– year: 2018
  ident: ref23
  article-title: Deep learning for generic object detection: A survey
  publication-title: arXiv:1809.02165
– ident: ref26
  doi: 10.1016/j.neucom.2016.12.038
– ident: ref91
  doi: 10.1145/3240508.3240702
– ident: ref53
  doi: 10.1145/1618452.1618522
– ident: ref72
  doi: 10.1109/CVPR.2017.697
– ident: ref124
  doi: 10.1109/IROS.2018.8594049
– start-page: 102
  year: 2017
  ident: ref125
  article-title: Vehicle detection and localization on bird's eye view elevation images using convolutional neural network
  publication-title: Proc IEEE SSRR
– ident: ref128
  doi: 10.1007/978-3-642-33715-4_54
– ident: ref127
  doi: 10.1109/LRA.2018.2852843
– start-page: 282
  year: 2001
  ident: ref114
  article-title: Conditional random fields: Probabilistic models for segmenting and labeling sequence data
  publication-title: Proc ICML
– ident: ref95
  doi: 10.1109/CVPR.2018.00481
– year: 2018
  ident: ref140
  article-title: Canonical and compact point cloud representation for shape classification
  publication-title: arXiv:1809.04820
– ident: ref100
  doi: 10.1016/j.isprsjprs.2015.01.016
– ident: ref66
  doi: 10.1109/TPAMI.2017.2706685
– ident: ref39
  doi: 10.1109/RAM.2013.6758588
– start-page: 820
  year: 2018
  ident: ref71
  article-title: PointCNN: Convolution on X-transformed points
  publication-title: Proc NeurIPS
– year: 2018
  ident: ref98
  article-title: Spherical convolutional neural network for 3D point clouds
  publication-title: arXiv:1805.07872
SSID ssj0000605649
Snippet Recently, the advancement of deep learning (DL) in discriminative feature learning from 3-D LiDAR data has led to rapid development in the field of autonomous...
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 3412
SubjectTerms Autonomous driving
Autonomous vehicles
Classification
Computer vision
Deep learning
deep learning (DL)
Laser radar
LiDAR
Machine learning
object classification
Object detection
Object recognition
point clouds
Polls & surveys
Semantic segmentation
Semantics
Solid modeling
Task analysis
Three dimensional models
Three-dimensional displays
Title Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review
URI https://ieeexplore.ieee.org/document/9173706
https://www.proquest.com/docview/2557979368
https://www.proquest.com/docview/2436403785
Volume 32
hasFullText 1
inHoldings 1
linkProvider IEEE