Deep Hough Transform for Semantic Line Detection

Bibliographic Details
Published in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44; no. 9; pp. 4793-4806
Main Authors Zhao, Kai, Han, Qi, Zhang, Chang-Bin, Xu, Jun, Cheng, Ming-Ming
Format Journal Article
Language English
Published United States: IEEE, 01.09.2022
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Abstract We focus on the fundamental task of detecting meaningful line structures, a.k.a. semantic lines, in natural scenes. Many previous methods regard this problem as a special case of object detection and adapt existing object detectors for semantic line detection. However, these methods neglect the inherent characteristics of lines, leading to sub-optimal performance. Lines have much simpler geometric properties than complex objects and thus can be compactly parameterized by a few arguments. To better exploit this property, in this paper we incorporate the classical Hough transform technique into deeply learned representations and propose a one-shot, end-to-end learning framework for line detection. By parameterizing lines with slopes and biases, we perform a Hough transform to translate deep representations into the parametric domain, in which we perform line detection. Specifically, we aggregate features along candidate lines on the feature-map plane and then assign the aggregated features to the corresponding locations in the parametric domain. Consequently, the problem of detecting semantic lines in the spatial domain is transformed into spotting individual points in the parametric domain, making the post-processing steps, i.e., non-maximal suppression, more efficient. Furthermore, our method makes it easy to extract contextual line features that are critical for accurate line detection. In addition to the proposed method, we design an evaluation metric to assess the quality of line detection and construct a large-scale dataset for the line detection task. Experimental results on our proposed dataset and another public dataset demonstrate the advantages of our method over previous state-of-the-art alternatives. The dataset and source code are available at https://mmcheng.net/dhtline/ .
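The core mechanism the abstract describes, aggregating features along every candidate line and scattering the sums into an (angle, offset) parametric grid, where detection reduces to spotting peaks, can be sketched in plain NumPy. This is a minimal illustration of the Hough-aggregation idea, not the authors' implementation; the function name `hough_aggregate` and the binning scheme are assumptions chosen for demonstration.

```python
import numpy as np

def hough_aggregate(feat, n_theta=60, n_rho=60):
    """Sum a 2-D feature map along candidate lines and scatter the sums
    into an (angle, offset) accumulator -- the core of a Hough transform.
    Each pixel votes, weighted by its feature value, for every line
    passing through it."""
    h, w = feat.shape
    ys, xs = np.indices(feat.shape)
    # Centre coordinates so the offset rho is bounded by half the diagonal.
    x = (xs - w / 2.0).ravel()
    y = (ys - h / 2.0).ravel()
    vals = feat.ravel()
    diag = np.hypot(h, w) / 2.0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho))
    for t, theta in enumerate(thetas):
        rho = x * np.cos(theta) + y * np.sin(theta)   # signed line offset
        r = np.clip(((rho / diag + 1.0) / 2.0 * n_rho).astype(int),
                    0, n_rho - 1)
        np.add.at(acc, (t, r), vals)                  # scatter-add the votes
    return acc, thetas

# "Detection" in the parametric domain is then just spotting a peak:
feat = np.zeros((32, 32))
feat[16, :] = 1.0                                     # one horizontal line
acc, thetas = hough_aggregate(feat)
t, r = np.unravel_index(np.argmax(acc), acc.shape)
print(round(float(thetas[t]), 3), acc[t, r])          # angle near pi/2
```

Because a line in the image collapses to a single point in (theta, rho) space, non-maximal suppression over this grid is far cheaper than suppressing overlapping line proposals in the spatial domain, which is the efficiency argument made in the abstract.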
Author Zhao, Kai
Han, Qi
Zhang, Chang-Bin
Cheng, Ming-Ming
Xu, Jun
Author_xml – sequence: 1
  givenname: Kai
  orcidid: 0000-0002-2496-0829
  surname: Zhao
  fullname: Zhao, Kai
  email: kz@kaizhao.net
  organization: TKLNDST, College of Computer Science, Nankai University, Tianjin, China
– sequence: 2
  givenname: Qi
  surname: Han
  fullname: Han, Qi
  email: hqer@foxmail.com
  organization: TKLNDST, College of Computer Science, Nankai University, Tianjin, China
– sequence: 3
  givenname: Chang-Bin
  orcidid: 0000-0003-0043-8240
  surname: Zhang
  fullname: Zhang, Chang-Bin
  email: zhangchbin@mail.nankai.edu.cn
  organization: TKLNDST, College of Computer Science, Nankai University, Tianjin, China
– sequence: 4
  givenname: Jun
  surname: Xu
  fullname: Xu, Jun
  email: nankaimathxujun@gmail.com
  organization: School of Statistics and Data Science, Nankai University, Tianjin, China
– sequence: 5
  givenname: Ming-Ming
  orcidid: 0000-0001-5550-8758
  surname: Cheng
  fullname: Cheng, Ming-Ming
  email: cmm@nankai.edu.cn
  organization: TKLNDST, College of Computer Science, Nankai University, Tianjin, China
CODEN ITPIDJ
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TPAMI.2021.3077129
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
PubMed
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
MEDLINE - Academic
Discipline Engineering
Computer Science
EISSN 2160-9292
1939-3539
EndPage 4806
ExternalDocumentID 33939606
10_1109_TPAMI_2021_3077129
9422200
Genre orig-research
Journal Article
GrantInformation_xml – fundername: NSFC
  grantid: 61922046; 61620106008; 62002176
– fundername: National Key Research and Development Program of China
  grantid: 2018AAA0100400
– fundername: Chinese Ministry of Education
– fundername: Tianjin Natural Science Foundation
  grantid: 17JCJQJC43700
ISSN 0162-8828
1939-3539
IsPeerReviewed true
IsScholarly true
Issue 9
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
ORCID 0000-0001-5550-8758
0000-0002-2496-0829
0000-0003-0043-8240
PMID 33939606
PQID 2698828038
PQPubID 85458
PageCount 14
PublicationCentury 2000
PublicationDate 2022-09-01
PublicationDecade 2020
PublicationPlace United States
PublicationTitle IEEE transactions on pattern analysis and machine intelligence
PublicationTitleAbbrev TPAMI
PublicationTitleAlternate IEEE Trans Pattern Anal Mach Intell
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 4793
SubjectTerms CNN
Datasets
deep learning
Detectors
Domains
Feature extraction
Feature maps
hough transform
Hough transformation
Image edge detection
Measurement
Object recognition
Quality assessment
Representations
Semantic line detection
Semantics
Source code
Task analysis
Transforms
Title Deep Hough Transform for Semantic Line Detection
URI https://ieeexplore.ieee.org/document/9422200
https://www.ncbi.nlm.nih.gov/pubmed/33939606
https://www.proquest.com/docview/2698828038
https://www.proquest.com/docview/2522191901
Volume 44