High-for-Low and Low-for-High: Efficient Boundary Detection from Deep Object Features and Its Applications to High-Level Vision
Published in | 2015 IEEE International Conference on Computer Vision (ICCV), pp. 504 - 512 |
---|---|
Main Authors | Gedas Bertasius; Jianbo Shi; Lorenzo Torresani |
Format | Conference Proceeding; Journal Article |
Language | English |
Published | IEEE, 01.12.2015 |
DOI | 10.1109/ICCV.2015.65 |
EISSN | 2380-7504 |
Subjects | Boundaries; Computer vision; Convolutional codes; Feature extraction; Image edge detection; Interpolation; Labeling; Mathematical models; Semantics; State of the art; Tasks; Texture; Training; Vision |
Online Access | https://ieeexplore.ieee.org/document/7410422 (publisher); https://arxiv.org/pdf/1504.06201 (open-access preprint) |
Abstract | Most of the current boundary detection systems rely exclusively on low-level features, such as color and texture. However, perception studies suggest that humans employ object-level reasoning when judging if a particular pixel is a boundary. Inspired by this observation, in this work we show how to predict boundaries by exploiting object-level features from a pretrained object-classification network. Our method can be viewed as a "High-for-Low" approach where high-level object features inform the low-level boundary detection process. Our model achieves state-of-the-art performance on an established boundary detection benchmark and it is efficient to run. Additionally, we show that due to the semantic nature of our boundaries we can use them to aid a number of high-level vision tasks. We demonstrate that using our boundaries we improve the performance of state-of-the-art methods on the problems of semantic boundary labeling, semantic segmentation and object proposal generation. We can view this process as a "Low-for-High" scheme, where low-level boundaries aid high-level vision tasks. Thus, our contributions include a boundary detection system that is accurate, efficient, generalizes well to multiple datasets, and is also shown to improve existing state-of-the-art high-level vision methods on three distinct tasks. |
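To make the "High-for-Low" idea from the abstract concrete, the sketch below shows one way features from a pretrained object-classification network can drive a per-pixel boundary predictor. It is a minimal illustration under assumptions, not the paper's implementation: the VGG-16 backbone, the specific tapped layers, the 1x1-convolution head, and the class name `HighForLowSketch` are all choices made here for demonstration (and loading pretrained weights requires a torchvision download).

```python
# Minimal "High-for-Low" sketch: per-pixel boundary scores from features of a
# pretrained classification CNN. Illustrative assumptions only (VGG-16 backbone,
# hand-picked layers, 1x1-conv head); not the authors' exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class HighForLowSketch(nn.Module):
    def __init__(self, tap_layers=(3, 8, 15, 22, 29)):  # ReLU outputs of the 5 VGG-16 conv blocks
        super().__init__()
        self.backbone = vgg16(weights="IMAGENET1K_V1").features.eval()
        for p in self.backbone.parameters():
            p.requires_grad_(False)          # keep the object-classification features fixed
        self.tap_layers = set(tap_layers)
        feat_channels = 64 + 128 + 256 + 512 + 512   # channels of the tapped layers above
        self.head = nn.Conv2d(feat_channels, 1, kernel_size=1)  # per-pixel boundary logit

    def forward(self, img):                  # img: (B, 3, H, W), ImageNet-normalized
        h, w = img.shape[-2:]
        feats, x = [], img
        for i, layer in enumerate(self.backbone):
            x = layer(x)
            if i in self.tap_layers:
                # upsample each high-level feature map back to pixel resolution
                feats.append(F.interpolate(x, size=(h, w), mode="bilinear",
                                           align_corners=False))
        hypercolumn = torch.cat(feats, dim=1)         # object-level features per pixel
        return torch.sigmoid(self.head(hypercolumn))  # (B, 1, H, W) boundary probabilities


if __name__ == "__main__":
    model = HighForLowSketch()
    with torch.no_grad():
        boundary_map = model(torch.randn(1, 3, 224, 224))
    print(boundary_map.shape)  # torch.Size([1, 1, 224, 224])
```

In this reading, the "high-level" features come from a network trained only for object classification, and a small head turned toward the "low-level" task of boundary detection; training that head against ground-truth boundaries would be the next step.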