360-Indoor: Towards Learning Real-World Objects in 360° Indoor Equirectangular Images
Published in | Proceedings / IEEE Workshop on Applications of Computer Vision, pp. 834 - 842
---|---
Main Authors | Chou, Shih-Han; Sun, Cheng; Chang, Wen-Yen; Hsu, Wan-Ting; Sun, Min; Fu, Jianlong
Format | Conference Proceeding
Language | English
Published | IEEE, 01.03.2020
Online Access | https://ieeexplore.ieee.org/document/9093262
Abstract | While there are several widely used object detection datasets, current computer vision algorithms are still limited to conventional images, which confine the field of view to a narrow region. In contrast, 360° images provide a complete view of the surrounding scene. In this paper, our goal is to provide a standard dataset to facilitate the vision and machine learning communities in the 360° domain. To this end, we present a real-world 360° panoramic object detection dataset, 360-Indoor, a new benchmark for visual object detection and class recognition in 360° indoor images. It is built by gathering images of complex indoor scenes containing common objects and intensively annotating them with bounding field-of-views. In addition, 360-Indoor has several distinct properties: (1) the largest number of categories (37 labels in total); (2) the most complete annotations on average (27 bounding boxes per image). The 37 selected object categories are all common in indoor scenes. With around 3k images and 90k labels in total, 360-Indoor is the largest dataset for detection in 360° images. Finally, extensive experiments with state-of-the-art methods for both classification and detection are provided. We will release this dataset in the near future. |
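The abstract describes annotations as bounding field-of-views (BFoVs) on equirectangular images rather than ordinary pixel boxes. The sketch below illustrates one plausible way such an annotation could be represented and roughly mapped to pixel coordinates; the field names (`center_lon_deg`, `fov_h_deg`, etc.) and the conversion are illustrative assumptions, not the dataset's actual schema.

```python
# Minimal sketch, assuming a BFoV is given by the spherical direction of its
# center plus horizontal/vertical angular extents. Field names are hypothetical.
bfov = {
    "category": "chair",
    "center_lon_deg": 30.0,   # longitude of the box center, in [-180, 180]
    "center_lat_deg": -10.0,  # latitude of the box center, in [-90, 90]
    "fov_h_deg": 40.0,        # horizontal angular extent of the object
    "fov_v_deg": 25.0,        # vertical angular extent of the object
}

def equirect_xy(lon_deg, lat_deg, width, height):
    """Standard equirectangular mapping from sphere angles to pixel coordinates."""
    x = (lon_deg / 360.0 + 0.5) * width
    y = (0.5 - lat_deg / 180.0) * height
    return x, y

def bfov_to_pixel_box(ann, width, height):
    """Rough axis-aligned pixel box for a BFoV.

    Ignores polar distortion and the wrap-around at the +/-180 degree seam,
    so it is only an approximation for visualization.
    """
    cx, cy = equirect_xy(ann["center_lon_deg"], ann["center_lat_deg"], width, height)
    half_w = ann["fov_h_deg"] / 360.0 * width / 2.0
    half_h = ann["fov_v_deg"] / 180.0 * height / 2.0
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

if __name__ == "__main__":
    # e.g. a 1920x960 equirectangular frame
    print(bfov_to_pixel_box(bfov, 1920, 960))
```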
Author | Chou, Shih-Han (National Tsing Hua University, Hsinchu); Sun, Cheng (National Tsing Hua University, Hsinchu); Chang, Wen-Yen (National Tsing Hua University, Hsinchu); Hsu, Wan-Ting (National Tsing Hua University, Hsinchu); Sun, Min (National Tsing Hua University, Hsinchu); Fu, Jianlong (Microsoft Research, Beijing)
DOI | 10.1109/WACV45572.2020.9093262 |
Discipline | Applied Sciences |
EISBN | 1728165539 9781728165530 |
EISSN | 2642-9381 |
EndPage | 842 |
PageCount | 9 |
PublicationDate | 2020-03-01 |
PublicationTitle | Proceedings / IEEE Workshop on Applications of Computer Vision |
PublicationTitleAbbrev | WACV |
PublicationYear | 2020 |
Publisher | IEEE |
StartPage | 834 |
Title | 360-Indoor: Towards Learning Real-World Objects in 360° Indoor Equirectangular Images |
URI | https://ieeexplore.ieee.org/document/9093262 |