COCO-Stuff: Thing and Stuff Classes in Context

Bibliographic Details
Published in: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1209 - 1218
Main Authors: Caesar, Holger; Uijlings, Jasper; Ferrari, Vittorio
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2018
Abstract: Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While many classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important, as they help explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context, we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed-versus-quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.
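One of the analyses the abstract mentions is surface cover: how much of an image is occupied by stuff versus thing classes. A minimal sketch of that measurement over a pixel-wise label map is shown below; the class-id ranges used (things 1-91, stuff 92-182, 255 for unlabeled pixels) follow the commonly cited COCO-Stuff convention and are an assumption here, to be checked against the released dataset. The function name `surface_cover` and the toy label map are hypothetical, for illustration only.

```python
# Sketch: fraction of labeled pixels covered by thing vs. stuff classes
# in a COCO-Stuff-style pixel-wise label map (id ranges assumed, see above).
import numpy as np

THING_IDS = np.arange(1, 92)    # assumed ids for the 91 COCO thing classes
STUFF_IDS = np.arange(92, 183)  # assumed ids for the 91 stuff classes
UNLABELED = 255                 # assumed "no annotation" marker

def surface_cover(label_map: np.ndarray) -> dict:
    """Fraction of labeled pixels covered by thing and by stuff classes."""
    total = (label_map != UNLABELED).sum()
    thing = np.isin(label_map, THING_IDS).sum()
    stuff = np.isin(label_map, STUFF_IDS).sum()
    return {"thing": float(thing / total), "stuff": float(stuff / total)}

# Toy 2x2 label map: one thing pixel (id 3), three stuff pixels (id 124).
toy = np.array([[3, 124], [124, 124]])
print(surface_cover(toy))  # {'thing': 0.25, 'stuff': 0.75}
```

Aggregating these per-image fractions over a dataset gives the kind of thing-versus-stuff cover statistic the paper reports, without any dependence on the annotation tooling itself.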
CODEN: IEEPAD
DOI: 10.1109/CVPR.2018.00132
Discipline: Applied Sciences
EISBN: 9781538664209, 1538664208
EISSN: 1063-6919
End Page: 1218
External Document ID: 8578230
Open Access Link: https://www.research.ed.ac.uk/en/publications/ae36a7c8-e2b6-4e86-ab83-7ea27f632bb8
Page Count: 10
Publication Title Abbreviation: CVPR
Start Page: 1209
Subject Terms: Automobiles; Bridges; Image segmentation; Protocols; Semantics; Shape; Vegetation mapping
URI: https://ieeexplore.ieee.org/document/8578230