Superpixel-Based Interactive Classification of Very High Resolution Images


Bibliographic Details
Published in: 2014 27th SIBGRAPI Conference on Graphics, Patterns and Images, pp. 173-179
Main Authors: Vargas, John E.; Saito, Priscila T. M.; Falcao, Alexandre X.; De Rezende, Pedro J.; Dos Santos, Jefersson A.
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.08.2014

Summary: Very high resolution (VHR) images are large datasets for pixel annotation, a process that has depended on the supervised training of an effective pixel classifier. Active learning techniques have mitigated this problem, but pixel descriptors are limited to local image information, and the large number of pixels makes the response time to the user's actions during active learning impractical. To circumvent this problem, we present an active learning strategy that relies on superpixel descriptors and a priori dataset reduction. First, we compare VHR image annotation using superpixel- and pixel-based classifiers, both designed with the same state-of-the-art active learning technique, Multi-Class Level Uncertainty (MCLU). Even with the dataset reduction provided by the superpixel representation, MCLU remains infeasible for user interaction. Therefore, we propose a technique that considerably reduces the superpixel dataset for active learning. Moreover, we subdivide the reduced dataset into a list of subsets with random sample rearrangement to gain both speed and sample diversity during the active learning process.
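The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the per-class score matrix, and the random seed are illustrative assumptions. MCLU ranks unlabeled samples by the margin between the two largest class scores (smaller margin = more uncertain), and the reduced pool is shuffled and split into subsets to trade off speed and sample diversity.

```python
import numpy as np

def mclu_select(decision_values, batch_size):
    """Multi-Class Level Uncertainty sampling (sketch).

    decision_values: (n_samples, n_classes) array of per-class
    classifier scores for the unlabeled pool (e.g. SVM decision values).
    Returns the indices of the batch_size most uncertain samples,
    i.e. those with the smallest gap between the top two class scores.
    """
    sorted_scores = np.sort(decision_values, axis=1)
    margin = sorted_scores[:, -1] - sorted_scores[:, -2]
    return np.argsort(margin)[:batch_size]

def shuffle_partition(n_samples, n_subsets, seed=0):
    """Randomly rearrange sample indices and split them into subsets,
    so each active-learning iteration scans only one small subset."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)
    return np.array_split(perm, n_subsets)

# Hypothetical scores for 3 unlabeled superpixels over 3 classes:
scores = np.array([[0.90, 0.10, 0.00],
                   [0.50, 0.45, 0.05],
                   [0.40, 0.30, 0.30]])
print(mclu_select(scores, 2))   # samples with the two smallest margins
print(shuffle_partition(10, 3)) # 10 indices shuffled into 3 subsets
```

Per the abstract, MCLU alone is too slow on the full pool; the a priori reduction plus the subset partition keeps each query round responsive for interactive use.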
Bibliography:ObjectType-Article-2
SourceType-Scholarly Journals-1
ObjectType-Conference-1
ObjectType-Feature-3
SourceType-Conference Papers & Proceedings-2
ISSN: 1530-1834, 2377-5416
DOI: 10.1109/SIBGRAPI.2014.49