Weakly Supervised Large Scale Object Localization with Multiple Instance Learning and Bag Splitting

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 38, No. 2, pp. 405-416
Main Authors: Ren, Weiqiang; Huang, Kaiqi; Tao, Dacheng; Tan, Tieniu
Format: Journal Article
Language: English
Published: United States, IEEE, 01.02.2016

Summary: Localizing objects of interest in images when provided with only image-level labels is a challenging visual recognition task. Previous efforts have required carefully designed features and have difficulty handling images with cluttered backgrounds. Scaling up to large datasets also poses a challenge for applying these methods in real applications. In this paper, we propose an efficient and effective learning framework called MILinear, which learns an object localization model from large-scale data without using bounding box annotations. We integrate rich general prior knowledge into the learning model using a large pre-trained convolutional network. Moreover, to reduce ambiguity in positive images, we present a bag-splitting algorithm that iteratively generates new negative bags from positive ones. We evaluate the proposed approach on the challenging Pascal VOC 2007 dataset, where our method outperforms other state-of-the-art methods by a large margin; some results are even comparable to fully supervised models trained with bounding box annotations. To further demonstrate scalability, we also present detection results on the ILSVRC 2013 detection dataset, where our method outperforms the supervised deformable part-based model without using box annotations.
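
The summary describes multiple instance learning over region features from a pre-trained convolutional network, combined with iterative bag splitting. Below is a minimal sketch of such a loop, assuming each bag is an array of fixed-length region feature vectors for one image; the classifier choice (LinearSVC) and the names n_iters and split_threshold are illustrative assumptions, not the paper's actual MILinear formulation.

    import numpy as np
    from sklearn.svm import LinearSVC

    def mil_bag_splitting(pos_bags, neg_bags, n_iters=5, split_threshold=-0.5):
        """Alternate between (1) training a linear classifier on the current
        instance labels and (2) splitting confidently negative instances out
        of positive bags into new negative bags."""
        pos_bags = [np.asarray(b, dtype=float) for b in pos_bags]
        neg_bags = [np.asarray(b, dtype=float) for b in neg_bags]
        clf = LinearSVC(C=1.0)

        for _ in range(n_iters):
            # Positive examples: the top-scoring instance of each positive
            # bag (before the first fit, simply take the first instance).
            if hasattr(clf, "coef_"):
                pos_X = np.array([b[np.argmax(clf.decision_function(b))]
                                  for b in pos_bags])
            else:
                pos_X = np.array([b[0] for b in pos_bags])
            neg_X = np.vstack(neg_bags)
            X = np.vstack([pos_X, neg_X])
            y = np.concatenate([np.ones(len(pos_X)), -np.ones(len(neg_X))])
            clf.fit(X, y)

            # Bag splitting: instances of a positive bag scoring well below
            # the margin form a new negative bag, reducing label ambiguity.
            new_pos_bags = []
            for b in pos_bags:
                scores = clf.decision_function(b)
                keep = scores > split_threshold
                keep[np.argmax(scores)] = True  # never empty a positive bag
                if (~keep).any():
                    neg_bags.append(b[~keep])
                new_pos_bags.append(b[keep])
            pos_bags = new_pos_bags
        return clf

In the setting the summary describes, the instances would be object proposals described by features from a large pre-trained convolutional network, and one such classifier would be learned per object class.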
ISSN: 0162-8828, 1939-3539, 2160-9292
DOI: 10.1109/TPAMI.2015.2456908