Holistic Feature Extraction for Automatic Image Annotation
Published in | 2011 Fifth FTRA International Conference on Multimedia and Ubiquitous Engineering, 28-30 June 2011, pp. 59-66 |
---|---|
Main Authors | , , , |
Format | Conference Proceeding |
Language | English; Japanese |
Published | IEEE, 01.06.2011 |
ISBN | 1457712288; 9781457712289 |
DOI | 10.1109/MUE.2011.22 |
Summary: | Automating the annotation of digital images is a crucial step toward efficient and effective management of an increasingly high volume of content. It is, nevertheless, an extremely challenging task for the research community. One of the main bottlenecks is the lack of integrity and diversity of features. We address this problem by proposing 43 image features that cover the holistic content of the image, from global to subject, background, and scene. In our approach, salient regions and background are separated without prior knowledge, and each of them, together with the whole image, is treated independently for feature extraction. Extensive experiments were designed to show the efficiency and effectiveness of our approach. We chose two publicly available, manually annotated datasets with images of diverse nature, namely the Corel5k and ESP Game datasets; they contain 5,000 images with 260 keywords and 20,770 images with 268 keywords, respectively. The experiments confirm that using our features with a state-of-the-art technique achieves superior performance on many metrics, particularly in auto-annotation. |
---|---|
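The paper itself does not publish code, but the pipeline the abstract describes (separate the salient subject from the background without prior knowledge, then extract features independently from the whole image, the subject, and the background) can be loosely illustrated. The sketch below is an assumption-laden stand-in: it uses a naive color-contrast saliency map and simple 8-bin color histograms in place of the authors' actual saliency method and 43 features, and all function names are illustrative.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    # pixels: (N, 3) array in [0, 255]; one normalized histogram per
    # channel, concatenated into a single feature vector.
    feats = []
    for c in range(3):
        h, _ = np.histogram(pixels[:, c], bins=bins, range=(0, 256))
        feats.append(h / max(len(pixels), 1))
    return np.concatenate(feats)

def holistic_features(image, bins=8):
    # image: (H, W, 3) uint8 array.
    flat = image.reshape(-1, 3).astype(float)
    # Naive saliency stand-in (NOT the paper's method): distance of each
    # pixel's color from the global mean color of the image.
    saliency = np.linalg.norm(flat - flat.mean(axis=0), axis=1)
    mask = saliency > saliency.mean()  # split subject vs. background
    subject, background = flat[mask], flat[~mask]
    # Global, subject, and background regions are featurized independently,
    # mirroring the holistic global/subject/background decomposition.
    return np.concatenate([
        color_histogram(flat, bins),        # global features
        color_histogram(subject, bins),     # subject (salient) features
        color_histogram(background, bins),  # background features
    ])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
feats = holistic_features(img)
print(feats.shape)  # 3 regions x 3 channels x 8 bins = 72 values
```

A real implementation would replace the color-contrast saliency with a proper saliency detector and the histograms with the richer feature set the paper evaluates; the decomposition into independently featurized regions is the part this sketch is meant to convey.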