Finding and labeling the subject of a captioned depictive natural photograph
Published in | IEEE Transactions on Knowledge and Data Engineering; Vol. 14, No. 1; pp. 202-207
---|---
Main Author |
Format | Journal Article
Language | English
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.); 01.01.2002
Subjects |
Summary: We address the problem of finding the subject of a photographic image intended to illustrate some physical object or objects ("depictive") and taken by usual optical means without magnification ("natural"). This could help in developing digital image libraries, since important image properties such as subject size and color are not usually mentioned in accompanying captions and can help rank photograph retrievals for a user. We explore an approach that identifies the "visual focus" of the image and the "depicted concepts" in the caption and connects them. The visual focus is determined using eight domain-independent characteristics of regions in the segmented image, and the caption depiction is identified by a set of rules applied to the parsed and interpreted caption. The visual-focus determination also performs combinatorial optimization on sets of regions to find the set that best satisfies the focus criteria. Experiments on 100 randomly selected image-caption pairs show significant improvement in retrieval precision over simpler methods and, in particular, emphasize the value of segmenting the image.
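The summary's "combinatorial optimization on sets of regions" can be sketched as an exhaustive search over subsets of segmented regions, scored by a focus criterion. The paper does not list its eight region characteristics here, so the sketch below uses two hypothetical ones (relative size and distance of the region centroid from the image center) and an invented scoring function; it illustrates the search structure only, not the authors' actual criteria.

```python
from itertools import combinations

# Hypothetical regions from a segmented image. "size" is the fraction of the
# image area; "center_dist" is the normalized distance of the region centroid
# from the image center (0 = dead center, 1 = corner). These two features
# stand in for the paper's eight characteristics, which are not given here.
regions = [
    {"id": "sky",    "size": 0.35, "center_dist": 0.85},
    {"id": "person", "size": 0.30, "center_dist": 0.10},
    {"id": "dog",    "size": 0.20, "center_dist": 0.15},
    {"id": "grass",  "size": 0.15, "center_dist": 0.90},
]

def focus_score(subset):
    """Toy focus criterion (an assumption, not the paper's): favor region
    sets that are large in total area but stay near the image center."""
    total_size = sum(r["size"] for r in subset)
    mean_dist = sum(r["center_dist"] for r in subset) / len(subset)
    return total_size * (1.0 - mean_dist) ** 2

def best_focus(regions):
    """Exhaustively score every non-empty subset of regions and return the
    best one. Exponential in the region count, but segmentation typically
    yields few enough regions for this to be feasible."""
    return max(
        (subset for k in range(1, len(regions) + 1)
         for subset in combinations(regions, k)),
        key=focus_score,
    )

best = best_focus(regions)
print([r["id"] for r in best])  # → ['person', 'dog']
```

With these toy numbers the search picks the central person-and-dog pair over the large but peripheral sky and grass regions, which is the kind of trade-off a joint (rather than per-region) focus criterion is meant to capture.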
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/69.979983