Holistic Feature Extraction for Automatic Image Annotation
Published in | 2011 Fifth FTRA International Conference on Multimedia and Ubiquitous Engineering, 28-30 June 2011, pp. 59-66 |
Main Authors | Sarin, S.; Fahrmair, M.; Wagner, M. (Smart & Secure Services Research Group, DOCOMO Euro-Labs, Munich, Germany); Kameyama, W. (Graduate School of Global Information & Telecommunication Studies (GITS), Waseda University, Saitama, Japan) |
Format | Conference Proceeding |
Language | English, Japanese |
Published | IEEE, 01.06.2011 |
Discipline | Computer Science |
Subjects | automatic image annotation; background; Feature extraction; Gabor filters; Games; holistic feature extraction; Humans; Image color analysis; Image segmentation; k nearest neighbors (KNN); saliency regions; Training |
Online Access | https://ieeexplore.ieee.org/document/5992172 |
ISBN | 1457712288; 9781457712289 |
EISBN | 0769544703; 9780769544700 |
DOI | 10.1109/MUE.2011.22 |
Abstract | Automating the annotation process of digital images is a crucial step towards efficient and effective management of this increasingly high volume of content. It is, nevertheless, an extremely challenging task for the research community. One of the main bottlenecks is the lack of integrity and diversity of features. We solve this problem by proposing to utilize 43 image features that cover the holistic content of the image, from global to subject, background, and scene. In our approach, saliency regions and background are separated without prior knowledge, and each of them, together with the whole image, is treated independently for feature extraction. Extensive experiments were designed to show the efficiency and effectiveness of our approach. We chose two publicly available, manually annotated datasets with diverse images for our experiments, namely the Corel5k and ESP Game datasets, which contain 5,000 images with 260 keywords and 20,770 images with 268 keywords, respectively. Through empirical experiments, it is confirmed that by using our features with a state-of-the-art technique, we achieve superior performance on many metrics, particularly in auto-annotation. |
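
The abstract describes separating a salient (subject) region from the background without prior knowledge and extracting features independently from the whole image, the subject, and the background. The sketch below is an illustration of that general idea only, not the authors' actual 43-feature pipeline: it assumes OpenCV's spectral-residual saliency detector (opencv-contrib-python) and simple HSV colour histograms, and the function names and input path are hypothetical.

```python
# Illustrative sketch: split an image into salient "subject" and background
# regions with a generic saliency detector, then extract one descriptor per
# region plus a global one, and concatenate them. Not the paper's method.
import cv2
import numpy as np

def region_histogram(image, mask=None, bins=8):
    """L1-normalised HSV colour histogram over an optional region mask."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], mask, [bins, bins, bins],
                        [0, 180, 0, 256, 0, 256]).flatten()
    return hist / (hist.sum() + 1e-9)

def holistic_features(image):
    """Concatenate global, subject (salient), and background descriptors."""
    # Saliency map in [0, 1] from OpenCV's spectral-residual detector.
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")
    sal_u8 = (sal_map * 255).astype("uint8")
    # Otsu threshold separates salient subject pixels from background pixels.
    _, subject_mask = cv2.threshold(sal_u8, 0, 255,
                                    cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    background_mask = cv2.bitwise_not(subject_mask)
    return np.concatenate([
        region_histogram(image),                   # global
        region_histogram(image, subject_mask),     # salient subject
        region_histogram(image, background_mask),  # background
    ])

if __name__ == "__main__":
    img = cv2.imread("example.jpg")  # hypothetical input path
    if img is None:
        raise SystemExit("place a test image at example.jpg first")
    print(holistic_features(img).shape)  # (3 * 8**3,) = (1536,)
```

In an annotation setting such as the one the abstract evaluates on Corel5k and ESP Game, a vector like this would typically be fed to a nearest-neighbour (KNN) label-propagation model; that step is omitted here.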