Automatic image annotation system using deep learning method to analyse ambiguous images


Bibliographic Details
Published in: Periodicals of Engineering and Natural Sciences (PEN), Vol. 11, No. 2, pp. 176-185
Main Author: Ali Abbas Al-Shammary, et al.
Format: Journal Article
Language: English
Published: 05.04.2023
ISSN: 2303-4521
EISSN: 2303-4521
DOI: 10.21533/pen.v11.i2.110


Abstract: Image annotation has received considerable attention recently owing to the rapid growth of image data. Together with image analysis and interpretation, image annotation, which can semantically describe images, has a variety of uses in allied industries, including urban-planning engineering. Even with big-data and image-identification technologies, manually analyzing a diverse variety of photos is challenging. Improvements to Automated Image Annotation (AIA) labeling systems have been the subject of several scholarly studies. In this paper, the authors discuss how to use image databases and the AIA system. The proposed method extracts image features from photos using an improved VGG-19 and then uses nearby features to automatically predict picture labels. The proposed study accounts for correlations between labels and images as well as correlations within images. The number of labels is also estimated using a label quantity prediction (LQP) model, which improves label-prediction precision. The suggested method addresses automatic annotation methodologies for pixel-level images of unusual objects while incorporating supervisory information via interactive spherical skins. Real objects that were converted into metadata and identified as belonging to pre-existing categories were categorized by the authors using a supervised deep learning approach, a convolutional neural network (CNN). Certain object-monitoring systems strive for a high detection rate (true positives) together with a low false-alarm rate (false positives). To speed up annotation, the authors built a KD-tree for k-nearest-neighbor (KNN) search, taking the collected image background into account. The proposed method transforms the conventional two-class object-detection problem into a multi-class classification problem, relaxing the independent and identically distributed assumptions of machine learning methodologies.
It is also simple to use because it requires only pixel information and ignores other supporting elements from various color schemes. Five different AIA approaches are compared along the following factors: main idea, significant contribution, computational framework, computing speed, and annotation accuracy. A set of publicly accessible photo datasets that serve as benchmarks for assessing AIA methods is also provided, along with a brief description of four common evaluation metrics.
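The annotation pipeline described in the abstract (deep features, KD-tree-accelerated KNN lookup, and an LQP-style estimate of how many labels to emit) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: random vectors stand in for the improved-VGG-19 features, and the vocabulary, label sets, and `annotate` function are all hypothetical.

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(0)

# Synthetic "image features": 200 images, 64-dim vectors
# (stand-ins for VGG-19 activations in the real system).
features = rng.normal(size=(200, 64))

# Each training image carries a small set of label indices (multi-label).
vocab = ["sky", "road", "building", "tree", "car"]
train_labels = [
    set(rng.choice(len(vocab), size=rng.integers(1, 4), replace=False))
    for _ in range(len(features))
]

# A KD-tree accelerates the nearest-neighbour lookup over the feature space.
tree = KDTree(features)

def annotate(query, k=5):
    """Predict labels for one feature vector from its k nearest neighbours."""
    _, idx = tree.query(query, k=k)
    neighbour_sets = [train_labels[i] for i in np.atleast_1d(idx)]
    # LQP-style step: estimate how many labels to emit
    # from the label counts of the neighbours.
    n_labels = max(1, round(np.mean([len(s) for s in neighbour_sets])))
    # Vote: keep the labels that occur most often among the neighbours.
    votes = {}
    for s in neighbour_sets:
        for lab in s:
            votes[lab] = votes.get(lab, 0) + 1
    top = sorted(votes, key=votes.get, reverse=True)[:n_labels]
    return [vocab[i] for i in top]

print(annotate(features[0]))
```

In the paper's setting the features would come from the improved VGG-19 rather than a random generator, but the KD-tree query and the neighbour-vote structure would be the same shape.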
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)