A deep learning approach for image and text classification using neutrosophy
Published in: International Journal of Information Technology (Singapore. Online), Vol. 16, no. 2, pp. 853–859
Main Authors: , ,
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore; Springer Nature B.V., 01.02.2024
Summary: The rapid growth of data on the web and on personal computers is a direct result of new technologies and devices. Much of this information is captured in several different modalities (text, image, video, etc.), and such information is also vital for e-commerce websites: the products on these sites feature both images and textual descriptions, making them multimodal in nature. Earlier classification and information retrieval algorithms focused largely on a single modality. This study leverages multimodal data for categorization, utilising neutrosophic fuzzy sets to manage uncertainty in information retrieval tasks. The work employs image and text data and, inspired by prior approaches that embed text onto an image, classifies the resulting images with neutrosophic classification algorithms. Neutrosophic convolutional neural networks (NCNNs) are used to learn feature representations of the generated images for classification tasks. We demonstrate how an NCNN-based pipeline can be applied to learn representations of this fusion method. Traditional convolutional neural networks are vulnerable to unexpected noisy conditions at test time, and as a result their performance on noisy data degrades; the neutrosophic representation is intended to mitigate this. Comparing our technique against single-modality baselines on a multimodal classification dataset yields good results.
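The record gives no implementation details, so the following is only a minimal sketch of the kind of pipeline the summary describes, under two stated assumptions: the text description is rendered onto the product image (the "embedding text over an image" fusion), and the fused image is then mapped into the neutrosophic domain as truth (T), indeterminacy (I), and falsity (F) channels using the common local-mean formulation from the neutrosophic image-processing literature. The function names (`fuse_text_onto_image`, `neutrosophic_channels`) and all parameters are hypothetical, not taken from the paper.

```python
import numpy as np
from PIL import Image, ImageDraw
from scipy.ndimage import uniform_filter

def fuse_text_onto_image(image_path, text, out_size=(224, 224)):
    """Render a text description onto its product image (hypothetical fusion step)."""
    img = Image.open(image_path).convert("RGB").resize(out_size)
    draw = ImageDraw.Draw(img)
    # Draw the first part of the description in the top-left corner.
    draw.multiline_text((4, 4), text[:200], fill=(255, 255, 255))
    return img

def neutrosophic_channels(gray, window=5):
    """Map a grayscale image to neutrosophic T/I/F channels.

    T: local-mean intensity, normalized to [0, 1] (truth membership).
    I: normalized absolute deviation from the local mean (indeterminacy).
    F: 1 - T (falsity membership).
    """
    g = gray.astype(np.float64)
    local_mean = uniform_filter(g, size=window)
    eps = 1e-12  # guard against division by zero on flat images
    T = (local_mean - local_mean.min()) / (local_mean.max() - local_mean.min() + eps)
    delta = np.abs(g - local_mean)
    I = (delta - delta.min()) / (delta.max() - delta.min() + eps)
    F = 1.0 - T
    return np.stack([T, I, F], axis=-1)  # H x W x 3 input for a CNN

if __name__ == "__main__":
    # Example: build one fused, neutrosophic-domain training sample.
    fused = fuse_text_onto_image("product.jpg", "Wireless mouse, 2.4 GHz, ergonomic")
    tif = neutrosophic_channels(np.asarray(fused.convert("L")))
```

Feeding the stacked T/I/F tensor to a CNN gives the network an explicit indeterminacy channel per pixel, which is the usual rationale for the claim that neutrosophic models degrade more gracefully than standard CNNs on noisy inputs.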
ISSN: 2511-2104 (print); 2511-2112 (electronic)
DOI: 10.1007/s41870-023-01529-8