A method to refine feature vectors for combining multiple neural networks

Bibliographic Details
Main Authors: Jeevan, A. N. Gnana; Chakrabarti, Prasun; Pallathadka, Harikumar; Nagaraju, A.; Pai, K. Baba; Das, Anirban; Kumar, Ajay; Neware, Rahul; Ghosh, Amrit; Gowda V., Dankan; Murthy, Ravaleedhar Reddy; Yuvaraj, D.
Format: Patent
Language: English
Published: 02.12.2021

Summary: In the commercial sector, data fusion is a primary approach for extracting salient features and thereby predicting future data points from a financial-growth perspective. In image processing in particular, the fusion principle is commonly used to detect and analyse the depth of abnormality points present in a dataset (image or structured data), which enables early prediction. The fused-image method is simple and random in nature, has minimum redundancy, and carries more salient features about the object information. It provides anatomical structure information about abnormal points, which guides further scanning of the data to obtain detailed information about those points. Hence, the image-fusion approach is used in the medical field to obtain detailed information on both abnormal structures and glucose (liquid-flow) levels in the human body. In this research work, refined feature vectors are generated using multiple neural networks whose training sample set is derived from the pre-defined guidelines net list (Pre-G-Net) method. The entire network is fused across multiple layers, and each layer performs regular weight-vector updates in both the fused layer and the classifier layers. First, the Pre-G-Net method follows an optimistic training phase to extract accurate features, which is achieved by updating the weights in each layer on a regular basis. The performance loss is then estimated from the weight difference vector of the two layers. The proposed method achieves an accuracy of up to 90%, which is significantly higher than a conventional CNN model (feature extraction on fused images only). Similarly, the proposed method improves the processing efficiency rate by 10% compared with a DNN (feature extraction on the structured dataset only).
Bibliography: Application Number: AU20210106658
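
Note: the summary describes the method only at a high level. The following is a minimal, illustrative sketch of the general idea it outlines (a multi-branch network whose features are fused, with the weight changes of the fusion and classifier layers tracked after each update), not the patented Pre-G-Net implementation. PyTorch and all names (FusedNet, weight_drift, train_step, the branch dimensions) are assumptions made purely for illustration.

# Illustrative sketch only: two feature branches (image-derived and structured
# data) are fused into a refined feature vector; the drift of the fusion- and
# classifier-layer weights is measured after each update as a rough stand-in
# for the "weight difference vector" mentioned in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedNet(nn.Module):
    def __init__(self, img_dim, tab_dim, fused_dim=64, n_classes=2):
        super().__init__()
        # One branch per input modality (hypothetical split)
        self.image_branch = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.tabular_branch = nn.Sequential(nn.Linear(tab_dim, 128), nn.ReLU())
        # Fusion layer combines the branch outputs into a refined feature vector
        self.fusion = nn.Linear(256, fused_dim)
        # Classifier layer maps the fused features to class scores
        self.classifier = nn.Linear(fused_dim, n_classes)

    def forward(self, x_img, x_tab):
        f = torch.cat([self.image_branch(x_img), self.tabular_branch(x_tab)], dim=1)
        fused = F.relu(self.fusion(f))
        return self.classifier(fused), fused


def weight_drift(layer, previous):
    # L2 norm of the change in a layer's weights since the last snapshot
    return (layer.weight.detach() - previous).norm().item()


def train_step(model, optimizer, x_img, x_tab, y, prev_fusion_w, prev_cls_w):
    logits, _ = model(x_img, x_tab)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Track how much the fusion and classifier weights moved in this update
    drift = {
        "fusion": weight_drift(model.fusion, prev_fusion_w),
        "classifier": weight_drift(model.classifier, prev_cls_w),
    }
    return loss.item(), drift


if __name__ == "__main__":
    model = FusedNet(img_dim=32, tab_dim=16)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_img, x_tab = torch.randn(8, 32), torch.randn(8, 16)
    y = torch.randint(0, 2, (8,))
    prev_f = model.fusion.weight.detach().clone()
    prev_c = model.classifier.weight.detach().clone()
    loss, drift = train_step(model, opt, x_img, x_tab, y, prev_f, prev_c)
    print(f"loss={loss:.4f}, weight drift={drift}")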