A PROFICIENT APPROACH FOR FACSIMILE DETECTION

Bibliographic Details
Published in: I-Manager's Journal on Computer Science, Vol. 4, No. 3, p. 11
Main Authors: M. Sreelekha, K. Bhaskar Naik
Format: Journal Article
Language: English
Published: Nagercoil: iManager Publications, 01.09.2016

Summary: Nowadays the accuracy of databases is critically important: accurate data is essential for maintaining a database in the current IT-based economy, and many organizations rely on their databases to carry out day-to-day operations. Consequently, much research has addressed duplicate detection, also known as entity resolution, facsimile recognition, and various other names, which focuses mainly on pair selection to increase both efficiency and recall. Duplicate detection is the process of recognizing multiple representations of the same real-world entity. Among indexing algorithms, progressive duplicate detection is a novel approach that sorts the given dataset by a defined sorting key and compares the records within a sliding window. To obtain results even faster than traditional approaches, a new algorithm is proposed that combines progressive and scalable approaches to find duplicates progressively and in parallel. The algorithm maximizes the efficiency of finding duplicates within a limited execution time without losing effectiveness.
ISSN: 2347-2227, 2347-6141
DOI: 10.26634/jcom.4.3.8285
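
Illustration: the summary describes a windowed, sorted-neighborhood comparison, i.e. sort the records by a key, then compare each record only with its neighbors inside a small sliding window rather than against every other record. The Python sketch below shows that idea under stated assumptions; the record fields, sorting key, similarity measure, and threshold are hypothetical stand-ins and are not taken from the paper itself.

# Minimal sketch of sorted-neighborhood duplicate detection.
# Assumptions (not from the paper): records are dicts with "name" and
# "city" fields, the sorting key concatenates them, and a generic
# string-similarity ratio stands in for the actual matching measure.
from difflib import SequenceMatcher

def sorting_key(record):
    # Assumed key: lower-cased name plus city.
    return (record["name"] + record["city"]).lower()

def is_duplicate(a, b, threshold=0.9):
    # Generic string similarity in place of the paper's matcher.
    text_a = " ".join(a.values())
    text_b = " ".join(b.values())
    return SequenceMatcher(None, text_a, text_b).ratio() >= threshold

def sorted_neighborhood(records, window=3):
    # Yield candidate duplicate pairs found inside the sliding window,
    # avoiding the quadratic all-pairs comparison.
    ordered = sorted(records, key=sorting_key)
    for i in range(len(ordered)):
        for j in range(i + 1, min(i + window, len(ordered))):
            if is_duplicate(ordered[i], ordered[j]):
                yield ordered[i], ordered[j]

records = [
    {"name": "John Smith", "city": "Nagercoil"},
    {"name": "Jon Smith", "city": "Nagercoil"},
    {"name": "Alice Kumar", "city": "Chennai"},
]
for pair in sorted_neighborhood(records):
    print("possible duplicate:", pair)

With a window of w, each record is compared with at most w - 1 successors, so the dominant cost is the initial sort rather than a quadratic number of comparisons; the progressive variant the summary refers to would additionally prioritize comparisons so that duplicates surface early within a limited execution time.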