DocParser: End-to-end OCR-free Information Extraction from Visually Rich Documents
Main Authors | Dhouib, Mohamed; Bettaieb, Ghassen; Shabou, Aymen |
---|---|
Format | Journal Article (arXiv preprint) |
Language | English |
Published | 24.04.2023 |
Subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition |
DOI | 10.48550/arxiv.2304.12484 |
License | http://creativecommons.org/licenses/by/4.0 |
Online Access | https://arxiv.org/abs/2304.12484 |
Abstract | Information Extraction from visually rich documents is a challenging task that has gained a lot of attention in recent years due to its importance in several document-control-based applications and its widespread commercial value. The majority of the research work conducted on this topic to date follows a two-step pipeline. First, they read the text using an off-the-shelf Optical Character Recognition (OCR) engine; then they extract the fields of interest from the obtained text. The main drawback of these approaches is their dependence on an external OCR system, which can negatively impact both performance and computational speed. Recent OCR-free methods were proposed to address these issues. Inspired by their promising results, we propose in this paper an OCR-free end-to-end information extraction model named DocParser. It differs from prior end-to-end approaches by its ability to better extract discriminative character features. DocParser achieves state-of-the-art results on various datasets, while still being faster than previous works. |
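The abstract contrasts DocParser's single-model approach with the conventional two-step pipeline it aims to replace. As a point of reference, here is a minimal sketch of that two-step baseline, assuming pytesseract as the off-the-shelf OCR engine and a couple of hypothetical regex rules for the fields of interest; these choices and the sample file name are illustrative assumptions, not part of DocParser itself.

```python
# Minimal sketch of the conventional two-step pipeline described in the abstract:
# (1) read the document text with an off-the-shelf OCR engine,
# (2) extract the fields of interest from the obtained text.
# pytesseract and the regex rules below are illustrative choices, not DocParser.
import re

from PIL import Image
import pytesseract  # off-the-shelf OCR engine (assumed for this sketch)

# Hypothetical extraction rules for two fields on an invoice-like document.
FIELD_PATTERNS = {
    "total_amount": re.compile(r"total\s*[:\-]?\s*([\d.,]+)", re.IGNORECASE),
    "invoice_date": re.compile(r"date\s*[:\-]?\s*(\d{2}[./-]\d{2}[./-]\d{4})", re.IGNORECASE),
}


def extract_fields(image_path: str) -> dict:
    """Two-step baseline: OCR the page, then match field patterns on the raw text."""
    text = pytesseract.image_to_string(Image.open(image_path))  # step 1: external OCR
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():  # step 2: field extraction
        match = pattern.search(text)
        fields[name] = match.group(1) if match else None
    return fields


if __name__ == "__main__":
    print(extract_fields("invoice_sample.png"))  # hypothetical input image
```

An OCR-free model such as DocParser replaces both steps with a single network that maps the document image directly to the extracted fields, removing the dependence on the external OCR system that the abstract identifies as the main drawback of this baseline.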