Visual Descriptor Extraction from Patent Figure Captions: A Case Study of Data Efficiency Between BiLSTM and Transformer

Bibliographic Details
Published in: Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries, pp. 1-5
Main Authors: Wei, Xin; Wu, Jian; Ajayi, Kehinde; Oyen, Diane
Format: Conference Proceeding
Language: English
Published: ACM, 20 June 2022
DOI: 10.1145/3529372.3533299

Summary: Technical drawings used for illustrating designs are ubiquitous in patent documents, especially design patents. Unlike natural images, these drawings are usually made of black strokes with little color information, making it challenging for models trained on natural images to recognize the objects they depict. To facilitate indexing and searching, we propose an effective and efficient visual descriptor model that extracts object names and aspects from patent captions to annotate benchmark patent figure datasets. We compared two state-of-the-art named entity recognition (NER) models and found that, with a limited number of annotated samples, the BiLSTM-CRF model outperforms the Transformer model by a significant margin, achieving an overall F1 = 96.60%. We further conducted a data-efficiency study by varying the number of training samples and found that BiLSTM-CRF consistently beats the Transformer model on our task. The proposed model is used to annotate a benchmark patent figure dataset.

CCS Concepts: • Computing methodologies → Information extraction.
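The record gives no implementation details, but for readers unfamiliar with the BiLSTM-CRF tagging approach the summary refers to, the sketch below shows a generic BiLSTM-CRF sequence tagger in PyTorch applied to caption tokens. It is a minimal illustration, not the authors' model: the hyperparameters, the BIO label set (B-OBJECT/I-OBJECT for object names, B-ASPECT/I-ASPECT for aspects), and the use of the third-party pytorch-crf package are all assumptions.

import torch
import torch.nn as nn
from torchcrf import CRF  # third-party package: pip install pytorch-crf


class BiLSTMCRFTagger(nn.Module):
    """Generic BiLSTM-CRF sequence tagger (illustrative, not the paper's exact model)."""

    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional LSTM; each direction gets half the hidden size so the
        # concatenated output matches hidden_dim.
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(hidden_dim, num_tags)  # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        # Negative log-likelihood of the gold tag sequences under the CRF.
        emissions = self.proj(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask, reduction="mean")

    def predict(self, tokens, mask):
        # Viterbi decoding of the most likely tag sequence per caption.
        emissions = self.proj(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)


# Toy usage with a hypothetical 5-tag BIO scheme over caption tokens:
# {O, B-OBJECT, I-OBJECT, B-ASPECT, I-ASPECT}. The paper's real label
# scheme, vocabulary, and training data are not given in this record.
model = BiLSTMCRFTagger(vocab_size=5000, num_tags=5)
tokens = torch.randint(1, 5000, (2, 12))    # batch of 2 captions, 12 tokens each
tags = torch.randint(0, 5, (2, 12))         # gold tag ids
mask = torch.ones(2, 12, dtype=torch.bool)  # no padding in this toy batch
print(model.loss(tokens, tags, mask))       # training objective to minimize
print(model.predict(tokens, mask))          # decoded tag sequences

The CRF layer is what the summary's "BiLSTM-CRF" name refers to: instead of classifying each token independently, it scores whole tag sequences, which typically helps when labeled data is scarce, consistent with the data-efficiency result reported above.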