How Useful Is Image-Based Active Learning for Plant Organ Segmentation?

Bibliographic Details
Published in: Plant Phenomics, Vol. 2022, Article 9795275
Main Authors: Rawat, Shivangana; Chandra, Akshay L.; Desai, Sai Vikas; Balasubramanian, Vineeth N.; Ninomiya, Seishi; Guo, Wei
Format: Journal Article
Language: English
Published: American Association for the Advancement of Science (AAAS), United States, 01.01.2022

Summary: Training deep learning models typically requires a large amount of labeled data, which is expensive to acquire, especially for dense prediction tasks such as semantic segmentation. Moreover, plant phenotyping datasets pose the additional challenges of heavy occlusion and varied lighting conditions, which make annotations even more time-consuming to obtain. Active learning reduces annotation cost by selecting for labeling the samples that are most informative to the model, thus improving model performance with fewer annotations. Active learning for semantic segmentation has been well studied on datasets such as PASCAL VOC and Cityscapes, but its effectiveness on plant datasets has received little attention. To bridge this gap, we empirically study and benchmark the effectiveness of four uncertainty-based active learning strategies on three natural plant organ segmentation datasets. We also study how these strategies respond to variations in the training configuration: the augmentations used, the scale of training images, the active learning batch size, and the train-validation split.
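
The summary does not specify the four strategies, but uncertainty-based active learning for segmentation commonly ranks unlabeled images by a per-pixel uncertainty measure and labels the highest-scoring batch. The Python sketch below illustrates one such acquisition function, mean per-pixel predictive entropy; it is not the authors' code, and the model, the (index, image) loader format, and the function names are illustrative assumptions.

```python
# A minimal sketch of entropy-based sample selection for semantic
# segmentation. Hypothetical illustration, not the paper's implementation.
import torch
import torch.nn.functional as F

def entropy_score(logits: torch.Tensor) -> torch.Tensor:
    """Mean per-pixel predictive entropy for a batch of segmentation logits.

    logits: (N, C, H, W) raw class scores from a segmentation model.
    Returns one uncertainty score per image, shape (N,).
    """
    probs = F.softmax(logits, dim=1)                      # (N, C, H, W)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(1)  # (N, H, W)
    return entropy.mean(dim=(1, 2))                       # (N,)

@torch.no_grad()
def select_for_labeling(model, unlabeled_loader, budget: int) -> list:
    """Rank the unlabeled pool by entropy and return the indices of the
    `budget` most uncertain images (the next active learning batch).

    Assumes the loader yields (index_tensor, image_tensor) pairs.
    """
    model.eval()
    scores, indices = [], []
    for idx, images in unlabeled_loader:
        scores.append(entropy_score(model(images)))
        indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    top = scores.argsort(descending=True)[:budget]  # most uncertain first
    return indices[top].tolist()
```

In a typical active learning loop, the selected images are annotated, moved from the unlabeled pool to the training set, and the model is retrained before the next selection round; the paper's experiments vary the batch size (`budget`) among other configuration choices.
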
ISSN: 2643-6515
DOI: 10.34133/2022/9795275