Cost-Efficient Multi-Instance Multi-Label Active Learning Via Correlation of Features

Bibliographic Details
Published in: 2023 IEEE International Conference on Image Processing (ICIP), pp. 410-414
Main Authors: Su, Guoliang; Wu, Zhangquan; Ye, Yujia; Chen, Maoxing; Zhou, Jun
Format: Conference Proceeding
Language: English
Published: IEEE, 08.10.2023
Summary: Multi-instance multi-label active learning (MIMAL) typically uses example uncertainty and label correlation to select the most valuable example-label pairs, maximizing the learner's performance. However, existing MIMAL solutions do not consider the correlation of example features when selecting example-label pairs. This paper proposes a novel MIMAL framework that effectively exploits the relationship between examples and features to reduce annotation cost. We first perform feature screening on the examples, which eliminates the interference of useless features with their annotation. Next, we quantify the correlation between features and examples as the basis for selecting example-label pairs. Finally, we query the most likely positive sub-example-label pair among the selected example-label pairs. Extensive experiments on multi-label datasets from diverse domains show that the proposed framework saves more query cost and achieves superior performance compared with state-of-the-art MIMAL methods.
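The abstract outlines a three-step pipeline: feature screening, scoring examples by their correlation with the retained features, and querying the top-scoring example-label pair. The paper's actual criteria are not given here, so the sketch below is a minimal illustration under assumed choices: variance-based screening, a projection onto the leading principal component as the feature-correlation score, and a product with per-label uncertainty as the pair-selection score. All function names and thresholds are hypothetical, not the authors' method.

```python
import numpy as np

def screen_features(X, var_threshold=1e-3):
    # Step 1 (assumed criterion): drop near-constant features, treating
    # low-variance columns as uninformative noise for annotation.
    keep = X.var(axis=0) > var_threshold
    return X[:, keep], keep

def example_feature_correlation(X):
    # Step 2 (assumed score): measure how strongly each example aligns
    # with the dominant feature direction, via the magnitude of its
    # projection onto the top principal component of the centered data.
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return np.abs(Xc @ vt[0])

def select_pair(X, label_uncertainty):
    # Step 3: combine the feature-correlation score with per-label
    # uncertainty and return the indices of the best example-label pair.
    corr = example_feature_correlation(X)       # shape: (n_examples,)
    score = corr[:, None] * label_uncertainty   # shape: (n_examples, n_labels)
    return np.unravel_index(np.argmax(score), score.shape)
```

In a full active-learning loop, the selected pair would be sent to an annotator and the learner retrained; the pipeline here only illustrates the selection stage described in the abstract.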
DOI:10.1109/ICIP49359.2023.10222329