Coping with AI errors with provable guarantees

Bibliographic Details
Published in: Information Sciences, Vol. 678, p. 120856
Main Authors: Tyukin, Ivan Y., Tyukina, Tatiana, van Helden, Daniël P., Zheng, Zedong, Mirkes, Evgeny M., Sutton, Oliver J., Zhou, Qinghua, Gorban, Alexander N., Allison, Penelope
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.09.2024
Summary: AI errors pose a significant challenge, hindering real-world applications. This work introduces a novel approach to coping with AI errors using weakly supervised error correctors that guarantee a specific level of error reduction. The correctors have low computational cost and can be used to decide whether to abstain from making an unsafe classification. We provide new upper and lower bounds on the probability of errors in the corrected system. In contrast to existing works, these bounds are distribution agnostic, non-asymptotic, and can be efficiently computed using only the corrector's training data. They can also be used in settings with concept drift, where the observed frequencies of the separate classes vary. The correctors can easily be updated, removed, or replaced in response to changes in the distributions within each class, without retraining the underlying classifier. The application of the approach is illustrated with two relevant, challenging tasks: (i) an image classification problem with scarce training data, and (ii) moderating the responses of large language models without retraining or fine-tuning them.

Highlights:
• In this work, we introduce a novel approach to address the challenge of random AI errors.
• It enables correcting errors with provable guarantees, regardless of the data distribution and with small training sets.
• The theory is illustrated with examples highlighting the application of the method to challenging machine learning problems.
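To make the mechanism in the summary concrete, the sketch below shows one way such an abstaining corrector could sit on top of a frozen classifier. The abstract does not specify the corrector's construction, so this assumes a simple Fisher-style linear discriminant trained on a small weakly labeled set of the classifier's feature vectors; all names, dimensions, and the threshold rule are illustrative, not the paper's actual method.

    # Illustrative sketch only: a linear "corrector" trained on weakly
    # labeled feature vectors (cases the upstream classifier got right
    # vs. wrong), used to abstain from likely-unsafe classifications.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical feature vectors from a frozen upstream classifier:
    # 30 examples it classified correctly, 10 it got wrong.
    X_correct = rng.normal(loc=0.0, scale=1.0, size=(30, 64))
    X_error = rng.normal(loc=1.5, scale=1.0, size=(10, 64))

    # Linear direction separating observed errors from correct cases
    # (a stand-in for the paper's construction, which the abstract leaves open).
    w = X_error.mean(axis=0) - X_correct.mean(axis=0)
    w = w / np.linalg.norm(w)

    # One-sided threshold: every observed error projects at or above theta.
    theta = float((X_error @ w).min())

    def abstain(x, margin=0.0):
        """Flag a classification as unsafe if the corrector fires."""
        return float(x @ w) >= theta - margin

    # The corrector is cheap to evaluate and can be retrained on new error
    # examples, or removed entirely, without touching the underlying
    # classifier, mirroring the modularity the summary describes.
    print(abstain(rng.normal(size=64)))

In the paper, the stated probability bounds would quantify how much error such a corrector removes; the threshold choice here is purely for illustration.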
ISSN: 0020-0255
eISSN: 1872-6291
DOI: 10.1016/j.ins.2024.120856