“A Good Bot Always Knows Its Limitations”: Assessing Autonomous System Decision-Making Competencies through Factorized Machine Self-Confidence

Bibliographic Details
Published in: ACM Transactions on Human-Robot Interaction, Vol. 14, No. 4, pp. 1–63
Main Authors: Israelsen, Brett; Ahmed, Nisar R.; Aitken, Matthew; Frew, Eric W.; Lawrence, Dale A.; Argrow, Brian M.
Format: Journal Article
Language: English
Published: 31.12.2025
Online Access: Get full text

Summary: How can intelligent machines assess their competency to complete a task? This question has come into focus for autonomous systems that algorithmically make decisions under uncertainty. We argue that machine self-confidence—a form of meta-reasoning based on self-assessments of system knowledge about the state of the world, itself, and ability to reason about and execute tasks—leads to many computable and useful competency indicators for such agents. This article presents our body of work so far on this concept in the form of the Factorized Machine Self-Confidence (FaMSeC) framework, which holistically considers several major factors driving competency in algorithmic decision-making: outcome assessment, solver quality, model quality, alignment quality, and past experience. In FaMSeC, self-confidence indicators are derived via “problem-solving statistics” embedded in Markov Decision Process solvers and related approaches. These statistics come from evaluating probabilistic exceedance margins in relation to certain outcomes and associated competency standards specified by an evaluator. Once designed and evaluated, the statistics can be easily incorporated into autonomous agents and serve as indicators of competency. We include detailed descriptions and examples for Markov Decision Process agents and show how outcome assessment and solver quality factors can be found for a range of tasking contexts through novel use of meta-utility functions, behavior simulations, and surrogate prediction models. Numerical evaluations are performed to demonstrate that FaMSeC indicators perform as desired (references to human subject studies beyond the scope of this article are provided).
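The abstract's notion of "probabilistic exceedance margins" can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration only, not the authors' FaMSeC implementation: all function names (simulate_return, outcome_exceedance, env_step) and the toy environment are invented for this example. It estimates an outcome-assessment-style indicator as the empirical probability that a policy's simulated return meets an evaluator-specified competency standard r_star, using Monte Carlo rollouts as a stand-in for the behavior simulations described in the abstract.

```python
# Hypothetical sketch: estimate P(return >= r_star) for a fixed policy
# by Monte Carlo rollouts. This is an assumed, simplified analogue of an
# outcome-assessment self-confidence statistic, not the paper's code.
import random


def simulate_return(policy, env_step, init_state, horizon=50, gamma=0.95):
    """Roll out `policy` once and return the discounted cumulative reward."""
    state, total, discount = init_state, 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        state, reward, done = env_step(state, action)
        total += discount * reward
        discount *= gamma
        if done:
            break
    return total


def outcome_exceedance(policy, env_step, init_state, r_star, n_rollouts=1000):
    """Empirical exceedance margin: fraction of rollouts with return >= r_star."""
    returns = [simulate_return(policy, env_step, init_state) for _ in range(n_rollouts)]
    return sum(r >= r_star for r in returns) / n_rollouts


if __name__ == "__main__":
    # Toy 1-D chain MDP: each step costs -1; reaching state 5 pays +10 and ends
    # the episode. The "competency standard" r_star is chosen by the evaluator.
    def env_step(state, action):
        next_state = state + action
        if next_state >= 5:
            return 5, 10.0, True
        return next_state, -1.0, False

    noisy_policy = lambda s: random.choice([0, 1])  # sometimes waits, sometimes moves right
    print("P(return >= 0):", outcome_exceedance(noisy_policy, env_step, init_state=0, r_star=0.0))
```

A value near 1 would signal high confidence that the policy meets the standard, while a value near 0 would flag a task the agent should not claim competence for; the paper's framework derives several such factors (outcome assessment, solver quality, and others) rather than this single statistic.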
ISSN: 2573-9522
DOI: 10.1145/3732794