Modeling and reasoning about uncertainty in goal models: a decision-theoretic approach

Bibliographic Details
Published in: Software and Systems Modeling, Vol. 21, No. 6, pp. 1–24
Main Authors: Liaskos, Sotirios; Khan, Shakil M.; Mylopoulos, John
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.12.2022

Summary: Goal models have been a popular subject of study by researchers in requirements engineering, due to their ability to capture and analyze alternative solutions through which a software system can achieve business objectives. A plethora of analysis methods for the automated identification of optimal alternatives have been proposed. However, such methods often assume an idealized reality in which all tasks are successfully performed when attempted and all goals are eventually satisfied with certainty when pursued according to a solution. In reality, some tasks run the risk of failure while others produce chance outcomes. In this paper, we extend the standard goal modeling language to allow representation of and reasoning about both uncertainty and preferential utility in goals. Tasks are extended to allow for probabilistic effects, and stakeholders' preferential statements are captured and translated into utilities over possible effects. Moreover, solutions are not mere specifications (functions, quality constraints, and assumptions), but rather policies, that is, sequences of situational action decisions, through which stakeholder goals can be fulfilled. An AI reasoning tool is adapted and used for identifying optimal policies with respect to the value they offer to stakeholders, measured against their probability of failure. Evaluation of the approach includes a simulation study and scalability experiments to assess the applicability of automated reasoning to larger problems.
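The decision-theoretic setup the summary describes — tasks with probabilistic outcomes, utilities elicited from stakeholder preferences, and policies rather than fixed specifications — can be illustrated as a Markov decision process. The following is a minimal sketch, not the authors' tool or model: the states, tasks, probabilities, and utilities are invented for illustration, and value iteration stands in for whatever reasoner the paper adapts.

```python
# Illustrative sketch only: a toy goal model cast as an MDP. All names and
# numbers below are hypothetical, not taken from the paper.

# States: 's0' (goal pending), 'goal' (satisfied), 'fail' (task failed).
STATES = ["s0", "goal", "fail"]

# transitions[state][task] = list of (probability, next_state):
# each task is a chance outcome rather than a guaranteed success.
TRANSITIONS = {
    "s0": {
        "manual_task":    [(0.95, "goal"), (0.05, "fail")],  # reliable
        "automated_task": [(0.70, "goal"), (0.30, "fail")],  # risky
    },
}

# Immediate utility of attempting each task, plus terminal utilities,
# standing in for utilities translated from stakeholder preferences.
REWARDS = {("s0", "manual_task"): 0.0, ("s0", "automated_task"): 2.0}
TERMINAL_UTILITY = {"goal": 10.0, "fail": 0.0}


def value_iteration(gamma=1.0, iters=50):
    """Compute state values and a greedy policy: for each state, the task
    maximizing expected utility weighed against its failure probability."""
    V = {s: TERMINAL_UTILITY.get(s, 0.0) for s in STATES}

    def q(s, a):
        # Expected value of attempting task a in state s.
        return REWARDS.get((s, a), 0.0) + gamma * sum(
            p * V[s2] for p, s2 in TRANSITIONS[s][a]
        )

    for _ in range(iters):
        for s in TRANSITIONS:
            V[s] = max(q(s, a) for a in TRANSITIONS[s])
    policy = {s: max(TRANSITIONS[s], key=lambda a: q(s, a)) for s in TRANSITIONS}
    return V, policy
```

In this toy instance the reliable task wins (expected utility 0.95 × 10 = 9.5 versus 2 + 0.70 × 10 = 9.0), showing how an "optimal" alternative can change once failure probabilities enter the model.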
ISSN: 1619-1366; 1619-1374
DOI: 10.1007/s10270-021-00968-w