Cyber Security: Effects of Penalizing Defenders in Cyber-Security Games via Experimentation and Computational Modeling

Bibliographic Details
Published in: Frontiers in Psychology, Vol. 11, p. 11
Main Authors: Maqbool, Zahid; Aggarwal, Palvi; Pammi, V. S. Chandrasekhar; Dutt, Varun
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Media S.A., 28.01.2020
Summary: Cyber-attacks are deliberate attempts by adversaries to illegally access online information of other individuals or organizations. Cyber-attacks are likely to carry severe monetary consequences for organizations and their employees. However, little is currently known about how the monetary consequences of cyber-attacks may influence the decision-making of defenders and adversaries. In this research, using a cyber-security game, we evaluate the influence of monetary penalties on decisions made by people performing in the roles of human defenders and adversaries via experimentation and computational modeling. In a laboratory experiment, participants were randomly assigned to the role of "hackers" (adversaries) or "analysts" (defenders) across three between-subjects conditions: equal payoffs (EQP), penalizing defenders for false alarms (PDF), and penalizing defenders for misses (PDM). The penalties in the PDF and PDM conditions were ten times costlier for defender participants than in the EQP condition, which served as a baseline. Results revealed an increase (decrease) in attack (defend) actions in the PDF condition and a decrease (increase) in the PDM condition. Also, both attack and defend decisions deviated from Nash equilibria. To understand the reasons for our results, we calibrated a model based on Instance-Based Learning Theory (IBLT) to the attack and defend decisions collected in the experiment. The model's parameters revealed an excessive reliance on recency, frequency, and variability mechanisms by both defenders and adversaries. We discuss the implications of our results for different cyber-attack situations where defenders are penalized for their misses and false alarms.
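For readers unfamiliar with IBLT, the sketch below illustrates the standard instance-based learning activation-and-blending mechanism that models of this kind rely on (frequency and recency enter through activation; choices follow the blended value of stored outcomes). The decay and noise parameters, the option names, and the payoffs are illustrative placeholders, not the values calibrated in the study.

```python
import math
import random

# Minimal Instance-Based Learning (IBL) sketch: each option (e.g., "defend"
# or "not defend") accumulates instances (past outcomes with the trials on
# which they were observed). Activation reflects frequency and recency;
# blending weights outcomes by their retrieval probability.
# All parameter values below are placeholders, not calibrated values.

DECAY = 0.5                          # d: higher values mean stronger recency reliance
NOISE = 0.25                         # sigma: activation noise
TEMPERATURE = NOISE * math.sqrt(2)   # tau in the Boltzmann retrieval rule


def activation(timestamps, now):
    """Activation of one instance, given the trials on which it was observed."""
    base = math.log(sum((now - t) ** (-DECAY) for t in timestamps))
    u = random.uniform(0.0001, 0.9999)          # logistic noise term
    return base + NOISE * math.log((1 - u) / u)


def blended_value(instances, now):
    """Blend an option's past outcomes by their retrieval probabilities.

    instances: list of (outcome, [timestamps]) pairs for one option.
    """
    acts = [activation(ts, now) for _, ts in instances]
    weights = [math.exp(a / TEMPERATURE) for a in acts]
    total = sum(weights)
    return sum((w / total) * outcome for (outcome, _), w in zip(instances, weights))


# Hypothetical defender choice at trial 10: negative outcomes stand in for
# false-alarm or miss penalties, positive outcomes for a caught attack.
now = 10
options = {
    "defend":     [(-5, [2, 6]), (10, [4])],
    "not_defend": [(0, [1, 3]), (-10, [8])],
}
choice = max(options, key=lambda opt: blended_value(options[opt], now))
print(choice)
```

In this framing, a larger decay parameter or noisier activations make recent and frequent outcomes dominate the blended value, which is the kind of over-reliance on recency and frequency the abstract reports for both defenders and adversaries.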
This article was submitted to Cognitive Science, a section of the journal Frontiers in Psychology
Edited by: Cyril Onwubiko, Centre for Multidisciplinary Research, Innovation and Collaboration (C-MRiC), United Kingdom
Reviewed by: Yilun Shang, Northumbria University, United Kingdom; Stefan Sütterlin, Østfold University College, Norway
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2020.00011