To log, or not to log: using heuristics to identify mandatory log events – a controlled experiment
Published in | Empirical Software Engineering: An International Journal, Vol. 22, No. 5, pp. 2684–2717 |
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | New York: Springer US, 01.10.2017 (Springer Nature B.V.) |
Summary:
Context
User activity logs should capture evidence to help answer who, what, when, where, why, and how a security or privacy breach occurred. However, software engineers often implement logging mechanisms that inadequately record mandatory log events (MLEs), user activities that must be logged to enable forensics.
Goal
The objective of this study is to support security analysts in performing forensic analysis by evaluating the use of a heuristics-driven method for identifying mandatory log events.
Method
We conducted a controlled experiment with 103 computer science students enrolled in a graduate-level software security course. All subjects were first asked to identify MLEs described in a set of requirements statements during the pre-period task. In the post-period task, subjects were randomly assigned statements from one type of software artifact (traditional requirements, use-case-based requirements, or user manual), one readability score (simple or complex), and one method (standards-, resource-, or heuristics-driven). We evaluated subject performance using three metrics: statement classification correctness (values from 0 to 1), MLE identification correctness (values from 0 to 1), and response time (seconds). We tested the effect of the three factors on the three metrics using generalized linear models.
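The record does not include the analysis itself; as a rough sketch only (not the authors' code), the snippet below illustrates how a generalized linear model could relate the three factors to a correctness metric bounded between 0 and 1, assuming Python with pandas and statsmodels and a hypothetical data frame of synthetic per-subject responses.

```python
# Hypothetical sketch (not the authors' analysis): fit a generalized linear
# model relating artifact type, readability, and method to a correctness
# metric bounded in [0, 1]. All data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120  # placeholder number of subject responses
df = pd.DataFrame({
    "artifact":    rng.choice(["traditional", "use_case", "user_manual"], n),
    "readability": rng.choice(["simple", "complex"], n),
    "method":      rng.choice(["standards", "resource", "heuristics"], n),
    "correctness": rng.uniform(0, 1, n),  # e.g., MLE identification correctness
})

# A binomial family is one common choice for a response bounded in [0, 1].
model = smf.glm(
    "correctness ~ C(artifact) + C(readability) + C(method)",
    data=df,
    family=sm.families.Binomial(),
)
print(model.fit().summary())
```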
Results
Classification correctness for statements that did not contain MLEs increased 0.31 from pre- to post-period task. MLE identification correctness was inconsistent across treatment groups. For simple user manual statements, MLE identification correctness decreased 0.17 and 0.12 for the standards- and heuristics-driven methods, respectively. For simple traditional requirements statements, MLE identification correctness increased 0.16 and 0.17 for the standards- and heuristics-driven methods, respectively. Average response time decreased 41.7 s from the pre- to post-period task.
Conclusion
We expected the performance of subjects using the heuristics-driven method to improve from pre- to post-task and to consistently demonstrate higher MLE identification correctness than the standards-driven and resource-driven methods across domains and readability levels. However, neither method consistently helped subjects more correctly identify MLEs at a statistically significant level. Our results indicate additional training and enforcement may be necessary to ensure subjects understand and consistently apply the assigned methods for identifying MLEs.
ISSN: | 1382-3256, 1573-7616 |
DOI: | 10.1007/s10664-016-9449-1 |