Condition-based maintenance with dynamic thresholds for a system using the proportional hazards model
Published in: Reliability Engineering & System Safety, Vol. 204, p. 107123
Format: Journal Article
Language: English
Published: Barking: Elsevier Ltd, 01.12.2020
Summary:
• CBM considering dynamic thresholds and multiple maintenance actions.
• A discretization method for computing semi-Markov decision process quantities.
• Optimization by a modified policy-iteration algorithm in the SMDP framework.
• Consideration of a non-stationary continuous-state PHM covariate process.
• The proposed policy outperforms other widely used CBM policies.
The hazard rate of many practical systems depends not only on age but also on a diagnostic covariate process. Effective maintenance decisions for such systems must combine age information with the covariate information obtained from condition monitoring. This paper proposes a condition-based maintenance (CBM) policy with dynamic thresholds and multiple maintenance actions for such a system subject to periodic inspection. The hazard rate is described by the proportional hazards model (PHM) with a continuous-state covariate process. At each inspection epoch, an appropriate action is selected from no maintenance, imperfect maintenance, and preventive replacement based on two dynamic thresholds. Over an inspection interval, the system may experience a minor failure or a catastrophic failure, which are addressed by minimal repair and corrective replacement, respectively. The objective is to determine the optimal thresholds that minimize the long-run average cost rate. A modified policy-iteration algorithm is developed to solve the optimization problem in the semi-Markov decision process (SMDP) framework. The effectiveness of the proposed approach is illustrated by a practical numerical example, and a comparison with other widely used CBM policies confirms the superior performance of the proposed policy.
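To make the decision structure concrete, the sketch below illustrates a PHM hazard rate with a Weibull baseline and a two-threshold action selection at an inspection epoch. This is a minimal illustration under stated assumptions, not the authors' implementation: the Weibull baseline, the parameter values, and the age-dependent threshold functions `d_imp` and `d_rep` are hypothetical choices used only to show the shape of the rule.

```python
import math

# Minimal sketch (not the paper's implementation).
# Assumptions: Weibull baseline hazard, one covariate z(t), and
# illustrative parameter values / threshold functions.

BETA, ETA = 2.5, 100.0   # Weibull shape/scale (hypothetical)
GAMMA = 0.8              # covariate coefficient in the PHM (hypothetical)

def hazard_rate(age, z):
    """Proportional hazards model: h(t, z) = h0(t) * exp(gamma * z)."""
    baseline = (BETA / ETA) * (age / ETA) ** (BETA - 1.0)
    return baseline * math.exp(GAMMA * z)

def select_action(age, z, d_imperfect, d_replace):
    """Two-threshold decision rule at an inspection epoch.

    d_imperfect(age) and d_replace(age) are age-dependent ("dynamic")
    hazard thresholds with d_imperfect(age) < d_replace(age).
    """
    h = hazard_rate(age, z)
    if h >= d_replace(age):
        return "preventive replacement"
    if h >= d_imperfect(age):
        return "imperfect maintenance"
    return "no maintenance"

# Linear threshold functions chosen purely for illustration.
d_imp = lambda t: 0.02 + 1e-4 * t
d_rep = lambda t: 0.05 + 1e-4 * t

if __name__ == "__main__":
    for age, z in [(20.0, 0.5), (60.0, 1.2), (90.0, 2.0)]:
        print(f"age={age}, z={z}: {select_action(age, z, d_imp, d_rep)}")
```

In the paper the thresholds are optimized within the SMDP framework via a modified policy-iteration algorithm; here they are fixed functions solely to demonstrate how an inspection outcome maps to one of the three maintenance actions.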
ISSN: 0951-8320, 1879-0836
DOI: 10.1016/j.ress.2020.107123