A new approach to reliability assessment of dental caries examinations
Published in | Community Dentistry and Oral Epidemiology, Vol. 41, No. 4, pp. 309–316
Main Authors | , , , ,
Format | Journal Article
Language | English
Published | Denmark: Blackwell Publishing Ltd, 01.08.2013
Summary:
Objectives
The objective of this study is to evaluate the reliability of the International Caries Detection and Assessment System (ICDAS) and to identify sources of disagreement among eight Kuwaiti dentists with no prior knowledge of the system.
Methods
A 90‐min introductory e‐course was delivered on the first day, followed by an examination of extracted teeth using the ICDAS coding system. Three sessions of clinical examinations were then performed. This study used only the data from the last session, in which 705 tooth surfaces from 10 patients were examined, to assess bias in caries examination and to identify the codes on which the examiners disagreed most. Bias of the ICDAS coding relative to the gold standard was evaluated using three approaches (Bland–Altman plot, maximum kappa statistic, and Bhapkar's chi‐square test). Linear weighted kappa statistics were computed to assess interexaminer reliability.
Results
Marginal ICDAS distributions for most examiners differed significantly from that of the gold standard, indicating the presence of bias. The primary source of these marginal differences was the misclassification of sound surfaces as noncavitated lesions. Interexaminer reliability of the 3‐level ICDAS classification (codes 0, 1–2, and 3–6) ranged between 0.43 and 0.73, indicating substantial inconsistency between examiners. The primary source of examiner differences lay in the diagnosis of noncavitated lesions.
Conclusion
This study highlights the importance of assessing both systematic and random sources of examiner agreement to correctly interpret kappa measures of reliability.
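As a minimal illustration of the linearly weighted kappa statistic used in the Methods, the following Python sketch computes it for two raters on an ordinal scale. The function name and the example ratings are my own (synthetic, not the study's data); the weighting scheme is the standard linear one, where credit for a rating pair decreases with the distance between the two categories.

```python
import numpy as np

def linear_weighted_kappa(rater_a, rater_b, n_categories):
    """Linearly weighted kappa between two raters on ordinal codes 0..n_categories-1.

    Illustrative sketch only; ratings below are synthetic, not the study's data.
    """
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    # Observed joint distribution of rating pairs (as proportions).
    obs = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # Linear agreement weights: 1 on the diagonal, shrinking with category distance.
    idx = np.arange(n_categories)
    weights = 1.0 - np.abs(idx[:, None] - idx[None, :]) / (n_categories - 1)
    # Expected joint distribution under independence of the two raters' marginals.
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    p_obs = (weights * obs).sum()
    p_exp = (weights * expected).sum()
    return (p_obs - p_exp) / (1.0 - p_exp)

# Perfect agreement on a 3-level scale gives kappa = 1.0.
print(linear_weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2], 3))
# One near-miss (a 2 rated as 1) is penalized only half as much as a 2 rated as 0.
print(linear_weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 1], 3))
```

Values in the 0.43–0.73 range reported in the Results would, under common rules of thumb, span "moderate" to "substantial" agreement, which is why the paper stresses examining the sources of disagreement rather than the kappa value alone.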
ISSN | 0301-5661; 1600-0528
DOI | 10.1111/cdoe.12020