A generalization of the uniform association model for assessing rater agreement in ordinal scales
Published in | Journal of applied statistics, Vol. 37, no. 8, pp. 1265-1273 |
---|---|
Main Authors | , |
Format | Journal Article |
Language | English |
Published | Abingdon: Taylor & Francis, 01.08.2010 |
Series | Journal of Applied Statistics |
Summary: | Data analysts have recently paid increasing attention to the assessment of rater agreement, especially in the medical sciences. In this context, statistical indices such as kappa and weighted kappa are the most common choices. These indices are simple to calculate and interpret; however, they fail to describe the structure of agreement, particularly when the outcome is ordinal. Over the past decades, statisticians have proposed more informative tools, such as the diagonal-parameter, linear-by-linear association, and agreement plus linear-by-linear association models, for describing the structure of rater agreement. In these models, equal-interval scores are the common choice for the levels of the ordinal scale. In this manuscript, we show that choosing the common equal-interval scores does not necessarily lead to the best fit, and we propose a modification that applies a power transformation to the ordinal scores. We illustrate the suggestion with two data sets (the IOTN and ovarian masses data). In addition, we use the concept of category distinguishability to interpret the model parameter estimates. |
---|---|
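As a quick illustration of the indices the abstract contrasts, weighted kappa for a square two-rater contingency table can be sketched in a few lines of NumPy. The function name and the distance-based disagreement weights are illustrative (power=1 gives linear weights, power=2 quadratic weights); this is a minimal sketch, not the paper's method:

```python
import numpy as np

def weighted_kappa(table, power=2):
    # table: k x k array of joint rating counts; rows = rater A, cols = rater B
    t = np.asarray(table, dtype=float)
    k = t.shape[0]
    n = t.sum()
    i, j = np.indices((k, k))
    # disagreement weights: 0 on the diagonal, growing with category distance
    w = (np.abs(i - j) / (k - 1)) ** power
    p_obs = t / n                                           # observed joint proportions
    p_exp = np.outer(t.sum(axis=1), t.sum(axis=0)) / n**2   # chance expectation
    return 1 - (w * p_obs).sum() / (w * p_exp).sum()
```

Perfect agreement (all mass on the diagonal) yields 1, and statistically independent ratings yield approximately 0; as the abstract notes, a single number like this cannot describe *how* the raters disagree across the ordinal categories.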
ISSN: | 0266-4763 1360-0532 |
DOI: | 10.1080/02664760903012666 |
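The power-transformed scores the abstract proposes can be sketched in a Poisson log-linear model with a linear-by-linear association term. The parameterization below (scores u_i = i**gamma, centered, fitted by Fisher scoring/IRLS) is an assumption for illustration only and may differ from the paper's exact model:

```python
import numpy as np

def linear_by_linear_beta(table, gamma=1.0, n_iter=100):
    """Association parameter of a row+column+linear-by-linear Poisson
    log-linear model with power-transformed scores u_i = i**gamma.
    Illustrative sketch; the paper's exact parameterization may differ."""
    t = np.asarray(table, dtype=float)
    k = t.shape[0]
    y = t.ravel()
    u = np.arange(1.0, k + 1) ** gamma
    u -= u.mean()                          # centering stabilizes the fit
    rows, cols = np.indices((k, k))
    # design: intercept + row dummies + column dummies + score product
    X = [np.ones(k * k)]
    X += [(rows.ravel() == r).astype(float) for r in range(1, k)]
    X += [(cols.ravel() == c).astype(float) for c in range(1, k)]
    X.append(np.outer(u, u).ravel())       # linear-by-linear association term
    X = np.column_stack(X)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):                # Fisher scoring / IRLS for Poisson GLM
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu       # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta[-1]                        # last coefficient = association parameter
```

With gamma=1 this reduces to the usual equal-interval uniform association model; varying gamma and comparing fits is one way to check the abstract's claim that equal-interval scores need not give the best fit.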