Meta-analysis of diagnostic test studies using individual patient data and aggregate data
Published in: Statistics in Medicine, Vol. 27, No. 29, pp. 6111–6136
Main Authors:
Format: Journal Article
Language: English
Published: Chichester, UK: John Wiley & Sons, Ltd (Wiley Subscription Services, Inc), 20 December 2008
Summary:

A meta-analysis of diagnostic test studies provides evidence-based results regarding the accuracy of a particular test, and usually involves synthesizing aggregate data (AD) from each study, such as the 2 by 2 tables of diagnostic accuracy. A bivariate random-effects meta-analysis (BRMA) can appropriately synthesize these tables and leads to clinical results, such as the summary sensitivity and specificity across studies. However, translating such results into practice may be limited by between-study heterogeneity and by the fact that they relate to some ‘average’ patient across studies.

In this paper we describe how the meta-analysis of individual patient data (IPD) from diagnostic studies can lead to clinical results more tailored to the individual patient. We develop IPD models that extend the BRMA framework to include study-level covariates, which help explain the between-study heterogeneity, and also patient-level covariates, which allow one to assess the effect of patient characteristics on test accuracy. We show how the inclusion of patient-level covariates requires a careful separation of within-study and across-study accuracy-covariate effects, as the latter are particularly prone to confounding. Our models are assessed through simulation and extended to allow IPD studies to be combined with AD studies, as IPD are not always available for all studies. Application is made to 23 studies assessing the accuracy of ear thermometers for diagnosing fever in children, with 16 IPD and 7 AD studies. The models reveal that between-study heterogeneity is partly explained by the use of different measurement devices, but there is no evidence that being an infant modifies diagnostic accuracy. Copyright © 2008 John Wiley & Sons, Ltd.
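
The BRMA referred to in the summary is commonly written as a two-level model with binomial within-study likelihoods and a bivariate normal distribution for the logit sensitivities and specificities across studies. The following is a minimal sketch of that standard formulation, not necessarily the exact parameterization used in the paper.

```latex
% Minimal sketch of a standard bivariate random-effects meta-analysis (BRMA)
% of 2 by 2 diagnostic accuracy data; the paper's parameterization may differ.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For study $i$, let $y_{Ai}$ be the number of true positives among $n_{Ai}$
diseased patients and $y_{Bi}$ the number of true negatives among $n_{Bi}$
non-diseased patients. A binomial within-study likelihood is combined with a
bivariate normal between-study model on the logit scale:
\begin{align*}
  y_{Ai} &\sim \operatorname{Binomial}\bigl(n_{Ai},\, \operatorname{logit}^{-1}(\mu_{Ai})\bigr), &
  y_{Bi} &\sim \operatorname{Binomial}\bigl(n_{Bi},\, \operatorname{logit}^{-1}(\mu_{Bi})\bigr), \\
  \begin{pmatrix} \mu_{Ai} \\ \mu_{Bi} \end{pmatrix}
    &\sim \mathrm{N}\!\left(
      \begin{pmatrix} \mu_{A} \\ \mu_{B} \end{pmatrix},
      \begin{pmatrix} \sigma_{A}^{2} & \rho\sigma_{A}\sigma_{B} \\
                      \rho\sigma_{A}\sigma_{B} & \sigma_{B}^{2} \end{pmatrix}
    \right).
\end{align*}
% The summary sensitivity and specificity are the back-transformed means;
% the variance parameters describe the between-study heterogeneity.
The summary sensitivity and specificity are $\operatorname{logit}^{-1}(\mu_{A})$
and $\operatorname{logit}^{-1}(\mu_{B})$, while $\sigma_{A}$, $\sigma_{B}$ and
$\rho$ quantify the between-study heterogeneity and its correlation.
\end{document}
```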
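
The summary also notes that patient-level covariates require a careful separation of within-study and across-study accuracy-covariate effects. A common device for this, sketched below under the assumption of a logistic model for sensitivity, is to centre the patient-level covariate about its study-specific mean so that the two effects receive distinct coefficients; the paper's own specification may differ in detail.

```latex
% Sketch of separating within-study and across-study covariate effects for
% sensitivity; an illustrative assumption, not reproduced from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $x_{ij}$ be a patient-level covariate (e.g.\ an infant indicator) for
patient $j$ in study $i$, with study-specific mean $\bar{x}_{i}$. Centring
$x_{ij}$ about $\bar{x}_{i}$ gives the within-study effect $\beta_{W}$ its own
coefficient, separate from the across-study effect $\beta_{A}$, which is more
prone to confounding by other study-level differences:
\[
  \operatorname{logit}\Pr(\text{test positive} \mid \text{diseased, study } i)
    = \mu_{Ai} + \beta_{W}\,(x_{ij} - \bar{x}_{i}) + \beta_{A}\,\bar{x}_{i}.
\]
% Interpretation: beta_W carries the patient-level (within-study) association;
% equality of beta_W and beta_A can be tested as a consistency check.
Inference about how the patient characteristic modifies accuracy then rests on
$\beta_{W}$, with $\beta_{W} = \beta_{A}$ testable as a check of consistency
between the two levels.
\end{document}
```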

Bibliography: National Coordinating Centre for Research Capacity Development; ArticleID: SIM3441; istex: E4D96148A8D2FC5808E8B3373A8DBEC97D24E37F; ark:/67375/WNG-N94V3W0L-W
ISSN: 0277-6715, 1097-0258
DOI: 10.1002/sim.3441