Can I trust my fake data – A comprehensive quality assessment framework for synthetic tabular data in healthcare

Bibliographic Details
Published in: International journal of medical informatics (Shannon, Ireland), Vol. 185, p. 105413
Main Authors: Vallevik, Vibeke Binz; Babic, Aleksandar; Marshall, Serena E.; Elvatun, Severin; Brøgger, Helga M.B.; Alagaratnam, Sharmini; Edwin, Bjørn; Veeraragavan, Narasimha R.; Befring, Anne Kjersti; Nygård, Jan F.
Format: Journal Article
Language: English
Published: Ireland, Elsevier B.V., 01.05.2024

More Information
Summary:
Highlights:
•Diverging taxonomy and evaluation criteria in the current literature for quality evaluation of synthetic tabular healthcare data.
•Lack of emphasis on the emerging topics of "Fairness" and "Carbon footprint and computational complexity" in existing synthetic data evaluations; these considerations should be included in evaluation frameworks.
•The proposed conceptual framework for quality assurance of synthetic tabular data in healthcare addresses taxonomy and semantic ambiguities, supporting effective communication and preparing for real-life implementation.

Ensuring safe adoption of AI tools in healthcare hinges on access to sufficient data for training, testing and validation. Synthetic data has been suggested in response to privacy concerns and regulatory requirements and can be created by training a generator on real data to produce a dataset with similar statistical properties. Competing metrics with differing taxonomies for quality evaluation have been proposed, resulting in a complex landscape. Optimising quality entails balancing considerations that make the data fit for use, yet relevant dimensions are left out of existing frameworks.

We performed a comprehensive literature review on the use of quality evaluation metrics for synthetic data, within the scope of synthetic tabular healthcare data generated using deep generative methods. Based on this and the collective team experience, we developed a conceptual framework for quality assurance. Its applicability was benchmarked against a practical case from the Dutch National Cancer Registry.

We present a conceptual framework for quality assurance of synthetic data for AI applications in healthcare that aligns diverging taxonomies, expands common quality dimensions to include Fairness and Carbon footprint, and proposes the stages necessary to support real-life applications. Building trust in synthetic data by increasing transparency and reducing safety risk will accelerate the development and uptake of trustworthy AI tools for the benefit of patients.

Despite the growing emphasis on algorithmic fairness and carbon footprint, these metrics were scarce in the reviewed literature. The overwhelming focus was on statistical similarity using distance metrics, while sequential logic detection was rarely addressed. A consensus-backed framework that includes all relevant quality dimensions can provide assurance for safe and responsible real-life applications of synthetic data. As the choice of appropriate metrics is highly context-dependent, further research on validation studies is needed to guide metric choices and support the development of technical standards.
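Note: The abstract's finding that evaluations focus overwhelmingly on statistical similarity via distance metrics can be illustrated with a minimal sketch (not from the article): comparing per-column marginal distributions of a real and a synthetic table using the two-sample Kolmogorov–Smirnov statistic and the Wasserstein distance from SciPy. The column names and data below are hypothetical stand-ins for registry variables.

# Illustrative sketch only: per-column statistical similarity between a
# "real" and a "synthetic" tabular dataset using two common distance metrics.
# All data and column names are synthetic examples, not from the article.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(42)

# Hypothetical real and synthetic numeric columns of equal size.
real = pd.DataFrame({
    "age": rng.normal(62, 12, 1000),
    "tumour_size_mm": rng.gamma(2.0, 8.0, 1000),
})
synthetic = pd.DataFrame({
    "age": rng.normal(60, 14, 1000),
    "tumour_size_mm": rng.gamma(2.2, 7.5, 1000),
})

for col in real.columns:
    ks = ks_2samp(real[col], synthetic[col])               # two-sample KS test
    wd = wasserstein_distance(real[col], synthetic[col])   # earth mover's distance
    print(f"{col}: KS={ks.statistic:.3f} (p={ks.pvalue:.3f}), Wasserstein={wd:.3f}")

Lower values on both metrics indicate closer marginal distributions. As the abstract stresses, such fidelity metrics cover only one of several quality dimensions; they say nothing about, for example, fairness or carbon footprint.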
ISSN: 1386-5056, 1872-8243
DOI: 10.1016/j.ijmedinf.2024.105413