Reply to: Transparency and reproducibility in artificial intelligence
Published in: Nature (London), Vol. 586, No. 7829, pp. E17-E18
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 15.10.2020
Summary: More generally, widely releasing data considerably alters the risk-benefit calculus for patients, so institutions must be thoughtful about how and when they do this. Because of these considerations, large medical image datasets with associated breast cancer outcomes are rarely made openly available [3-5]. Because liability issues surrounding artificial intelligence in healthcare remain unresolved [8], providing unrestricted access to such technologies may place patients, providers, and developers at risk. [...] increasing evidence suggests that a model's learned parameters may inadvertently expose properties of its training set to attack; how to safeguard potentially susceptible models is the subject of active research [9].
ISSN: 0028-0836
eISSN: 1476-4687
DOI: 10.1038/s41586-020-2767-x