The Next Era of Biomedical Research
Published in | Voices in bioethics Vol. 7 |
---|---|
Main Author | James Brogan |
Format | Journal Article |
Language | English |
Published | Columbia University Libraries 01.11.2021 |
Subjects | |
Online Access | Get full text |
Abstract | Photo by Clément Hélardot on Unsplash INTRODUCTION The history of biomedical research in the United States is both inspiring and haunting. From the first public demonstration of anesthesia in surgery at Massachusetts General Hospital to the infamous Tuskegee Experiment, we see both the significant advances made for the medical field and the now-exposed power dynamics that contribute to injustices when they are left unmonitored.[1] Over the past century, biomedical research has led to positive change but has also reinforced structural racism. Henrietta Lacks, whose tissue was used without her consent to generate HeLa cells, and the Tuskegee study research subjects, who were denied an existing treatment for syphilis, exemplify how biomedical research in the US has been a vector for exploiting minority groups in exchange for knowledge creation. As we usher in the age of computational medicine, leaders of the field must listen to calls from communities around the country and the world to decrease the prevalence of structural racism in the next wave of medical advances.[2] We are vulnerable to perpetuating structural racism through the algorithms and databases that will drive biomedical research and aid healthcare systems in developing new methods for diagnosing and treating illness. With guidelines from governmental funding agencies and the inclusion of racial and ethnic minorities in research and development communities, we can inch closer to a more just future for our nation's health. BACKGROUND Many health systems rely on commercial software to store and process their patients' data. This software commonly comes with patented predictive algorithms that help providers assign a risk score to patients based on health needs.
However, biases held by algorithm developers can reflect racial disparities and embed them in the algorithms if proper counterbalances are not in place to audit the designers' work.[3] Despite the recent digitization of healthcare data across the United States, racial bias has already found its way into healthcare algorithms that manage populations. ANALYSIS I. Use of Algorithms One landmark study that interrogated a widely used algorithm demonstrated that Black patients were considerably sicker than white patients at a given risk score, as evidenced by signs of uncontrolled disease.[4] The algorithm predicted the need for additional help based on past expenditures, and the US healthcare system has historically spent less on Black patients than on white patients. Rectifying this bias would lead to three times as many Black patients receiving additional resources. The algorithm produces a treatment gap rooted in a history of unequal access to care and lower spending on people of color compared to white people. The disconnect between the clinical situation and historical resource allocation exemplifies how certain predictors may produce an outcome that harms patients. The study shows that using healthcare spending as a proxy measure for how sick a patient is, instead of using physiologic data, can amplify racial disparities. This example highlights the need for collaboration among clinicians, data scientists, ethicists, and epidemiologists of diverse backgrounds to ensure model parameters do not perpetuate racial biases. II. Use of Big Data in Algorithm Creation In addition to eradicating algorithms that make decisions based on proxy measures encoding racial inequities, we must also be diligent about the content of the databases employed in algorithm development. Racial disparities in a database may result from the intentional selection of a homogenous population or from unintentional exclusion due to systemic issues such as the unequal distribution of resources.
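The proxy-label failure described in the landmark study can be sketched in a few lines. This is an illustrative toy model with hypothetical numbers, not the study's actual algorithm: two patients with identical disease burden receive different risk scores when the score is learned from historical spending rather than from clinical signals.

```python
# Toy illustration (hypothetical numbers): a score trained to predict cost
# inherits historical spending gaps; a score built from physiologic data
# treats clinically identical patients identically.

def cost_proxy_risk(past_spending, max_spending=10_000):
    """Risk score that scales with historical healthcare spending."""
    return past_spending / max_spending

def physiology_risk(num_uncontrolled_conditions, max_conditions=10):
    """Risk score built from clinical signals instead of cost."""
    return num_uncontrolled_conditions / max_conditions

# Same disease burden (e.g., 4 uncontrolled chronic conditions)...
conditions_a = conditions_b = 4
# ...but historically lower spending on patient B.
spending_a, spending_b = 8_000, 4_000

# The cost proxy splits identical patients; the clinical measure does not.
assert cost_proxy_risk(spending_a) > cost_proxy_risk(spending_b)
assert physiology_risk(conditions_a) == physiology_risk(conditions_b)
```

Because extra resources are allocated above a score threshold, the lower-spending patient can fall below the cutoff despite equal clinical need, which is the treatment gap the study quantified.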
For example, a genetic study conducted in a Scandinavian country is more likely to be racially homogenous and not generalizable to a broader population. Applying algorithms derived from homogenous populations to diverse populations, or to different homogenous populations, would fail to account for biological differences and could result in a lapse in care when the algorithms are used beyond the appropriate population. Additionally, companies like Apple or Fitbit could de-identify consumer data collected from their wearable sensors and make it available for research. This is problematic because the demographic distribution of people who have access to this technology may not reflect the general population. To combat these potential disparities, we must construct freely accessible research databases containing patients with diverse demographic characteristics that better model the actual populations a given model will serve. III. Government-Based Safeguards Armed with an understanding of how systemic bias becomes integrated into algorithms and databases, we must strive to construct safeguards that minimize systemic racism in computational biomedical research. One way forward as a society would be aligning incentives to produce the desired results. Governmental agencies wield enormous power over the trajectory of publicly funded research. Therefore, it is crucial that computational biomedical research funding be governed by procedures that encourage diverse researchers to investigate and develop healthcare algorithms and databases that promote our nation's health. For example, the National Institutes of Health (NIH) has set forth two large initiatives to catalyze the equitable growth of knowledge and research in healthcare artificial intelligence (AI).
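The database-representativeness concern above lends itself to a simple audit: compare each demographic group's share of the database against its share of the population the model will serve. The sketch below uses hypothetical group names and proportions purely for illustration.

```python
# Minimal sketch (hypothetical data): flag groups whose share of a research
# database diverges from their share of the target population.

def representation_gaps(db_counts, population_share):
    """Return each group's database share minus its population share."""
    total = sum(db_counts.values())
    return {g: db_counts[g] / total - population_share[g] for g in db_counts}

# Hypothetical wearable-sensor dataset skewed toward one group.
db_counts = {"group_a": 800, "group_b": 150, "group_c": 50}
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

gaps = representation_gaps(db_counts, population_share)
over = [g for g, d in gaps.items() if d > 0.05]    # overrepresented groups
under = [g for g, d in gaps.items() if d < -0.05]  # underrepresented groups
```

A check like this, run before algorithm development begins, makes the kind of skew described above visible early, when recruitment or reweighting can still correct it.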
The first initiative is the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD), which focuses on increasing diversity in researchers and data within AI/machine learning (ML). The program states that “these gaps pose a risk of creating and continuing harmful biases in how AI/ML is used, how algorithms are developed and trained, and how findings are interpreted.” Increased participation of researchers and communities currently underrepresented in AI/ML modeling can prevent continued health disparities and inequities. Programs like AIM-AHEAD are crucial to reducing the risk of creating and continuing harmful biases in biomedical research. With its four key focus areas of partnerships, research, infrastructure, and data science training, AIM-AHEAD and its future incarnations can promote health equity for the next era of medicine and biomedical research. The second initiative announced by the NIH is the Bridge to Artificial Intelligence (Bridge2AI) program.[5] This program takes a different approach to tackling systemic racism and bias by focusing on the content and process of AI/ML research. Two key components of AI/ML research are rich databases and algorithm development protocols. To develop reproducible and actionable algorithms, researchers must have access to large, well-labeled databases and must follow best practices in algorithm development. However, large databases are not readily available across the healthcare research ecosystem. As a result, many investigators struggle to gain access to databases that would enable them to carry out AI/ML research at their home institution.
A movement toward more freely available databases like the Medical Information Mart for Intensive Care (MIMIC) and the electronic Intensive Care Unit (eICU) database, distributed through the PhysioNet platform created at the Massachusetts Institute of Technology Laboratory for Computational Physiology, can improve access to data for research.[6] By adopting the practice of freely available databases common in AI/ML research communities outside of medicine, MIMIC and eICU lowered the barrier to entry for data scientists interested in health care. The improved access from MIMIC and eICU has led to over 2,000 publications to date. While this is a solid foundational step for the healthcare AI/ML research community, it is essential to reflect on progress and ensure that freely accessible databases are racially and geographically diverse. In this manner, Bridge2AI will facilitate the expansion of healthcare databases that are ethically sourced, trustworthy, and accessible. Without government programs such as AIM-AHEAD and Bridge2AI, the US biomedical research community is at higher risk of perpetuating systemic racism and biases in how AI/ML is used, how algorithms are developed, and how clinical decision support results are interpreted when delivering patient care. IV. Private Sector Standards Even with the proper incentives delivered from governmental agencies, there can be a disconnect between the public and private sectors, leading to racial bias in algorithms used in patient care. Privately funded AI/ML algorithms used in care decision-making should be held to the same ethical standards as those developed through publicly funded research at academic institutions. Publicly funded research is usually peer-reviewed before publication, giving reviewers a chance to evaluate algorithmic bias or deficiencies. Privately developed algorithms used in care may have avoided similar scrutiny.
Corporations have an inherent conflict between protecting intellectual property and providing transparency into algorithmic design and inputs. The Food and Drug Administration (FDA) is responsible for regulating AI/ML algorithms and has classified them as Software as a Medical Device (SaMD), focusing on the development process and benchmarking.[7] The importance of holding privately funded algorithm development to the same standards as publicly funded research is highlighted by a September 2020 review of FDA-approved AI/ML algorithms: all SaMD approved by the FDA are registered by private companies.[8] Regulators must be well-versed in structural racism and equipped to evaluate proprietary algorithms for racial bias, maintaining oversight as population data drifts occur and the algorithms continue to optimize themselves. The FDA's role is crucial to the clinical use of SaMD. CONCLUSION Computational decision support |
---|---|
Author | James Brogan |
Author_xml | – sequence: 1 fullname: James Brogan |
BookMark | eNotzM1LwzAYgPEgCs65o_eC59YkzZuPo445B0NBei9vmjcuo1skHUP_e0E9PfA7PDfs8piPxNid4A1IKdTDOfnmbFJjLagLNpPaiVpZA9dsMU17zrlULXCjZ-y-21H1Sl-nalWwyrF6SvlAIQ04Vu80EZZhd8uuIo4TLf47Z93zqlu-1Nu39Wb5uK2D4KBqlOA0F8QHcM6YIDF6Im8UKHJagw4gnY8CDUHU6Dx4iEEjGoggeTtnm79tyLjvP0s6YPnuM6b-F3L56LGc0jBSr8i2JE206FvlMFodvLGCBs2DVBHaHwnhTWc |
ContentType | Journal Article |
DBID | DOA |
DOI | 10.52214/vib.v7i.8854 |
DatabaseName | DOAJ Directory of Open Access Journals |
DatabaseTitleList | |
Database_xml | – sequence: 1 dbid: DOA name: DOAJ Directory of Open Access Journals url: https://www.doaj.org/ sourceTypes: Open Website |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Biology |
EISSN | 2691-4875 |
ExternalDocumentID | oai_doaj_org_article_4e83e27f8ab349af86db781ec60d24f5 |
GroupedDBID | ALMA_UNASSIGNED_HOLDINGS FRS GROUPED_DOAJ M~E OK1 |
ID | FETCH-LOGICAL-d1054-a259601e0c59977d2afbeeb7454e96656d529bf1a7e5f6a9b5b5fd6aa75f5203 |
IEDL.DBID | DOA |
IngestDate | Wed Aug 27 01:31:35 EDT 2025 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
LinkModel | DirectLink |
MergedId | FETCHMERGED-LOGICAL-d1054-a259601e0c59977d2afbeeb7454e96656d529bf1a7e5f6a9b5b5fd6aa75f5203 |
OpenAccessLink | https://doaj.org/article/4e83e27f8ab349af86db781ec60d24f5 |
ParticipantIDs | doaj_primary_oai_doaj_org_article_4e83e27f8ab349af86db781ec60d24f5 |
PublicationCentury | 2000 |
PublicationDate | 2021-11-01 |
PublicationDateYYYYMMDD | 2021-11-01 |
PublicationDate_xml | – month: 11 year: 2021 text: 2021-11-01 day: 01 |
PublicationDecade | 2020 |
PublicationTitle | Voices in bioethics |
PublicationYear | 2021 |
Publisher | Columbia University Libraries |
Publisher_xml | – name: Columbia University Libraries |
SSID | ssj0002435076 |
Score | 2.1629813 |
SourceID | doaj |
SourceType | Open Website |
SubjectTerms | Bioethics Biomedical Research Health Equity Inclusive Race and Justice Structural Racism |
Title | The Next Era of Biomedical Research |
URI | https://doaj.org/article/4e83e27f8ab349af86db781ec60d24f5 |
Volume | 7 |
hasFullText | 1 |
inHoldings | 1 |
isFullTextHit | |
isPrint | |
link | http://utb.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwrZ27a8MwEMZFCRS6lD7pG0G7urFlPeyxKQmh0EwpZDMn6wRZnBLSQP_7nCS3ZOvS1YONPxnd78On7xh7MrluW4tkS6DWWWD0DFrADIUjPJUGAMMf3feZnn7It4Va7I36Cj1hKR44CTeUWJUojK_AlrIGX2lnTVVgq3MnpI_ppVTz9sxU2IMFUQA59BSqSYhRyOF2aZ-3ZklfRor-_w3oj5VkcsKOewTkL-nRp-wAuzN2mIZCfp-zR1o5PqM9k4_XwFeej-IJ-SAm_-mTu2DzyXj-Os36UQaZI4CRGZDLIOuDeatqIi4nwFtEa6SSSIZDaadEbX0BBpUn2ayyyjsNYJRXIi8v2aBbdXjFuANLEOcJA9BIR9UaJQgsS-9aobzEazYKr9Z8prCKJsRHxwskatOL2vwl6s1_3OSWHYnQABIP7t2xwWb9hfdUwTf2IS7WDn-3mZU |
linkProvider | Directory of Open Access Journals |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=The+Next+Era+of+Biomedical+Research&rft.jtitle=Voices+in+bioethics&rft.au=James+Brogan&rft.date=2021-11-01&rft.pub=Columbia+University+Libraries&rft.eissn=2691-4875&rft.volume=7&rft_id=info:doi/10.52214%2Fvib.v7i.8854&rft.externalDBID=DOA&rft.externalDocID=oai_doaj_org_article_4e83e27f8ab349af86db781ec60d24f5 |