How AI Can Remedy Racial Disparities In Healthcare

The story of American medicine is one of incredible scientific advancements, from the use of penicillin to treat syphilis and other bacterial infections to the countless biomedical breakthroughs made possible by cell-line research. 

Too often, however, these stories ignore an uncomfortable truth: Some of our nation’s most significant medical discoveries were made possible through the mistreatment of Black patients—from the exploitation of Black sharecroppers during the Tuskegee Syphilis Study to the tragic case of Henrietta Lacks, a Black patient whose cells were taken without her consent and used for decades of cell-line research.

Racism is woven into our nation’s medical past but is also part of our present, as evidenced by the Covid-19 crisis. From testing to treatment, Black and Latino patients have received a lower quality and quantity of care compared with white Americans.

As a country, we now have the opportunity to reverse course. Rather than advancing medicine through racist actions, we can combat racism in medicine with the use of science and technology. Artificial intelligence and data-based algorithms can help address health disparities and break down the barriers to healthcare equity, like these:

1. Unequal testing and treatment  

At some point during medical school, all future doctors are instructed to treat everyone equally, regardless of a person’s race, ethnicity, gender, religion or sexual orientation. Studies have shown just how difficult this edict proves in practice.

Even when physicians have the best of intentions, their actions are beset by unconscious prejudices. Researchers have found that two out of three clinicians harbor what is called an “implicit bias” against African Americans and Latinos. These are biases that exist outside the doctor’s awareness but are nonetheless harmful to minority patients. 

In one example, epidemiological data demonstrate that Black individuals have experienced a two to three times higher likelihood of dying from Covid-19 than white patients.

Physicians attribute this discrepancy to the “social determinants of health,” a phrase that encapsulates the many aspects of life that influence our health, including where we live, work, play and socialize. But before we accept this explanation and let healthcare professionals off the hook, consider what we learned early in the pandemic: According to national studies, white patients who came to the emergency room with symptoms likely to be Covid-19 were tested far more often than Black patients with identical symptoms.

Or consider this: Studies show Black women are less likely to be offered breast reconstruction after mastectomy than white women. Or this: Research shows that Black patients are 40 percent less likely than white patients to receive pain medication after surgery.

In a medical culture that falsely believes all patients are treated equally, white physicians fail to recognize how often Black and Latino patients are treated as other.

Technology can mitigate this threat. Early experiments using artificial intelligence have shown some success in replacing or supplementing the physician’s judgment (and implicit biases) when diagnosing a patient’s pain or medical needs. 

And by drawing on information in electronic health records, AI applications can compare the treatments provided to patients of different racial or ethnic backgrounds within an individual physician’s practice. That data can be used to program alerts, notifying doctors when they are providing unequal treatment to patients of a different race. 
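As a rough sketch of what such an alert might look like in code: the record format, group labels and alert threshold below are invented for illustration, not drawn from any real EHR system.

```python
from collections import defaultdict

def disparity_alerts(records, threshold=0.15):
    """Flag a clinician whose treatment rates differ markedly between groups.

    records: list of (patient_group, received_treatment) tuples drawn from
    one physician's panel. Returns the per-group rates, the largest gap
    between any two groups, and whether that gap exceeds the threshold.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [treated, total]
    for group, treated in records:
        counts[group][0] += int(treated)
        counts[group][1] += 1
    rates = {g: t / n for g, (t, n) in counts.items() if n > 0}
    if not rates:
        return None
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "alert": gap > threshold}
```

For example, a panel in which 8 of 10 white patients but only 5 of 10 Black patients received post-surgical pain medication yields a 30-point gap and triggers the alert. A real system would also need minimum sample sizes and case-mix adjustment before flagging anyone.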

2. Bias in medical research and data

A little over a year before the coronavirus pandemic reached our shores, the racism problem in U.S. healthcare was making big headlines. 

But it wasn’t doctors or nurses being accused of bias. Rather, a study published in Science concluded that a predictive healthcare algorithm had, itself, discriminated against Black patients.

The story originated with Optum, a subsidiary of insurance giant UnitedHealth Group, which had designed an application to identify high-risk patients with untreated chronic diseases. The company’s ultimate goal was to help redistribute medical resources to those who’d benefit most from added care. And to figure out who was most in need, Optum’s algorithm assessed the cost of each patient’s past treatments. 

Unaccounted for in the algorithm’s design was this essential fact: The average Black patient receives $1,800 less per year in total medical care than a white person with the same set of health problems. And, sure enough, when the researchers went back and re-ranked patients by their illnesses (rather than the cost of their care), the percentage of Black patients who should have been enrolled in specialized care programs jumped from 18 percent to 47 percent.
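The proxy problem is easy to see in miniature. In the toy example below (all numbers synthetic, not Optum’s data), two groups of patients have identical or worse illness burdens, but one group’s care has historically cost less. Ranking by past spending selects the cheaper-to-treat group’s healthier peers; ranking by illness burden selects the sicker patients.

```python
def top_k(patients, key, k):
    """Return the ids of the k highest-ranked patients under a given key."""
    return {p["id"] for p in sorted(patients, key=key, reverse=True)[:k]}

# Group B patients are sicker (4 chronic conditions vs. 3) but, because
# their care was historically underfunded, their past costs are lower.
patients = (
    [{"id": f"A{i}", "conditions": 3, "cost": 9000} for i in range(5)]
    + [{"id": f"B{i}", "conditions": 4, "cost": 7200} for i in range(5)]
)

by_cost = top_k(patients, key=lambda p: p["cost"], k=5)        # spending proxy
by_need = top_k(patients, key=lambda p: p["conditions"], k=5)  # illness burden
```

Here `by_cost` enrolls only the group A patients, while `by_need` enrolls the sicker group B patients: the same re-ranking, in spirit, that the Science researchers performed.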

Journalists and commentators pinned the blame for racial bias on Optum’s algorithm. In reality, technology wasn’t the problem. At issue were the doctors who failed to provide sufficient medical care to the Black patients in the first place. Meaning, the data was faulty because humans failed to provide equitable care. 

Artificial intelligence and algorithmic approaches can only be as accurate, reliable and helpful as the data they’re given. If the human inputs are unreliable, the data will be, as well. 

Let’s use the identification of breast cancer as an example. As much as one-third of the time, two radiologists looking at the same mammogram will disagree on the diagnosis. Therefore, if AI software were programmed to act like humans, the technology would be wrong one-third of the time. 

Instead, AI can store and compare tens of thousands of mammogram images—comparing examples of women with cancer and without—to detect hundreds of subtle differences that humans often overlook. It can remember all those tiny differences when reviewing new mammograms, which is why AI is already estimated to be 10 percent more accurate than the average radiologist.
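A toy simulation makes the statistical intuition concrete. Each simulated “radiologist” below mislabels a case one-third of the time; pooling many independent reads (a crude stand-in for a model that has learned from tens of thousands of images) is far more accurate than any single reader. This is an illustration of the aggregation principle only, not a model of real mammography AI.

```python
import random

random.seed(0)

def read(truth, error=1 / 3):
    """One noisy 'read' of a case: correct two-thirds of the time."""
    return truth if random.random() > error else 1 - truth

cases = [random.randint(0, 1) for _ in range(2000)]  # 1 = cancer, 0 = benign

# A single reader's accuracy hovers around two-thirds.
solo = sum(read(t) == t for t in cases) / len(cases)

# Majority vote over 15 independent reads per case does much better.
pooled = sum(
    (sum(read(t) for _ in range(15)) > 7) == bool(t) for t in cases
) / len(cases)
```

With independent errors, the majority vote pushes accuracy above 90 percent; the catch, as the next paragraph notes, is that the errors must actually be independent and the labels unbiased.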

What AI can’t recognize is whether it’s being fed biased or incorrect information. Adjusting for bias in research and data aggregation requires that humans acknowledge their faulty assumptions and decisions, and then modify the inputs accordingly. 

Correcting these types of errors should be standard practice by now. After all, any research project that seeks funding and publication is required to include an analysis of potential bias, based on the study’s participants. For example, investigators who want to compare people’s health in two cities would be required to modify the study’s design if they failed to account for major differences in age or education or other factors that might inappropriately tilt the results. 
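The two-city example corresponds to a standard epidemiological technique, direct standardization: reweight each city’s age-specific rates by a common reference population before comparing them. The populations and case counts below are invented for illustration.

```python
def crude_rate(strata):
    """Overall cases per person, ignoring the age mix."""
    cases = sum(s["cases"] for s in strata)
    pop = sum(s["pop"] for s in strata)
    return cases / pop

def standardized_rate(strata, reference):
    """Age-specific rates reweighted by reference weights (summing to 1)."""
    return sum(reference[s["age"]] * s["cases"] / s["pop"] for s in strata)

# Both cities have identical age-specific disease rates
# (0.5% under 50, 3% over 50); only their age mixes differ.
city_a = [  # younger city
    {"age": "under_50", "pop": 80_000, "cases": 400},
    {"age": "over_50", "pop": 20_000, "cases": 600},
]
city_b = [  # older city
    {"age": "under_50", "pop": 20_000, "cases": 100},
    {"age": "over_50", "pop": 80_000, "cases": 2400},
]
ref = {"under_50": 0.5, "over_50": 0.5}
```

The crude rates (1 percent vs. 2.5 percent) suggest city B is far less healthy; the age-standardized rates are identical. Unadjusted racial comparisons in healthcare data can mislead in exactly the same way.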

Given how often data is flawed, the possibility of racial bias should be explicitly factored into every AI project. With universities and funding agencies increasingly focused on racial issues in medicine, this expectation has the potential to become routine in the future. Once it is, AI will force researchers to confront bias in healthcare. As a result, the conclusions and recommendations they provide will be more accurate and equitable.

3. Institutional racism 

The biases of individual doctors and researchers aren’t always the biggest barriers to equitable healthcare. Often, the problem is institutional. 

Institutional (or systemic) racism is invisible yet omnipresent. It is woven into the fabric of American healthcare, embedded into the practices, policies and perceptions of the entire industry. As a result, this form of racism can’t be resolved by modifying the behavior of individual doctors. 

But with the help of AI technologies, the actions that contribute to institutional racism can be decoded, identified and potentially corrected. 

A distressing example of institutional racism involves childbirth. Most Americans don’t realize it, but the United States ranks last among all developed nations in maternal mortality (the measure of how often mothers die during or soon after childbirth).

Most of these deaths could be prevented, and yet the maternal mortality rate has been increasing in the United States since 2000. Two decades after The Journal of Perinatal Education first described the issue of racial disparities in maternal care as “alarming,” Black women remain three times more likely to die from childbirth than white women. 

Obstetricians know the most common causes of maternal death are (a) unrecognized bleeding and (b) uncontrolled high blood pressure. What they can’t explain is exactly why a woman’s skin color has such a significant influence on her risk of dying. Ask doctors what’s going on and they’ll list a number of contributing factors, ranging from the higher risk of hypertension in Black patients to greater life stresses to differences in diet and education. 

But none of those factors help explain this: When the treating clinician is Black, the disparity in deaths between white and Black mothers all but vanishes. 

The problem in understanding this discrepancy isn’t a lack of data. Almost all U.S. hospitals have comprehensive inpatient electronic health records (EHRs) that provide a rich tapestry of details about the women giving birth and the care they receive. And as of 2017, all 50 states were required to add a standardized “maternal mortality checkbox” to their data reporting systems.

And yet we still don’t know why the race of the doctor makes such a difference or how to close the gap when the physician is white. We also don’t know if the race of the nurses providing the care matters. We also don’t know whether the frequency of blood-pressure monitoring or care checks varies based on the patient’s race, the staff member’s race, or both. 

Most medical research focuses on the causal or correlational links between two easily isolated variables (like the race of doctors and the mortality of patients). Addressing systemic racism in medicine requires that we analyze far more data (all at once) than we do today. AI is the perfect tool for this task. What we need is a national commitment to use these types of technologies to answer medicine’s most urgent questions.  

The need for cultural change

Thirteen months into the pandemic, Covid-19 continues to kill Black individuals at a rate three times higher than white people. For years, health plans and hospital leaders have talked about the need to address health disparities like these. And yet, despite good intentions, the solutions they put forth always look a lot like the failed efforts of the past. 

Industry leaders would be wise to remember the words of antiracism activist Amanda Calhoun: “You’re either actively working against racism, and you’re actively supporting policies and behaviors that are working to rectify a racist system, or you are upholding a racist system.” 

There is no antidote to the problem of racism in medicine. But combining AI with a national commitment to root out bias in healthcare would be a good start, putting our medical system on a path toward antiracism.