An algorithm that a major medical center used to identify patients for extra care has been shown to be racially biased.
The algorithm screened patients for enrollment in an intensive care management program, which offered access to a dedicated hotline staffed by a nurse practitioner, help refilling prescriptions, and other services. The screening was meant to identify the patients who would benefit most from the program. But the white patients flagged for enrollment had fewer chronic health conditions than the black patients who were flagged.
In other words, black patients had to reach a higher threshold of illness before they were considered for enrollment. Care was not actually going to those people who needed it most.
Alarmingly, the algorithm was performing its task correctly. The problem was with how the task was defined.
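To see how a correctly-executed ranking can still produce unequal thresholds, consider a minimal synthetic sketch. The data and the specific relationship between the score and actual need are invented for illustration (they are not the paper's model): if the screening score systematically understates one group's need, ranking patients by that score flags members of that group only when they are sicker.

```python
def flag_for_program(patients, capacity):
    """Rank patients by score, descending, and flag the top `capacity`.

    This step works exactly as specified -- the bias enters through
    the score itself, not through the ranking.
    """
    ranked = sorted(patients, key=lambda p: p["score"], reverse=True)
    return ranked[:capacity]

# Synthetic patients. `need` stands in for number of chronic conditions.
# For group A the score tracks need; for group B it understates need by
# half (an assumed distortion, purely for illustration).
patients = (
    [{"group": "A", "need": n, "score": float(n)} for n in range(1, 11)]
    + [{"group": "B", "need": n, "score": 0.5 * n} for n in range(1, 11)]
)

flagged = flag_for_program(patients, capacity=10)

def mean_need(group):
    vals = [p["need"] for p in flagged if p["group"] == group]
    return sum(vals) / len(vals)

# Flagged group-B patients are, on average, sicker than flagged
# group-A patients: they had to clear a higher threshold of illness.
print("mean need, group A:", mean_need("A"))
print("mean need, group B:", mean_need("B"))
```

The point of the sketch is that no line of the ranking code is wrong; the inequity comes entirely from how the task (the score) was defined, which mirrors the article's observation.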
The findings, described in a paper just published in Science, point to a system-wide problem, says coauthor Ziad Obermeyer, a physician and researcher at the UC Berkeley School of Public Health. Similar screening tools are used throughout the country; according to industry estimates, algorithms of this type inform health decisions for 200 million people per year.