Local NPR for the Cape, Coast & Islands 90.1 91.1 94.3

Fixing Bias in Algorithms Is Possible, and This Scientist Is Doing It

Data for Asian patients had a high error rate, which could lead to inadequate care.
Eduardo García Cruz

Algorithms and artificial intelligence are playing ever larger roles in our daily lives – from Google searches and Facebook feeds to self-driving cars and sentencing convicted criminals. And it’s increasingly clear that the decisions algorithms make are often biased, even outright racist and discriminatory.
Irene Chen wants to fix that. Chen is a graduate student in the MIT Computer Science and Artificial Intelligence Lab, where her research focuses on machine learning in healthcare and on making algorithms fairer. She was working with an algorithm intended to predict who needs the highest level of attention in an intensive care unit.

“I was very surprised to find out that the Asian population was having a higher error than the rest of the population,” Chen told Living Lab Radio. “As an Asian American myself I thought, ‘I'm not biased, I'm not trying to make this discriminatory, what's going on here?’”

It turns out that Asian patients made up only 3 percent of the data set, whereas white patients made up 50 to 60 percent. That underrepresentation is what drove the higher error rate for Asian patients.
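The effect of that kind of imbalance is easy to reproduce. Below is a minimal synthetic sketch, not Chen's actual data or model: the group sizes, two-feature setup, and logistic model are all assumptions for illustration. It trains one classifier on data where a minority group is only about 3 percent of the sample, and where that group's outcomes follow a different pattern, then measures error per group:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, w):
    # Labels follow a logistic model with group-specific weights w.
    X = rng.normal(size=(n, 2))
    p = 1 / (1 + np.exp(-(X @ w)))
    return X, (rng.random(n) < p).astype(int)

# The two groups have *different* feature-outcome relationships, so a
# single model fit mostly on the majority group misfits the minority.
w_major = np.array([2.0, 0.0])
w_minor = np.array([0.0, 2.0])

X_maj, y_maj = make_group(1940, w_major)  # ~97% of the training data
X_min, y_min = make_group(60, w_minor)    # ~3%, like the 3% share above

model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluate on fresh samples from each group separately.
Xt_maj, yt_maj = make_group(2000, w_major)
Xt_min, yt_min = make_group(2000, w_minor)
err_maj = np.mean(model.predict(Xt_maj) != yt_maj)
err_min = np.mean(model.predict(Xt_min) != yt_min)
print(f"majority error: {err_maj:.2f}, minority error: {err_min:.2f}")
```

The model's overall accuracy looks fine because the majority group dominates the average, while the error on the small group is far higher: exactly the pattern Chen describes.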

“The algorithm might say that the Asian patient is not going to die and then the hospital will not allocate resources to them,” Chen said. “But as a result, because the algorithm is wrong, the Asian patient might have been extremely high-risk and end up dying due to lack of resources.”

Chen says it would be great to go out and collect more data on Asian patients, but that’s not feasible for a computer scientist. So Chen and her colleagues made the assumption that additional data would be similar to the data that they already had.

“In the medical setting, that's actually a reasonable assumption,” she said. “Because a lot of times the limiting agent is the clinician who doesn't want to label it, or it's hard to get different providers to give us the data. But in the end, it'll be the same type of patients, same population of patients.”
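Chen's published method is more involved than this, but the core assumption can be sketched. If additional minority-group data would look like the minority-group data already on hand, we can approximate "collecting more" by replicating the existing minority samples until the groups are balanced. This oversampling is a simplified stand-in for her technique, and all numbers and names here are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, w):
    # Labels follow a logistic model with group-specific weights w.
    X = rng.normal(size=(n, 2))
    p = 1 / (1 + np.exp(-(X @ w)))
    return X, (rng.random(n) < p).astype(int)

w_major, w_minor = np.array([2.0, 0.0]), np.array([0.0, 2.0])
X_maj, y_maj = make_group(1940, w_major)  # majority group
X_min, y_min = make_group(60, w_minor)    # underrepresented group

# Baseline: train on the imbalanced data as-is.
base = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                np.concatenate([y_maj, y_min]))

# Simulate "more data" by replicating existing minority samples --
# valid only under the assumption that new data would be drawn from
# the same population as the data already collected.
reps = len(X_maj) // len(X_min)
X_bal = np.vstack([X_maj, np.tile(X_min, (reps, 1))])
y_bal = np.concatenate([y_maj, np.tile(y_min, reps)])
balanced = LogisticRegression().fit(X_bal, y_bal)

# Compare minority-group error before and after balancing.
Xt_min, yt_min = make_group(2000, w_minor)
err_before = np.mean(base.predict(Xt_min) != yt_min)
err_after = np.mean(balanced.predict(Xt_min) != yt_min)
print(f"minority error: {err_before:.2f} -> {err_after:.2f}")
```

The balanced model gives up a little accuracy on the majority group in exchange for a large reduction in error on the underrepresented group, which is the accuracy-fairness trade the article describes.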

Using this method, they were able to improve both accuracy and fairness.

“And that was a really exciting result to find,” she said.

That approach won’t always work, Chen cautioned. Sometimes you will need more data or different types of data. And sometimes the mathematical model will be at fault. But Chen says that algorithms are providing life-saving advances in healthcare that are worth pursuing.

“The results that we’ve shown from healthcare algorithms are so powerful that we really do need to see how we could implement those carefully, safely, robustly, and fairly,” she said.

Web post produced by Elsa Partan.

Elsa Partan is a producer for Living Lab Radio. She first came to the station in 2002 as an intern and fell in love with radio. She is a graduate of Bryn Mawr College and the Columbia University Graduate School of Journalism. From 2006 to 2009, she covered the state of Wyoming for the NPR member station Wyoming Public Media in Laramie. She was a newspaper reporter at The Mashpee Enterprise from 2010 to 2013. She lives in Falmouth with her husband and two daughters.