It might sound strange, but facial recognition technology can be racially biased. Recent research from the MIT Media Lab puts a seal of confirmation on something long suspected: facial recognition systems are subject to biases rooted in the data sets they are given and the conditions under which their algorithms are created.

Joy Buolamwini, a researcher at the MIT Media Lab, recently built a dataset of 1,270 faces, using the faces of politicians selected from countries ranked highly for gender parity, that is, countries with a significant number of women in public office. With this dataset, Buolamwini tested the accuracy of three facial recognition systems, made by Microsoft, IBM, and Megvii of China. The results, originally reported in The New York Times, showed that the accuracy of gender identification depended on a person's skin color.

The results showed that gender was misidentified in less than one percent of lighter-skinned males, in up to seven percent of lighter-skinned females, in up to 12 percent of darker-skinned males, and in up to 35 percent of darker-skinned females.

In a paper detailing the findings, co-authored with Microsoft researcher Timnit Gebru, Buolamwini wrote: “Overall, male subjects were more accurately classified than female subjects replicating previous findings, and lighter subjects were more accurately classified than darker individuals. An intersectional breakdown reveals that all classifiers performed worst on darker female subjects”.
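To see what such an intersectional breakdown amounts to in practice, here is a minimal sketch that tallies misclassification rates separately for each skin-type and gender subgroup. The field names and sample records are illustrative assumptions for this article, not the authors' actual code or data; the paper itself groups subjects into lighter and darker skin types using the Fitzpatrick scale.

```python
# Hypothetical sketch of an intersectional error-rate breakdown.
# Records, field names, and labels are illustrative assumptions only.
from collections import defaultdict

# Each record: the classifier's predicted gender, the true gender,
# and a binary skin-type group ("lighter" or "darker").
records = [
    {"predicted": "male", "actual": "male", "skin": "lighter"},
    {"predicted": "male", "actual": "female", "skin": "darker"},
    {"predicted": "female", "actual": "female", "skin": "darker"},
    # ... more evaluation records ...
]

totals = defaultdict(int)  # faces per (skin, gender) subgroup
errors = defaultdict(int)  # misclassified faces per subgroup

for r in records:
    group = (r["skin"], r["actual"])
    totals[group] += 1
    if r["predicted"] != r["actual"]:
        errors[group] += 1

# Report the error rate for each intersectional subgroup,
# e.g. darker-skinned females vs. lighter-skinned males.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group[0]} {group[1]}: {rate:.1%} misclassified "
          f"({errors[group]}/{totals[group]})")
```

Reporting error rates per subgroup, rather than a single overall accuracy figure, is what exposes the disparity: a system can look highly accurate on average while performing far worse on darker-skinned female faces.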

This is hardly the first time facial recognition technology has been shown to be inaccurate. More and more evidence points to the need for diverse data sets, as well as diversity among the people who create and deploy these technologies, if the algorithms are to accurately recognize individuals regardless of race or other identifiers.

Two years ago, The Atlantic reported on how facial recognition technology used for law enforcement purposes may “disproportionately implicate African Americans.” To date, this has remained one of the larger concerns around this still-emerging technology: innocent people could become suspects in crimes because of inaccurate technology, something that Buolamwini and Gebru also cover in their paper. They cite a year-long investigation across 100 police departments which revealed that “African-American individuals are more likely to be stopped by law enforcement and be subjected to face recognition searches than individuals of other ethnicities.”

The Atlantic also pointed out that other groups have found in the past that facial recognition algorithms developed in Asia were more likely to accurately identify Asian people than white people, while algorithms developed in parts of Europe and the US identified white faces better.

To be clear, the algorithms aren't intentionally biased, but mounting research supports the notion that far more work needs to be done to limit these biases. As Buolamwini wrote: “Since computer vision technology is being utilized in high-stakes sectors such as healthcare and law enforcement, more work needs to be done in benchmarking vision algorithms for various demographic and phenotypic groups”.
