
Racist machines? Twitter’s photo preview problem reignites AI bias concern

Hindustan Times, New Delhi
Sep 21, 2020 06:11 AM IST

The problem was first noticed when education technology researcher Colin Madland posted about how the video-calling software Zoom cropped out the head of a black person on the other side of a call, seemingly unable to detect it as a human face.

Social media users on Sunday stumbled upon discrepancies in how Twitter’s photo previews display people with different skin tones, reopening a debate over whether computer programmes – particularly algorithms that “learn” – manifest or amplify real-world biases such as racism and sexism.

A Twitter spokesperson acknowledged the problem and said the company was looking into it. (REUTERS)

The problem was first noticed when education technology researcher Colin Madland posted about how the video-calling software Zoom cropped out the head of a black person on the other side of a call, seemingly unable to detect it as a human face. When Madland posted a second photo combination in which the acquaintance was visible, Twitter’s image-preview algorithm appeared to show only Madland’s own face in the crop.


Madland, by contrast, is white.

Soon, several users replicated Twitter’s seemingly discriminatory way of prioritising faces. In one of the most widely shared tweets, posted by cryptography engineer Tony Arcieri, Twitter showed only the face of Republican senator Mitch McConnell, who is white, as the preview of a combined photo that also included former US President Barack Obama, who is of partly African descent.

A Twitter spokesperson acknowledged the problem and said the company was looking into it. “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’re looking into this and will continue to share what we learn and what actions we take,” this person told HT.

Twitter’s chief design officer Dantley Davis replied to some of the tweets, noting variations in how the system behaved when the images were manipulated further. Davis also linked to an older blog post by Twitter engineers that detailed how the auto-cropping feature works. The feature uses neural networks, a machine learning approach that attempts to mimic how the human brain processes data.
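That blog post described the cropping as driven by saliency prediction: a neural network estimates which parts of an image a viewer is most likely to look at first, and the preview is then cut around the highest-scoring region. The snippet below is a minimal sketch of that general idea, assuming a hypothetical saliency model has already produced a per-pixel score map; it illustrates the technique, not Twitter’s actual code.

```python
import numpy as np

def crop_around_saliency(image, saliency_map, crop_h, crop_w):
    """Crop `image` to (crop_h, crop_w), centred on the most salient pixel.

    `saliency_map` is a 2-D array of per-pixel scores, assumed to come
    from some saliency-prediction model (hypothetical here); a higher
    score means the pixel is more likely to draw a viewer's eye.
    """
    h, w = saliency_map.shape
    # Find the single most salient point in the image.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # Centre the crop window on that point, clamping it to the image bounds.
    top = min(max(y - crop_h // 2, 0), max(h - crop_h, 0))
    left = min(max(x - crop_w // 2, 0), max(w - crop_w, 0))
    return image[top:top + crop_h, left:left + crop_w]
```

Because the crop simply follows the highest saliency score, any skew in which faces the model scores highly translates directly into which face appears in the preview.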

Multiple groups of researchers have found that such technologies, which usually rely on artificial intelligence, are prone to reflecting sociological biases in addition to flaws in design.

“Automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices - the coded gaze - of those who have the power to mould artificial intelligence,” said the authors of the Gender Shades project, which analysed 1,270 images to create a benchmark for how accurately three popular AI programmes classified gender.

The researchers used images of lawmakers from three African and three European countries, and found that all three programmes classified white male faces most accurately, followed by white women. Black women were the most likely to be misclassified, the MIT-led team reported in its 2018 paper.
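Benchmarks such as Gender Shades measure bias by reporting accuracy separately for each demographic group rather than as one aggregate number, which can mask large gaps. The sketch below, using made-up field names and toy data rather than the actual Gender Shades dataset, shows the shape of that per-group calculation.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    `records` is a list of (group, predicted, actual) tuples; the field
    names and data are hypothetical, but the per-group breakdown mirrors
    how subgroup benchmarks report results instead of a single overall
    accuracy figure.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy data: an overall accuracy of 75% hides a 0% rate for one group.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),  # misclassified
]
print(accuracy_by_group(records))
```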

“Whatever biases exist in humans enter our systems and even worse, they are amplified due to the complex sociotechnical systems, such as the Web. As a result, algorithms may reproduce (or even increase) existing inequalities or discriminations,” said a research review note by Leibniz University Hannover’s Eirini Ntoutsi and colleagues from multiple other European universities.

This, they added, could have implications in areas where AI-based technology such as facial recognition is used for law enforcement and health care.

The authors cited as an example an American crime risk-profiling tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which was found to be biased against African-Americans. “COMPAS is more likely to assign a higher risk score to African-American offenders than to Caucasians with the same profile. Similar findings have been made in other areas, such as an AI system that judges beauty pageant winners but was biased against darker-skinned contestants, or facial recognition software in digital cameras that overpredicts Asians as blinking.”


ABOUT THE AUTHOR

    Binayak reports on information security, privacy and scientific research in health and environment with explanatory pieces. He also edits the news sections of the newspaper.
