Twitter has announced the results of an open competition to find algorithmic bias in its photo-cropping system. The company disabled automatic photo cropping in March, after experiments by Twitter users last year suggested that the system favored white faces over Black ones. Twitter then launched an algorithmic bug bounty to analyze the problem more closely.
The competition, organized with the support of DEF CON’s AI Village, confirmed those early findings. The top-ranked entry found that Twitter’s cropping algorithm favors faces that are “slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits.” The second- and third-place entries showed that the system was biased against people with white or gray hair, suggesting age discrimination, and that it favors English over Arabic script in images.
Presenting these results at DEF CON 29, Rumman Chowdhury, director of Twitter’s META team (Machine Learning Ethics, Transparency, and Accountability), commended the participants for showing the real-world effects of algorithmic bias.
“When we think about biases in our models, it’s not just about the academic or the experimental […] but how that also works with the way we think in society,” said Chowdhury. “I use the phrase ‘life imitating art imitating life.’ We create these filters because we think that’s what beautiful is, and that ends up training our models and driving these unrealistic notions of what it means to be attractive.”
The first-place entry, winner of the top prize of $3,500, came from Bogdan Kulynych, a graduate student at EPFL, a Swiss research university. Kulynych used an AI program called StyleGAN2 to generate a large number of realistic faces, which he varied by skin color, feminine versus masculine facial features, and slimness. He then fed these variants into Twitter’s photo-cropping algorithm to find which ones it favored.
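The probing approach described above can be sketched in a few lines: generate pairs of variants that differ along a single attribute, score each with the cropping model, and count which one wins. Since Twitter’s actual saliency model is not reproduced here, a deliberately biased toy scorer stands in for it; the function names (`crop_choice`, `preference_rate`, `toy_saliency`) and the attribute encoding are illustrative assumptions, not Kulynych’s actual code.

```python
import random

def crop_choice(variants, saliency):
    """Given variants of the same synthetic face, return the index of the
    variant with the highest saliency score, i.e. the one the cropping
    model would keep in frame."""
    scores = [saliency(v) for v in variants]
    return scores.index(max(scores))

def preference_rate(pairs, saliency, target=0):
    """Fraction of pairs in which the model picks the `target` variant.
    A rate far from 0.5 over many generated pairs signals a bias along
    the attribute that distinguishes the two variants."""
    wins = sum(1 for pair in pairs if crop_choice(pair, saliency) == target)
    return wins / len(pairs)

# Illustrative stand-in for the real saliency model: each "image" is just a
# dict of attributes, and this toy scorer is deliberately biased toward
# lighter skin tones (lower values) so the probe has something to detect.
def toy_saliency(face):
    return 0.6 - 0.3 * face["skin_tone"] + random.gauss(0, 0.05)

random.seed(0)
pairs = [({"skin_tone": 0.2}, {"skin_tone": 0.8}) for _ in range(1000)]
rate = preference_rate(pairs, toy_saliency, target=0)
print(f"lighter-skinned variant chosen in {rate:.0%} of pairs")
```

With real images, the same comparison would be run over StyleGAN2-generated faces and the model’s actual crop coordinates rather than a scalar score, but the statistical logic of the probe is the same.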
As Kulynych notes in his summary, these algorithmic biases amplify biases in society, literally cropping out “those who do not meet the algorithm’s preferences of weight, age, and skin color.”
Such biases are more pervasive than you might think. Another entrant, Vincenzo di Cicco, who received a special mention for his innovative approach, showed that the image-cropping algorithm also favors light-skinned emoji over dark-skinned emoji. The third-place entry, from Roya Pakzad, founder of the technology advocacy group Taraaz, showed that the bias extends to written text. Pakzad’s entry compared memes using English and Arabic script, and showed that the algorithm regularly crops images to highlight the English text.
While it may seem discouraging that the results of Twitter’s competition confirm how pervasively societal bias seeps into algorithmic systems, they also show, as Chowdhury noted, how tech companies can address these problems by opening their systems to outside scrutiny.
Twitter’s open approach stands in contrast to the responses of other tech companies confronted with similar problems. When researchers led by MIT’s Joy Buolamwini found racial and gender bias in Amazon’s facial recognition algorithms, for example, the company mounted a substantial campaign to discredit those involved, calling their work “misleading” and “false.” After battling over the findings for months, Amazon eventually relented and placed a temporary ban on law enforcement use of those same algorithms.
Patrick Hall, an AI researcher working on algorithmic discrimination and a judge of Twitter’s competition, stressed that such biases exist in all AI systems and that companies must work proactively to find them. “AI and machine learning are just the Wild West, no matter how skilled you think your data science team is,” said Hall. “If you’re not finding your bugs, or bug bounties aren’t finding your bugs, then who is finding your bugs? Because you definitely have bugs.”