Danielle Sinay
Aug 11, 2021
Getty Images/iStockphoto
A HackerOne contest challenged cybersecurity experts to expose the “harms” of Twitter’s algorithmic bias, and contestants did just that.
As reported by WIRED, participants found that the AI behind Twitter’s photo-cropping algorithm demonstrates strong bias with regard to age, weight, and language, favouring English and Latin script over Arabic.
The top entry, from Bogdan Kulynych, a computer security master’s candidate at the Swiss Federal Institute of Technology Lausanne, demonstrated that the platform’s image-cropping algorithm favours people who look thinner and younger. Kulynych tested the mechanism by generating a variety of faces with deepfake technology, uploading them to the app, and analysing how the cropping tool responded to each.
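The released cropping code isn’t reproduced here, but the shape of the probing approach can be sketched briefly. In the minimal Python sketch below, which is an illustration and not Kulynych’s actual code, `saliency_score` is a hypothetical stand-in for the cropping model Twitter released for the contest, the luminance placeholder exists only so the script runs end to end, and the file names are invented:

```python
# Hedged sketch of the probing approach described above; not Kulynych's code.
# `saliency_score` stands in for Twitter's released cropping model, which
# assigns images a "saliency" used to pick a crop's focal point.
from PIL import Image
import numpy as np

def saliency_score(image: Image.Image) -> float:
    # Placeholder (mean luminance) so the sketch runs; swap in the real model.
    return float(np.asarray(image.convert("L")).mean())

def saliency_gap(base_path: str, variant_path: str) -> float:
    """Score a generated face against an edited variant (e.g. one made to
    look younger or thinner). A consistently positive gap across many pairs
    suggests the cropper favours the edited attribute."""
    return saliency_score(Image.open(variant_path)) - saliency_score(Image.open(base_path))

# Hypothetical file names: deepfake-generated faces paired with edited variants.
pairs = [("face_01.png", "face_01_younger.png"),
         ("face_02.png", "face_02_younger.png")]
print(f"mean saliency gap: {np.mean([saliency_gap(b, v) for b, v in pairs]):.4f}")
```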
“The winners of Twitter's algorithmic bias bug bounty were published on Sunday. They had to find unreported bias on…” (Chronicles of the automated society, via Twitter, 10 August 2021: https://t.co/kuKIj0p1FG)
“Basically, the more thin, young, and female an image is, the more it’s going to be favoured,” Patrick Hall, a principal scientist at AI consulting company BNH and a judge for the contest, told WIRED. Other contestants discovered that the algorithm displayed bias against individuals with white hair, and another exposed its preference for the English language and Latin text over Arabic script.
Ariel Herbert-Voss, a security researcher at OpenAI who also served as a judge for the contest, said that the algorithm’s apparent biases mirror those of the humans who originally supplied its training data. She added that in-depth investigation of the tool would greatly assist product teams in eliminating such issues. “It makes it a lot easier to fix that if someone is just like ‘Hey, this is bad,’” she explained.
The “algorithmic bias bounty challenge,” held last week at a computer security conference, thus suggests that letting independent researchers examine algorithms for faulty behaviour can benefit the companies using them, helping them find problems before they cause actual harm.
“The winner was @hiddenmarkov, who built a tool to optimize pictures to increase their ‘saliency’ (i.e. likelihood n…” (Chronicles of the automated society, via Twitter, 10 August 2021: https://t.co/ibJTi2PWGv)
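“Optimising a picture to increase its saliency” maps onto a standard adversarial technique: if the saliency model is differentiable, gradient ascent on the pixels pushes an image toward whatever the cropper favours. The sketch below illustrates that general idea under the assumption of a hypothetical PyTorch module `model` returning a scalar saliency score; it is not the contest winner’s actual tool:

```python
# Gradient-ascent sketch: nudge an image's pixels to raise a (hypothetical)
# differentiable saliency model's score. Illustrative only.
import torch

def maximize_saliency(model: torch.nn.Module, image: torch.Tensor,
                      steps: int = 100, lr: float = 0.01) -> torch.Tensor:
    """image: a (C, H, W) tensor with values in [0, 1]."""
    x = image.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = model(x.unsqueeze(0)).squeeze()
        (-score).backward()      # negate so the optimiser ascends on saliency
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)   # keep pixels in a valid image range
    return x.detach()
```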
“It’s really exciting to see this idea be explored, and I’m sure we’ll see more of it,” Amit Elazari, director of global cybersecurity policy at Intel and a lecturer at UC Berkeley, told WIRED. She believes that investigating AI bias “can benefit from empowering the crowd.”
Last year, a student on Twitter drew attention to the app’s photo-cropping algorithm, pointing out its apparent preference for white faces and women. Other Twitter users quickly uncovered further examples of the technology demonstrating gender and racial bias.
Twitter provided the code for the image-cropping algorithm to all participants of last week’s contest, offering prizes to teams that found evidence of harmful algorithmic bias in the app.
“It is telling that the most forward-thinking and exciting initiative for external/independent algo auditing (ever)…” (Emma Lurie, via Twitter, 10 August 2021: https://t.co/M07Rg31l8v)
Hall thinks other companies will soon follow suit: “I think there is some hope of this taking off because of impending regulation, and because the number of AI bias incidents is increasing.”