We are living in “unprecedented times,” and not only because of the global pandemic. In recent weeks, tens of thousands of people have marched in cities and small towns across America and around the world, in demonstrations and peaceful protests condemning racism. It’s a topic that we’re all thinking about, especially as it relates to our own social biases.
Would you be surprised to learn that racism affects not only human society, but that applications using Artificial Intelligence (AI) are now under the spotlight for their biases? It has been demonstrated that AI applications are often built on biased data, and as a consequence their decisions may be unfair too. As our society relies more and more on AI-informed decisions, this could affect certain social groups in very unfair ways.
Twitter users have recently pointed out controversial situations like these, showing how an algorithm that creates high-definition images of faces from low-quality photos converts Black or Asian people into white ones.
And it’s perhaps because of these very types of biases that some of the biggest global companies, such as IBM, Microsoft and Amazon, have stopped letting police use their facial recognition systems, knowing that this type of technology is strongly biased and that decisions made with it could be unfair.
But as we work in fashion technology, we may be tempted to think that this type of bias issue doesn’t play a role in our work. Well, it turns out that’s not the case. Let’s talk about biases in fashion.
The rules that have traditionally driven the fashion industry created strong biases in society, not only from a gender perspective (skirts and dresses are traditionally worn by women, suits by men), but also in clothing worn by different cultures (e.g. kimonos in Japanese culture).
Subset of skirt products labelled with gender
However, as Gen Z grows up, these rules are gradually changing, driven by globalization and the embrace of gender fluidity. In fact, many new brands are born with gender fluidity as part of their DNA, and large traditional brands are releasing new collections on the same theme.
Sources: The Independent and The Hollywood Reporter
But…are our AI algorithms ready for this shift?
Tech bias & the fashion case
There are multiple techniques for creating AI algorithms, depending on the type of problem you are trying to solve. One of the most popular is supervised learning, which consists of training models much as we humans learn: by example and by trial and error.
These types of algorithms use a large number of examples, carefully labeled in advance, in the domain the model is learning (e.g. products with different sleeve lengths). During training, the algorithm tries to predict a label for each data instance. Then, after evaluating how good the predictions were, the training method adjusts the system’s parameters. This process is repeated many times, until the system reaches the highest predictive accuracy it can on products it has never seen before.
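The predict–evaluate–adjust loop described above can be sketched in a few lines. This is a minimal illustration, assuming toy synthetic data and a single-feature logistic regression trained by gradient descent, not any production fashion model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled data: one feature, label is 1 when the feature is positive.
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0).astype(float)

w, b = 0.0, 0.0   # model parameters, adjusted during training
lr = 0.5          # learning rate

# Training loop: predict a label for each instance, measure the error,
# adjust the parameters, and repeat.
for _ in range(500):
    z = X[:, 0] * w + b
    p = 1.0 / (1.0 + np.exp(-z))          # predicted probability of label 1
    grad_w = np.mean((p - y) * X[:, 0])   # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y)
```

After enough iterations the parameters settle on values that classify the training examples almost perfectly; real systems do the same thing with millions of parameters and labelled product images instead of one weight and a toy feature.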
But what happens when the training data contains information correlated with the labels but unrelated to what we are trying to learn? What if, for example, all the long-sleeved products belonged to kids and all the short-sleeved products belonged to men? Then our system may learn to predict sleeve length based on characteristics not at all related to the sleeve, and even if the resulting predictions seemed fine, such a system would fail in the real world.
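This failure mode is easy to reproduce. The sketch below, on invented data, gives a model two features: a weak "real" signal and a hypothetical spurious attribute (say, an age-group flag) that perfectly matches the label in the training set. The model latches onto the shortcut, looks great on training data, and collapses when the correlation no longer holds:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400

# Training set: feature 0 is a noisy "true" signal, feature 1 is a
# spurious attribute that happens to equal the label exactly.
y_train = rng.integers(0, 2, size=n)
true_feat = y_train + rng.normal(scale=1.5, size=n)  # weak real signal
spurious = y_train.astype(float)                     # perfect shortcut
X_train = np.column_stack([true_feat, spurious])

model = LogisticRegression().fit(X_train, y_train)
train_acc = model.score(X_train, y_train)

# Test set: the shortcut no longer holds (the spurious flag is flipped),
# so a model that relied on it fails badly.
y_test = rng.integers(0, 2, size=n)
X_test = np.column_stack([y_test + rng.normal(scale=1.5, size=n),
                          1.0 - y_test])
shortcut_acc = model.score(X_test, y_test)
```

The gap between `train_acc` and `shortcut_acc` is exactly the gap between "the predictions seemed ok" and "the system fails in the real world."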
This very thing is happening not only at Amazon, Google, and IBM, but also in many fashion AI systems. So, after realizing that our own systems were biased, we researched various algorithms from other companies, and the results are quite interesting. Most of them showed ethnicity issues: for example, detecting a traditional white-dress-and-suit wedding from an image with a very high score, while having a lot of difficulty correctly recognizing Chinese or Indian weddings (where red is the traditional color).
The same thing happens with gender: images with heels or skirts will probably be tagged as female, even when worn by men.
In order to evaluate the problem and understand how it impacts AI systems, there is an emerging research area in Machine Learning called “eXplainable Artificial Intelligence” (XAI). It focuses not only on obtaining good results from a system, but also on how those results were obtained. In other words, it shows precisely what your AI system has learned in order to predict one thing or another.
There are various general-purpose libraries that help explain the output of an AI system, like LIME or SHAP, and others that put special focus on fairness and bias mitigation, like FairML, Google’s What-If Tool or IBM’s AI Fairness 360. What is certain is that XAI is here to stay, and further research will uncover new tools to help us understand AI systems better.
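LIME and SHAP each have their own APIs, but the core idea, asking which inputs a trained model actually relies on, can be illustrated with plain scikit-learn. The sketch below uses permutation importance on invented data with one genuinely relevant feature and one irrelevant one:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 500

# Toy data: column 0 fully determines the label, column 1 is noise.
X = rng.normal(size=(n, 2))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model depends on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
```

Here `result.importances_mean[0]` should dwarf `result.importances_mean[1]`. In a biased fashion classifier, this kind of check is what reveals that, say, a sleeve-length model is actually leaning on skin tone or background cues.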
After evaluating our biases, we need to remove the bias from our training data and train our systems on neutral information. Just as unlearning our own biases is an ongoing process, this is also tricky for AI systems, because sometimes our data is not rich enough in the underrepresented areas. Nevertheless, there are Machine Learning techniques to overcome these issues:
- Generating synthetic images and products for underrepresented groups using Generative Adversarial Networks (GANs)
- Making our models pay more attention to underrepresented data, even when it is much less frequent than other groups
- Utilizing advanced neural network techniques like adversarial debiasing, which evaluates bias during training and mitigates it by penalizing biased predictions
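The second technique in the list above, paying more attention to underrepresented data, is the simplest to demonstrate. A minimal sketch on invented, heavily imbalanced data, using scikit-learn’s `class_weight="balanced"` option to up-weight the rare class:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Imbalanced toy set: the underrepresented class 1 has only 5% of examples.
n_major, n_minor = 950, 50
X = np.vstack([rng.normal(-1.0, 1.0, size=(n_major, 1)),
               rng.normal(+1.0, 1.0, size=(n_minor, 1))])
y = np.array([0] * n_major + [1] * n_minor)

plain = LogisticRegression().fit(X, y)
# class_weight="balanced" scales each example's weight inversely to its
# class frequency, so rare examples count more during training.
weighted = LogisticRegression(class_weight="balanced").fit(X, y)

minor_mask = y == 1
plain_recall = plain.predict(X[minor_mask]).mean()
weighted_recall = weighted.predict(X[minor_mask]).mean()
```

The unweighted model, optimizing overall accuracy, largely ignores the rare class; the reweighted one recovers most of it. Adversarial debiasing pursues the same goal with a second network that penalizes biased predictions during training (IBM’s AI Fairness 360 ships an implementation).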
In the same way that our society is undergoing a revolution in human rights, social fairness and equality, our AI systems and their applications need attention to avoid bias. Let’s all strive to make our data fair and unbiased, shall we?