Over the years, artificial intelligence has found its way into many different areas of work and daily life.
“A happy woman and a serious man embracing in a park.” It is remarkable that your computer can already describe your travel photos this way. And for the many people who browse the internet using screen readers, it is also a near-essential technology. All of it thanks to artificial intelligence.
Of course, machines are not perfect; sometimes they fail. But lately we have been discovering something worrying: sometimes computers fail the way humans do. Shown a woman and a man with the same expression, an AI system may tend to believe that she is happy and he is in a bad mood. We call these kinds of errors biases, and they include racist, sexist, and ableist tendencies… tendencies that can end up doing real harm to people.
The face as a mirror of the soul
To study these biases we are going to focus on one specific application: the automatic recognition of emotions in photographs.
First, we need to make it clear to the computer what we mean by “emotion”. The most widely used classification is based on six basic emotions: fear, sadness, happiness, anger, disgust and surprise. It was proposed by the psychologist Paul Ekman in the 1970s.
These emotions have been shown to be more or less universal, recognizable by everyone. However, it has also been shown that people recognize them somewhat better in others of the same social group, gender, or age. Not all of us express ourselves in exactly the same way, nor do we read other people’s expressions in the same way. Even without realizing it, we are biased.
These differences appear in many contexts, and sometimes they harden into stereotypes and prejudices. For example, we expect women to be happy rather than angry, and the opposite of men. And this is reflected on the internet, where photos tend to show, above all, smiling women.
On the other hand, for an artificial intelligence system to learn to distinguish these emotions, we also need to think about how people understand them. In reality, the face is only one piece of a very complex puzzle: gestures, posture, and our words also contribute. Although work is under way to handle all these modalities with artificial intelligence, the most popular and versatile approach is recognition based on photos of faces.
How does an artificial intelligence learn?
Creating bias-free artificial intelligence is quite a challenge, and it all starts with how we make this technology “learn”. The field of artificial intelligence dedicated to this learning is called machine learning. Although there are many different forms of learning, the most common is supervised learning.
The idea is simple: we learn from examples, and the artificial intelligence needs to know, for each example, the answer we expect. To learn to recognize emotions, we need a large set of photos of faces showing different emotions: happy, sad, and so on. The key is that for each photo, we must know which emotion it shows.
Then we feed the photos and their associated emotions to the artificial intelligence. Through a learning algorithm, the system learns “on its own” to relate the photos to the emotions they show. Picture by picture, we ask it to predict an emotion: if it is right, we move on; if it is wrong, we adjust the model to correct that case. Little by little, it learns and fails on fewer and fewer examples. If you think about it, it is not that different from how we humans learn.
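To make this less abstract, here is a minimal sketch of that loop in Python with scikit-learn. Everything in it is a stand-in: random vectors play the role of face photos and the labels are invented, so it shows the mechanics of learning from labeled examples rather than a real emotion recognizer.

```python
# Minimal sketch of supervised learning. The "photos" below are random
# feature vectors and the labels are random: synthetic stand-ins only.
import numpy as np
from sklearn.linear_model import SGDClassifier

EMOTIONS = ["fear", "sadness", "happiness", "anger", "disgust", "surprise"]

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 128))          # stand-in for 600 face photos
y = rng.integers(0, len(EMOTIONS), 600)  # the known emotion for each photo

# With perceptron loss, the weights only change when a prediction is wrong:
# "if it is right, we move on; if it is wrong, we adjust the model".
model = SGDClassifier(loss="perceptron", learning_rate="constant", eta0=1.0)
for features, label in zip(X, y):        # picture by picture...
    model.partial_fit(features.reshape(1, -1), [label],
                      classes=np.arange(len(EMOTIONS)))

print("prediction:", EMOTIONS[model.predict(X[:1])[0]])
```

With real data, this same loop, repeated over the whole set many times, is what gradually turns raw examples into a working recognizer.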
As you can see, examples are essential in this process. Although there are advances that allow learning from few examples, or from examples with errors, a large and well-labeled set of examples is vital to achieving good artificial intelligence.
Unfortunately, in practice it is common to have examples with errors. In our case, these range from faces labeled with the wrong emotion to photos with no faces at all, or with animal faces. But there are other problems, sometimes subtler and more worrying: racism, sexism, ableism…
When the algorithms go wrong
As you can imagine, if our examples are biased, the machine will learn and reproduce those biases. Sometimes it will even amplify them. For example, if our photos contain only angry dark-skinned people and happy light-skinned people, the artificial intelligence is very likely to end up confusing skin color with mood: it will tend to predict anger whenever it sees a dark-skinned face.
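We can reproduce this failure in miniature. In the toy sketch below, every feature is made up and nothing resembles real face data: the training set confuses skin tone with emotion, and the resulting model answers “anger” for a dark-skinned face even when the expression cue clearly says “happiness”.

```python
# Toy demonstration of a spurious correlation. Feature 0 is a crude
# "skin tone" value, feature 1 an "expression" cue; in this biased
# training data the tone always matches the label.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                       # 0 = "happiness", 1 = "anger"
skin_tone = y.astype(float)                     # tone perfectly mirrors the label
expression = y + rng.normal(scale=0.6, size=n)  # the real cue, but noisier
X = np.column_stack([skin_tone, expression])

# A depth-1 tree must pick a single feature; the noiseless skin tone
# "predicts" the label better than the noisy expression, so it wins.
model = DecisionTreeClassifier(max_depth=1).fit(X, y)

# A dark-skinned but clearly happy face: the expression cue says
# "happiness", yet the model keys on skin tone and answers "anger".
test_face = np.array([[1.0, 0.0]])
print("anger" if model.predict(test_face)[0] == 1 else "happiness")
```

The model is not malicious; it simply latched onto the easiest signal the data offered, which happened to be the wrong one.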
Unfortunately, this is not just a theory. It has already been shown, for example, that facial analysis systems built to recognize gender fail more often for black women than for white men, and that they regularly make mistakes with people who are trans or non-normative in appearance.
One of the most notorious examples came in 2018, when an artificial intelligence system mistakenly identified 28 members of the US Congress as criminals. Of the politicians misidentified, 40% were people of color, even though they made up only 20% of Congress. All this because the system had been trained mostly on white faces, and confused people of color with one another.
Detecting and reducing these biases is a very active field of research with great social impact. Many biases are subtle and tied to several demographic factors at once, which makes the analysis difficult. In addition, every phase of learning must be reviewed, from data collection and measurement to the final application, and usually different people work on each phase.
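A first, very simple check is to measure how often the system fails for each demographic group separately. Assuming we have the model’s predictions, the true labels, and a group attribute for each test photo (the values below are placeholders), the comparison fits in a few lines:

```python
# Minimal bias audit sketch: compare error rates across groups.
# labels, predictions and group would come from a real test set.
import numpy as np

labels      = np.array([0, 1, 1, 0, 1, 0, 1, 1])  # true emotions
predictions = np.array([0, 1, 0, 0, 0, 0, 1, 0])  # model output
group       = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    error_rate = np.mean(predictions[mask] != labels[mask])
    print(f"group {g}: error rate {error_rate:.0%}")

# A large gap between groups (here B fails twice as often as A)
# is exactly the kind of disparity described above.
```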
One database to include them all
Let’s go back to emotion recognition. Many labeled databases of emotions are available on the internet. Unfortunately, the largest ones often also carry strong sex/gender, race, and age biases.
Little by little, we need to develop diverse and balanced databases to work with. That is, we need to include all kinds of people in our databases, and all of them must be well represented in each emotion.
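One concrete way to audit this balance, assuming each entry in a database records an emotion label and some demographic attributes (the records below are illustrative), is simply to count every combination:

```python
# Sketch of a balance check over a labeled face database.
# The records are illustrative placeholders.
from collections import Counter

records = [
    {"emotion": "happiness", "gender": "woman"},
    {"emotion": "happiness", "gender": "woman"},
    {"emotion": "anger",     "gender": "man"},
    {"emotion": "happiness", "gender": "man"},
    {"emotion": "anger",     "gender": "woman"},
]

counts = Counter((r["emotion"], r["gender"]) for r in records)
for (emotion, gender), n in sorted(counts.items()):
    print(f"{emotion:>10} / {gender}: {n}")

# Every (emotion, group) cell should be well populated; empty or
# tiny cells signal the kind of imbalance discussed above.
```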
Finally, if we want to collect data without bias, we have to think about the whole process. Every phase, from data collection to the final testing of an artificial intelligence, must be carried out carefully and accessibly. And people who can recognize and point out possible biases must be involved in all of them.
And all this for what?
This whole business of recognizing emotions may sound abstract, but it already has important applications. The most common is assistive technology, such as automatic photo description for visually impaired people. It is also already used in domestic robots, and it can even be applied in medicine, where it has been possible to automatically recognize pain in newborns, who do not always express it by crying.
In any case, the study of biases in artificial intelligence goes beyond emotions. The technologies we develop have a huge impact on people’s lives. We have a moral duty to make sure that they are fair and that their impact on the world is positive.
We want to build an artificial intelligence that we can trust, that makes us smile.