My children are not AI – #2 They both use stereotypes.

Tuesday, October 17, 2023 ChildrenAI

AI and my children are both directly impacted by stereotypes and unconscious bias. Let's see how.

One day, my then 4-year-old son pointed at a Latina woman on the bus, dressed perfectly normally, and told me out loud, "Look mom, she looks like a cleaning lady!" While managing this embarrassing moment, I found it pretty interesting to ask how my innocent little boy had come to believe that Latina women are all cleaning ladies. In fact, he had been exposed to very little data about Latina people or cleaners in his whole life. Since our cleaning lady at the time was Latina, and he did not know any other Latina women, he made his own deduction and told me about it proudly, without any bad intent.

No one can really blame him for that; we are all affected by unconscious bias. These are mechanisms human beings have developed over time to survive. It is the same mechanism Homo sapiens used to process the volume and complexity of information they perceived from their environment, so they could react quickly and save their lives, e.g., wolf -> danger -> run. Today, the dangers we face are more complex, and we have learned to think logically and deliberately to overcome them. However, we all still carry a number of unconscious brain mechanisms that make us look for patterns, categorise, and simplify the world around us. Over the years, stereotypes developed by individuals were reinforced and spread within and across communities. More specifically, stereotypes developed by the dominant class (i.e. white males) have become prominent in our modern society. This is why racism and sexism are so hard to fight today: we have all inherited and been exposed to these stereotypes, which are constantly reinforced by our own brains.

AI, as opposed to my son, has been exposed to loads and loads of data. AI is not human and has not developed survival mechanisms to simplify its reality; it takes it as it is. So how could AI have developed unconscious bias too? Simply because the data it is trained on is generated by humans, and hence already biased. Take, for example, AI trained on data from the internet. A fairly old article in The Guardian warned the public back in 2018 of the dangers of the internet being dominated by white men: "So 20% of the world or less shapes our understanding of 80% of the world." The numbers may have changed since then, but the conclusion is the same: by producing the majority of the internet's content, this small portion of the population spreads its own vision of the world and its own stereotypes. While this was already problematic five years ago, it has become much more worrying now that AI is being trained on this data. AI tries to find logic and correlations in all of this data without any critical faculties. It reproduces and, more worryingly, sometimes amplifies these stereotypes. For example, in a (much more recent) article published by Bloomberg, a text-to-image AI tool called Stable Diffusion was shown to amplify stereotypes about race and gender, including the very stereotype my son had built by himself!
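
To make the mechanism concrete, here is a toy sketch in Python. Everything in it is made up: a four-row dataset where, just like in my son's tiny sample of the world, ethnicity happens to correlate with occupation. A naive classifier, having no critical faculties, simply fits that correlation and hands the stereotype back.

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up training "world": features are [is_latina, works_evenings].
# In this tiny, skewed sample, every Latina person happens to be a cleaner.
X_train = [
    [1, 1],
    [1, 0],
    [0, 1],
    [0, 0],
]
y_train = ["cleaner", "cleaner", "other", "other"]

# A naive learner with no critical faculties: it just fits the correlation.
model = DecisionTreeClassifier().fit(X_train, y_train)

# A new person about whom we know nothing relevant, only ethnicity:
print(model.predict([[1, 0]]))  # -> ['cleaner']: the stereotype, reproduced
```

Of course, real systems like Stable Diffusion are trained on billions of examples rather than four, but the failure mode is the same: a skewed sample becomes a confident rule.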

Defeating unconscious bias, in human brains and in AI, is challenging, and doing so will shape whether we manage to build a more inclusive and egalitarian society. For my children, I do my best to follow the good advice in "How to Raise Kids Who Aren't Assholes" by Melinda Wenner Moyer: understanding my own biases, explaining differences to my kids, letting them experience and enjoy diversity, and learning how to fight racism. Regarding AI, the ideal solution would be to use flawless input data and avoid the "garbage in, garbage out" phenomenon. Spending time and resources cleaning data upfront is a worthy investment to avoid future ethical, reputational, and, probably one day, legal issues. But even with the best intentions to clean input data, one must always question the algorithm itself, try to understand its thought processes, and test it over and over again, because you never know when a new unconscious bias could emerge and cause unforeseen consequences; a minimal sketch of one such test follows below. As for you, I invite you to test your own cognitive biases with Harvard's Project Implicit; you may be surprised by the results and want to work on them too!
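
As a flavour of what "testing it over and over" can look like in practice, here is a minimal, hypothetical sketch of one simple audit: comparing a model's prediction rates across demographic groups on a balanced test set. The helper function and the stand-in "model" are invented for illustration; real audits use dedicated fairness toolkits and far richer metrics.

```python
from collections import defaultdict

def positive_rate_by_group(predict, samples, groups, positive_label="cleaner"):
    """Fraction of samples per group that the model labels `positive_label`."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for x, g in zip(samples, groups):
        counts[g][0] += int(predict(x) == positive_label)
        counts[g][1] += 1
    return {g: pos / tot for g, (pos, tot) in counts.items()}

# Stand-in for a trained model: a deliberately biased rule mirroring the
# stereotype above (predicts "cleaner" whenever is_latina == 1).
biased_predict = lambda x: "cleaner" if x[0] == 1 else "other"

audit_X = [[1, 0], [1, 1], [0, 0], [0, 1]]        # small, balanced audit set
audit_groups = ["latina", "latina", "other", "other"]
print(positive_rate_by_group(biased_predict, audit_X, audit_groups))
# -> {'latina': 1.0, 'other': 0.0}: a gap this large is the red flag to chase
```

The point is not the code itself but the habit: keep probing the model with balanced inputs, and treat any large gap between groups as a bug to investigate, not a fact about the world.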
