
AI through an Intersectional lens

Some of us might have watched the recent Indian anthology ‘Ajeeb Daastaans’ on Netflix and wondered about the premise of its third film, ‘Geeli Pucchi’. What was it about? Was it about gender bias? Sexuality? Casteism? Or was it about patriarchy?

In 1982, Audre Lorde said:

“There is no such thing as a single-issue struggle because we do not live single-issue lives.”

Geeli Pucchi was about all of the above; it was intersectional. Simply put, intersectionality is how different aspects of our identity, like gender, race, caste, class, and ability, converge to uniquely shape our experiences of privilege and oppression. As depicted in the film, a Dalit, dark-skinned, queer woman’s experience at the workplace is different from that of an upper-caste, fair-skinned, queer, married woman. The former is not even considered a woman or feminine and is denied a desk job she is well qualified for, whereas the latter not only gets the coveted job but has the boss fussing and clucking around her. Yet both wage a battle against patriarchy in their own ways.

Konkona Sen Sharma and Aditi Rao Hydari in the short film Geeli Pucchi from Ajeeb Daastaans.

In 1989, Kimberlé Crenshaw introduced the theory of intersectionality in her paper ‘Demarginalizing the Intersection of Race and Sex’. The essay critiqued the law’s inability to comprehend that a black woman could be discriminated against on the basis of race, of gender, and often of a combination of the two. It highlighted that the discrimination black women experienced was different from that faced by white women.

Though initially acknowledged only within the formidable walls of academia, the past decade saw intersectionality make its way into the Oxford English Dictionary in 2015, the Women’s March in 2017, the Oscars in 2018, and across geographic boundaries. Given the intersectional bias that exists in our societies, it’s hardly surprising to find the same bias within technology. Take Artificial Intelligence (AI), for example. AI is the ability of a machine to exhibit or imitate intelligent human behavior. While the possibility of machines possessing human-like intelligence, or surpassing it, is still a long way off, we do have AI systems that can execute tasks much faster and more efficiently than humans, and systems that can remember past actions and learn from them to make the next decision better.


Image Credits: Bernhard Lang / Getty Images

Typically, an AI system is fed a large amount of data from various sources. The system then processes this data, applies intelligent algorithms, and learns from the existing patterns and features of the data. This learning enables it to come up with solutions.
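
For readers who like to see this in code, here is a minimal, purely illustrative sketch of that “feed data, learn patterns, predict” loop. It uses scikit-learn on a made-up toy dataset; the features, labels, and numbers are assumptions chosen for illustration, not a real system.

```python
# A minimal, illustrative sketch of the "fed data -> learn patterns -> predict" loop
# described above. The dataset is fabricated; nothing here refers to a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # 1,000 examples with 5 numeric features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the hidden pattern the system must learn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)               # "learns from existing patterns in the data"
print("accuracy on unseen data:", model.score(X_test, y_test))
```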

Why does intersectionality even matter in AI? Consider this: facial recognition technology, an AI system, is better at recognizing light-skinned males than dark-skinned females. A 2018 study by Joy Buolamwini and Timnit Gebru, titled ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, found that three prominent facial recognition systems misclassified up to 35% of dark-skinned females, 12% of dark-skinned males, 7% of light-skinned females, and just 0.8% of light-skinned males.

How does this happen? Algorithms train on and learn from data, and in most scenarios it is this underlying data that is the cause. Data can reflect existing societal biases and subjectivity. Amazon’s hiring algorithm, which was biased against women for technical jobs, was simply reproducing the existing social bias that men are more suited for these jobs. It is also possible that the data provided does not adequately represent the real world, or that there isn’t enough data for the system to predict accurately.

Another reason could be that the algorithm draws incorrect statistical inferences: if the data fed to an image classifier contains more images of women in kitchens, it might incorrectly infer that all individuals in kitchens are women. The individuals developing these systems may also incorporate their personal biases into them.
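
To make the “women in kitchens” inference concrete, here is a small sketch on a fabricated dataset: the single “in a kitchen” feature and the 90/10 split are assumptions chosen purely to show how a skewed correlation in the data becomes a learned rule.

```python
# Toy illustration of skewed data turning into a skewed rule. All numbers are made up.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
in_kitchen = rng.integers(0, 2, size=500)               # feature: pictured in a kitchen?
is_woman = np.where(in_kitchen == 1,
                    rng.random(500) < 0.9,              # kitchen images: 90% labeled woman
                    rng.random(500) < 0.5).astype(int)  # elsewhere: roughly balanced

clf = DecisionTreeClassifier(max_depth=1)
clf.fit(in_kitchen.reshape(-1, 1), is_woman)

# The classifier has effectively learned the rule "in a kitchen => woman".
print(clf.predict([[1], [0]]))
```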

There are efforts to develop fair or bias-free AI, but most of them treat bias as a single-axis issue. As the study on facial recognition technologies shows, algorithmic bias is not based on a single variable; it is intersectional and should be tackled as such. With AI poised to be used across sectors like healthcare, education, criminal justice, financial services, and pretty much everything else, intersectionality deserves to be included in AI dialogue, policy, regulation, and ethics.
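
As a final sketch, here is one way to see why single-axis checks are not enough. The error rates below are invented, loosely patterned on the kind of disparity the Gender Shades study reports rather than its actual data: audited by gender alone or by skin tone alone, the gaps look moderate, while the intersectional breakdown exposes the group that is hit hardest.

```python
# Illustrative only: single-axis error rates can hide the group hit hardest at the
# intersection. The counts and rates below are fabricated for this sketch.
import pandas as pd

rows = []
for gender, tone, n, error_rate in [
    ("male",   "light", 1000, 0.01),
    ("female", "light", 1000, 0.05),
    ("male",   "dark",  1000, 0.05),
    ("female", "dark",  1000, 0.30),
]:
    errors = int(n * error_rate)
    rows += [{"gender": gender, "skin_tone": tone, "error": 1}] * errors
    rows += [{"gender": gender, "skin_tone": tone, "error": 0}] * (n - errors)

df = pd.DataFrame(rows)

print(df.groupby("gender")["error"].mean())                  # one axis: gender
print(df.groupby("skin_tone")["error"].mean())               # one axis: skin tone
print(df.groupby(["gender", "skin_tone"])["error"].mean())   # the intersectional view
```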

(This blog is submitted by a guest author, Chithra Madhusudhanan. Chithra is a software engineer by profession and is currently on a break, raising a 1-year-old and rediscovering her passion for reading, writing, and eating cake. Her writings are an attempt to understand the world around her.)
