Monday, 8 December 2025

The Invisible Violence of Algorithms

 By Ramatu Ada Ochekliye

I recently sat down with Imoh from West Africa Democracy Radio (WADR) in Dakar to discuss an issue that is shaping the daily realities of women online: the invisible violence of algorithms.

Itohoimo Edet, known as Imoh, is a journalist, producer, and supervisor of the English desk at WADR, where he also serves as assistant partnerships officer. The conversation, which was broadcast across Bénin, Burkina Faso, Côte d'Ivoire, Ghana, Guinée, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, Tchad, and Togo, was part of our activities to commemorate the 16 Days of Activism Against Gender-Based Violence. It took a sobering look at how technology, often viewed as neutral, can reinforce and deepen the harm women face every day.

Imoh: What do we mean by “invisible violence” in algorithms, and how does it show up for women?

Ramatu: When I talk about invisible violence in algorithms, I am referring to the harm hidden within the design choices, data sets, and automated decisions that govern digital spaces. This violence is subtle but pervasive. It shapes what women see, how they are seen, and how they are treated online, usually without their knowledge or consent.

Algorithms determine which posts are promoted, which are suppressed, and which behaviours are rewarded. When these systems amplify misogynistic content—something they often do because of high engagement—they normalise hostility and hatred towards women. At the same time, the very tools meant to protect women, such as content reporting or automated filtering, frequently fail to recognise gendered abuse. Women find themselves jumping through hoops just to have harmful content noticed, let alone removed. Those from already underserved communities (women of colour, gender non-conforming women, and women in underrepresented regions) are hit hardest. Their content is misinterpreted more often, while harmful content directed at them is ignored for longer.
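To make the amplification point concrete, here is a minimal, purely illustrative sketch of an engagement-weighted ranker. The post texts, field names, and weights are my own assumptions, not any platform's actual system; the point is only that a ranker optimising for raw engagement rewards hostility that drives reactions exactly like genuine interest.

```python
# Purely illustrative sketch of engagement-weighted ranking; the weights,
# field names, and example posts are assumptions, not any platform's code.

posts = [
    {"id": 1, "text": "Community health fundraiser update",
     "likes": 40, "shares": 5, "comments": 8},
    {"id": 2, "text": "Misogynistic rant about women leaders",
     "likes": 90, "shares": 60, "comments": 300},
]

def engagement_score(post):
    # The score cannot tell whether comments are supportive or outraged;
    # content that provokes reactions is promoted either way.
    return post["likes"] + 2 * post["shares"] + 3 * post["comments"]

# The hostile post scores far higher and rises to the top of the feed.
feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(engagement_score(post), post["text"])
```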

Let me share an example. A while ago, I was using one of the free image platforms for my work as a blogger. One day, I searched for pictures of African women, and the results shocked me: monkeys, orangutans, and other primates were the predominant images. I was scandalized and rightfully angry. Someone at the company was cataloging African women alongside primates, a choice that clearly stemmed from racism.

I would usually have cussed them out on social media, but I paused. Instead, I wrote them an email explaining my experience on their site and calling out the racism in their algorithm for what it was. The company reached back out quickly, apologized, took accountability, and promptly addressed the problem. I checked their site again and found the bias roundly addressed. It has been more than ten years since that incident, and I still use the site today. When I search for African women now, I get all shades of us in all spheres of our lives.

Similarly, Google was called out because searches for women kept returning images of women in states of undress, whether willingly through pornography or forcefully through rape, abuse, and degradation. I joined the petition against the tech giant, and they responded by correcting the error in their algorithm.

Imoh: Why isn’t online harassment detected or removed quickly enough, and what role does biased data play?

Ramatu: A major part of the problem lies in the datasets used to train AI moderation tools. These systems are only as good as the examples they learn from, and unfortunately, the data often lacks the nuance of real-world gendered harassment. Many AI models can easily detect generic insults but fail to recognise coded misogyny, cultural slurs, or the specific forms of harassment that women face in different parts of the world.

In Senegal, for example, abuse expressed in local French parlance can be missed because it does not fit Parisian French. In Nigeria and Ghana, our slang is often deeply nuanced to us. Women are often slut-shamed online with words like ashawo, olosho, karuwa, and akunna. These words are known within popular culture or within the specific communities that use them. Yet even though they are synonymous with whore, prostitute, and slut, they often go untracked and unreviewed because most moderation algorithms are trained on Western or Eastern languages.
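As a rough illustration of why this matters (the word lists and sample comments below are mine, not drawn from any real moderation system), a keyword filter built only from English terms simply never matches the local-language equivalents:

```python
# Minimal sketch, assuming a keyword-list abuse filter trained only on
# English terms; the word lists and sample comments are illustrative.

ENGLISH_ONLY_SLURS = {"whore", "slut", "prostitute"}

def flags_abuse(text, slur_list):
    words = {word.strip(".,!?") for word in text.lower().split()}
    return bool(words & slur_list)

comments = [
    "You are a whore.",          # caught: the slur is in the English list
    "Ashawo! Olosho like you.",  # missed: the same insult in Nigerian slang
    "Karuwa, stay offline.",     # missed: Hausa term with the same meaning
]

for comment in comments:
    print(flags_abuse(comment, ENGLISH_ONLY_SLURS), comment)

# Locally built datasets make the fix possible: extend the vocabulary the
# system actually knows.
LOCALISED_SLURS = ENGLISH_ONLY_SLURS | {"ashawo", "olosho", "karuwa", "akunna"}
print([flags_abuse(c, LOCALISED_SLURS) for c in comments])  # all True
```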

When training datasets do not include examples of gender-based abuse in African languages or dialects, the system becomes blind to it. In other cases, annotators—the people who label data—do not always recognise subtle misogynistic behaviours themselves, and their oversight becomes baked into the system. The result is that abusive content remains online longer than it should, while women are forced to repeatedly report issues or simply endure the harassment. The burden of safety, once again, falls on the victims rather than the platforms.

Imoh: Why are posts on women’s rights or gender-based violence misclassified as harmful?

Ramatu: A particularly troubling pattern is the misclassification of posts that advocate for women’s rights or raise awareness about gender-based violence. Algorithms often rely on keyword detection, flagging words like “rape,” “violence,” or “abuse” without understanding context. When survivors share their stories or activists educate others, these posts can be mistaken for harmful content and taken down.

This is partly because many models are trained in contexts far removed from gender justice work. They lack the cultural and social sensitivity needed to differentiate between harmful content and content intended to inform, protect, or empower. Without robust context detection, the system reacts to isolated words rather than their purpose or tone. The result is shadow bans, down-ranking, or outright removal of posts—effectively muting the very voices that digital spaces should uplift.
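Here is a hedged sketch of what that failure looks like in practice. This is not any platform's real pipeline, and the keyword list and example posts are assumptions, but it shows how context-blind keyword matching treats a survivor's testimony exactly like a threat.

```python
# Illustrative only: a context-blind keyword moderator. Real systems are more
# complex, but the failure mode described above is the same.

FLAGGED_KEYWORDS = {"rape", "violence", "abuse"}

def naive_moderator(post):
    # Flags a post if any keyword appears, with no sense of who is speaking
    # or why the word is being used.
    words = {word.strip(".,!?'") for word in post.lower().split()}
    return "remove" if words & FLAGGED_KEYWORDS else "keep"

posts = [
    "I survived rape, and I am telling my story so other women know they are not alone.",
    "Report abuse to this helpline if you are in danger.",
]

for post in posts:
    print(naive_moderator(post), "->", post)
# Both advocacy posts come back 'remove'; in practice this surfaces as
# takedowns, down-ranking, or shadow bans of the very voices raising awareness.
```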

I have been a victim of this multiple times because of my work and my insistence on calling words what they are. I am currently shadow-banned on TikTok because of it. But words matter. Calling rape 'grape' diminishes the vile abuse a victim or survivor has endured at the hands of her rapist. When we are barred from using the word femicide, euphemisms like 'unalive' become the norm, stripping away the rights and dignity a person loses when she is murdered because of her gender. Again, the algorithm does not understand the nuance of the work we do.

Imoh: What responsibilities do tech companies and regulators have?

Ramatu: Tech companies hold enormous power, and with that power comes responsibility. They need to build datasets that reflect the diversity of women’s experiences and ensure that the people developing moderation systems come from diverse backgrounds themselves. This includes African experts, women, and individuals who understand the linguistic and cultural nuances of online harassment in different regions.

Recently, Facebook (Meta) announced that it was ending its third-party fact-checking program. This came in alignment with broader (and especially United States) policies eroding diversity, equity, and inclusion in the workplace. A similar rollback has happened at Twitter (X). What has been the direct consequence? Harassment of women and girls is at an all-time high. People are now using AI to 'undress' women, entrenching new ways to abuse women's and girls' rights.

Companies must commit to conducting regular bias audits, publishing transparency reports, and prioritizing safety during the design phase rather than as an afterthought. Regulators also have a critical role. They must enforce standards for algorithmic transparency and fairness, require platforms to assess gendered impacts of their AI systems, and protect digital rights through stronger appeal processes and privacy protections. Only when tech companies and regulators work together can we prevent AI from reinforcing existing inequalities or silencing marginalized voices.

Imoh: What steps can African governments, civil society, and platforms take next?

Ramatu: African governments must begin developing digital safety policies that acknowledge the unique ways online violence affects women. Investing in research, strengthening cyber-harassment laws, and enforcing them effectively are necessary steps. Civil society organisations should continue championing digital literacy programs for women and girls while holding tech companies accountable through advocacy and research. One powerful contribution civil society can make is supporting the creation of local datasets that help AI systems better understand African languages and cultural contexts.

Digital platforms must improve moderation in African languages and co-design safety features in partnership with women’s rights groups. They need to expand and refine protection tools—introducing context-aware filters, rapid response channels, and user-controlled safety settings to give women more agency and support.

Across government, civil society, and technology sectors, one thing is clear: inclusive innovation is essential. Women must be involved in designing AI systems, shaping digital policy, and leading technology teams. When we centre women’s voices, especially those from underserved communities, we build digital spaces that are safer, more equitable, and truly empowering.

The invisible violence embedded within algorithms may be subtle, but its impact is profound. By naming it, challenging it, and redesigning the systems that uphold it, we move one step closer to a digital world where women are not silenced but heard, respected, and protected.

At Shades of Us, we will continue to advocate for the rights of women and girls, and hold governments and tech companies accountable for safe online spaces. 

This work is not abstract. It is not theoretical. It is deeply urgent for Africans, because Africa sits at an inflection point: the continent is young, connected, and swiftly digitizing. Every tool we use—our search engines, our social platforms, our AI assistants—shapes how we communicate, organize, mobilize, educate, and demand justice. If these tools are already tilted against women and girls, then the future being built is one where inequality becomes automated.

Bias becomes scalable. 

Violence becomes efficient. 

And silence becomes the default.

For African women and girls, this is a matter of safety, dignity, and participation in public life. When algorithms amplify misogyny, women retreat from online spaces that should have belonged to them. When AI mislabels our advocacy as harmful, movements lose visibility. When our languages and lived experiences are excluded from training data, we are erased from the future being built.

This is why our work matters. This is why it must continue.

We cannot allow systems created elsewhere, trained on people who do not look like us or live like us, to determine our realities. We cannot accept a digital world where African women are perpetually misunderstood, unprotected, or muted. We cannot afford to let technology, powerful as it is, become a new frontier for the same old injustices.

At Shades of Us, we understand this. We know that advocacy cannot stop at demanding justice in physical spaces; it must extend to the virtual ones that increasingly shape our lives. We will continue to raise our voices for women and girls. We will continue to push for accountable, transparent, and gender-responsive tech ecosystems. We will continue to challenge governments that ignore digital harms and corporations that profit from unsafe platforms. And we will keep insisting that African languages, cultures, and contexts are not afterthoughts but essential parts of global digital design.

Because when we fight to reform algorithms, we are not just fixing code. We are expanding freedom. We are protecting futures. We are making sure that our stories, our bodies, our voices, and our rights are never again reduced to data points that can be ignored.

The digital world must be a place where African women thrive, not one where they disappear. And we will not relent until that becomes our reality.
