Online abuse against women is rife, but some women suffer more – and we need to step up for them

Women online suffer a disproportionate amount of harm and abuse, but it isn’t all based on their gender. This “cyber violence” is also shaped by a range of other intersecting factors such as race, religion, class, caste and disability.

Our ongoing research involves collecting case studies from both India and Australia to understand how various marginalised identities can impact young women’s experiences of online violence, and how social media companies – including Facebook, Twitter and Instagram – aren’t doing enough to stop it.

India is a rich case study for this research: it’s a country where large numbers of women hold many different, intersecting identities – and where racial, religious and social tensions persist across society.

However, although Australia and India have significantly different cultures, women in both countries fall victim to online crimes, including cyber stalking and cyber harassment. And those with marginalised identities have to deal with more stigma and targeting.

What’s worse, platform content moderators are failing to recognise this cyber violence – often because they don’t understand the nuances and contexts in which these stigmas operate.

What is cyber violence?

Cyber violence can be understood as harm and abuse facilitated by digital and technological means.

In 2019, there was a 63.5% increase in the number of cyber violence cases being reported in India, compared to 2018. There has since been a further rise in cases against women from marginalised communities, including Muslim and Dalit women.

One prominent example is the “Bulli Bai” app, which turned up on GitHub in July last year. The app developers used the images of some 100 Muslim women without their permission, to put them up “for sale” in a fake auction. The purpose was to denigrate and humiliate Muslim women in particular.

This is mirrored in Australia. Young Indigenous women are susceptible to cyber violence that targets them not only by gender, but also by race.

A 2021 research report by eSafety found Aboriginal and Torres Strait Islander women felt victimised by racist and threatening comments made online, usually in public Facebook groups. They also reported feeling unsafe and having their mental health significantly impacted.

Another example comes from New South Wales Greens Senator Mehreen Faruqi, who has received extremely high levels of online abuse as Australia’s first female Muslim senator. Speaking on behalf of women from marginalised backgrounds, Faruqi said:

It is based on where I come from, what I look like, my religion.

Young women with marginalised identities

Research on cyber violence against women in India reveals how hatred towards certain religions, races and sexual orientations can make gender-based violence even more harmful.

When women express their opinions or post pictures online, they are targeted based on their marginalised identities. For instance, Kiruba Munusamy, an advocate practising in the Supreme Court of India, received racist and caste-based slurs for speaking out about sexual violence online.

And women with marginalised identities continue to be victimised online, despite attempts to control this.

Read more: A better way to regulate online hate speech: require social media companies to bear a duty of care to users

Take Australia’s “Safety by Design” framework, developed by the eSafety Commissioner. Despite gaining some traction in the past few years, it remains a “voluntary” code that encourages technology companies to prevent online harm through product design.

In India, hate speech against Muslims in particular has been on the rise. India has laws (albeit flawed) that can be used to deal with online abuse, but better implementation is needed.

With a Hindu majority and rising radicalisation, it can be difficult for victims to report incidents. They are concerned about their safety and about secondary victimisation, wherein they may face further abuse as a result of reporting a crime.

Read more: Why Modi's India has become a dangerous place for Muslims

It’s hard to know the exact extent of cyber violence perpetrated against women with marginalised identities. Yet it’s clear these identities are linked to the amount and type of abuse women face online.

One study by Amnesty International found Indian Muslim women politicians faced 94.1% more ethnic or religious slurs than women politicians of other religions, and women from marginalised castes received 59% more caste-based slurs than women from more general castes.

We’ve long understood the need for an intersectional approach to feminism. We now need the same approach to protecting women’s safety online.

Recognition in platform design

Five years ago, Amnesty International submitted a report to the United Nations highlighting the need for moderators to be trained in identifying gender-related and identity-related abuse on platforms.

Similarly, in 2019 Equality Labs in India published an advocacy report discussing how Facebook failed to protect people from marginalised Indian communities. This is despite Facebook listing caste, religion and gender as “protected” categories under its hate speech guidelines.

Yet in 2022, social media companies and moderators still need to do more to approach cyber violence through an intersectional lens. While platforms have country-specific moderation teams, moderators often lack cultural competency and literacy on matters of caste, religion, sexuality, disability and race. There could be various reasons for this, including a lack of diversity among staff and contractors.

In a 2020 report by Mint, one moderator working for Facebook India said she’s expected to maintain an accuracy rate of at least 85% to keep her job. In practice, this means she can’t spend more than 4.5 seconds on each piece of content she reviews. Such structural issues can also contribute to the problem.

The way forward

In March 2022, Australia’s eSafety Commissioner joined a global partnership to end cyber violence against women. But a great deal of work still needs to be done.

Content moderation can be complex, and requires collective expertise from communities and advocates. One way forward is to enforce transparency, accountability and adequate resource allocation within social media companies so solutions can be built.

In November last year, the Australian government released the draft of a bill aimed at holding social media companies accountable for content posted on their platforms, and protecting people from trolls.

It’s anticipated these regulations will ensure platforms are held responsible for harmful content that affects users.

Read more: Leigh Sales showed us the abuse women cop online. When are we going to stop tolerating misogyny?

Authors: Hannah Klose, Teaching Associate/PhD Candidate, Monash University

Read more https://theconversation.com/online-abuse-against-women-is-rife-but-some-women-suffer-more-and-we-need-to-step-up-for-them-183646
