Social media accelerates trolling – just look at Raygun. How can we stop viral moments from spiralling?
- Written by David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith University
For Australian breaker Rachael “Raygun” Gunn, the 2024 Paris Olympics were marred by an outpouring of online trolling, as her performance was met with a savage backlash on social media.
Viral videos and memes mocking Gunn’s unconventional style quickly spread, along with a barrage of racist, sexist and body-shaming comments. Trolls accused her of “disgracing” the Olympics and called for her disqualification. Days after the event was over, anonymous online critics continued to attack Gunn’s appearance, talent and identity.
The Raygun incident perfectly demonstrates how a combination of high-profile events, social media platform design and human psychology can reinforce each other to create a storm.
The factors that lead to trolling are complex, but they also point us towards a solution.
Algorithmic acceleration
At the heart of the issue is the way social media platforms are designed. Algorithms that prioritise engagement and virality can rapidly amplify and spread content, both good and bad.
This design can turn an initial handful of trolls into a mob. Posts and memes that generate strong emotional reactions, whether positive or negative, are more likely to be shared and commented on, fuelling their momentum and reach. As the criticism of Raygun spread across platforms, more people piled on.
Such “algorithmic acceleration” creates echo chambers where extreme views become normalised. In an echo chamber, users are only shown information that reinforces their existing beliefs.
This can intensify negative sentiments and lead to a more hostile online environment. Dissenting voices tend to be silenced.
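To make this concrete, here is a minimal sketch of how engagement-based ranking can favour divisive content. The scoring weights and post data are hypothetical, not drawn from any real platform, but they illustrate the core point: a ranking function that counts all reactions equally cannot tell outrage apart from appreciation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> int:
    """Hypothetical ranking score: every reaction counts,
    whether the sentiment behind it is positive or hostile."""
    # Shares and comments are weighted more heavily than likes,
    # since they push the post out to new audiences.
    return post.likes + 3 * post.comments + 5 * post.shares

feed = [
    Post("Wholesome highlights reel", likes=120, shares=4, comments=10),
    Post("Mocking meme about an athlete", likes=40, shares=60, comments=90),
]

# Sorting by raw engagement puts the divisive post on top,
# even though much of its engagement is hostile pile-on.
ranked = sorted(feed, key=engagement_score, reverse=True)
print(ranked[0].text)  # prints "Mocking meme about an athlete"
```

The mocking meme scores 610 against the highlights reel's 170, so it is shown first and gathers still more reactions, which is the feedback loop described above.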
Anonymity, disinhibition and dark leisure
The relative anonymity of online spaces also plays a key role. People feel emboldened to say things they would never dare say in person. This “online disinhibition effect” lowers their restraints and empathy, and makes trolling more likely.
Surprisingly, the role of “fun” is also significant in trolling. Trolls are not solely motivated by malice. For some, trolling serves as a dark form of leisure.
They derive enjoyment from humiliating others, causing pain and eliciting reactions from their victims and onlookers. This often leads trolls to seek out topics and spaces in which to “play” and express their sadistic nature.
Meme culture and virality
The viral spread of content such as memes is another distinctive feature of online trolling. Trolls weaponise the visual language of memes, creating satirical or derogatory images that are easily shareable and remixable.
These can quickly take on a life of their own, becoming embedded in the cultural consciousness and making the original target of the abuse even more visible.
In Raygun’s case, the trolls seized on her mode of dress and mannerisms, turning them into endlessly reproduced caricatures. Such meme-driven harassment amplifies the personal nature of the attacks.
How can we combat trolling?
Tackling online trolling is a complex challenge. Platforms, policymakers and communities must approach this from various angles.
Technological solutions
Improved moderation tools. Social media platforms are investing in AI-driven tools to detect and remove harmful content. However, these tools can struggle with context, leading to false positives and false negatives.
Strengthened user reporting and blocking features. To empower people to manage their online interactions more effectively, we need better reporting mechanisms, and the ability to mute and block trolls.
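The context problem mentioned above can be illustrated with a toy keyword filter. This is not how any real platform's moderation system works, and the blocklist and comments are invented for illustration, but it shows why naive matching produces both false positives and false negatives.

```python
# Toy keyword-based moderation filter (illustrative only).
BLOCKLIST = {"disgrace", "pathetic"}

def is_flagged(comment: str) -> bool:
    """Flag a comment if it contains any blocklisted word."""
    words = {w.strip(".,!?\"'").lower() for w in comment.split()}
    return bool(words & BLOCKLIST)

# False positive: quoting abuse in order to condemn it still trips the filter.
print(is_flagged("Calling her a disgrace was completely unfair."))   # True

# False negative: sarcastic mockery with no blocklisted words sails through.
print(is_flagged("Wow, what a world-class performance. Truly inspiring."))  # False
```

Modern AI moderation is far more sophisticated than this, but the same underlying challenge remains: harm depends on context, intent and tone, not just on the words used.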
Behaviour change
It’s equally critical to take into account the psychology of human behaviour. If we want people to change their behaviour online, whom should we target and how?
Psychological support services. Providing mental health resources for individuals targeted by online abuse can help mitigate the personal impact of trolling.
Educational campaigns. Raising awareness of the impact of trolling and promoting digital literacy can help users and society in general navigate online spaces more safely. This includes understanding how algorithms work and recognising the signs of echo chambers.
Positively engaging bystanders. Research highlights the critical role of bystanders in limiting harmful behaviours, including trolling. When trolls can’t be directly stopped, a social media community can establish rules and expectations. Harmful behaviours can be called out by engaged bystanders, setting a positive tone that discourages bad behaviour.
Regulatory measures. Policymakers should consider implementing strict regulations to discourage individuals from engaging in online harassment and trolling.
All these changes can be difficult to implement. It’s hard to change human behaviour at scale, and providing widespread psychological support requires resources. But the measures above would likely deter casual trolls from engaging in this nasty behaviour.
Where to from here?
The solutions above can offer some relief, but they have limitations. Educational campaigns may not reach all users. The global and decentralised nature of the internet makes it challenging to enforce regulations consistently across jurisdictions. Bystanders may be reluctant to get involved.
To combat trolling, we need tech companies, governments and everyday internet users to work together to create safer online environments.
This includes developing more sophisticated AI moderation tools, fostering cross-platform cooperation to limit trolling, and promoting a culture of accountability and respect online.
Gunn has spoken up about the “devastating” effect of the online hate she experienced in the wake of the Olympics. It’s just one incident among many, but it underscores the urgent need for better methods to combat toxic online behaviour on a global scale.