
The Unseen Frontline: Content Moderators and the Role of Social Media During Conflicts

November 6, 2023

Amid the recent Israel-Hamas conflict, a parallel conflict has unfolded on the digital battleground of social media, placing immense stress on content moderators and users alike. The surge in conflict-related content across platforms has posed unprecedented challenges for content moderators, whose responsibility it is to sift through a deluge of graphic images, hate speech, and disinformation. This has inevitably raised questions about the role of market-leading social media platforms, the pressures these moderators face, and the implications for end users.

The Unseen Frontline: Content Moderators 

Content moderators are constantly exposed to distressing content, from violent imagery to divisive opinions. Their task is not merely technical; it’s profoundly psychological. This continuous barrage can lead to severe mental health challenges, including PTSD, anxiety, and depression. While their role is to protect end users, they are frequently met with the dilemma of upholding freedom of speech while preventing the spread of harmful content.

The challenge is amplified by a surge in disinformation – unlike misinformation, this involves the deliberate creation and sharing of false or manipulated information with the intention to deceive or mislead. For instance, The New York Times recently reported on a video that purported to show Israeli children held as Hamas hostages but was later debunked, the footage having previously circulated in other contexts related to Afghanistan, Syria, and Yemen.1 This instance is just a glimpse into a broader information warfare campaign in which graphic content is strategically used to incite fear, influence views, and engage in psychological manipulation.

Social Media’s Controversial Response 

Several social media platforms have faced criticism for their inconsistent and sometimes opaque content moderation policies. In the context of the Israel-Hamas conflict, these platforms have been accused of bias, whether by overzealously removing content or by allowing disinformation to spread.

Since the onset of the conflict, platforms have been flooded with violent videos and graphic images. Images of dead Israeli civilians near Gaza and distressing audio recordings of Israeli kidnapping victims have all made their way onto these platforms, racking up countless views and shares. Disturbingly, much of this content has reportedly been systematically seeded by Hamas with the intent to terrorize civilians, capitalizing on inadequate content moderation on certain platforms.

Despite claims by one major platform about its special operations centre staffed with experts monitoring content, there’s a growing call for transparency in content moderation practices. Platforms are criticized not only for their lack of accurate monitoring but also for algorithmically curtailing the reach of certain posts, known as “shadow banning.” According to Vox, shadow banning is an often ‘covert form of platform moderation that limits who sees a piece of content, rather than banning it altogether’.

Numerous users on Instagram, Facebook, and TikTok allege that these platforms restrict the visibility of their content. However, these tech giants attribute such occurrences to technical glitches, denying any form of bias.  

According to Time Magazine, it is difficult to ensure content is accurate for several reasons: it is hard to disprove a general negative; most platforms want to make sure important content is not censored; users are motivated to share content that shows their side is in the ‘right’; and people want access to the latest information. Together, these elements mean content moderation policies can fail at a time of major conflict.

Such issues not only undermine the public’s trust in these platforms but also place undue pressure on content moderators who must navigate these murky waters. 

Implications for Users 

For end users, the implications are two-fold. On one hand, they might be exposed to distressing and traumatic content that is not picked up during the moderation process. On the other hand, they may also be deprived of critical information if it is incorrectly flagged and removed. 

For the average internet user, knowing what information to trust online has never been more challenging or more critical. The challenge is amplified when unverified news spreads faster than it can be verified, ending up in mainstream news coverage and even in statements from leaders. This complexity was evident when, according to Vox, US President Joe Biden remarked on unverified claims about Hamas militants’ actions against children during the initial attack.

The Way Forward: Robust Moderation and Psychological Support 

The intensity of the recent conflict underscores the need for stronger content moderation teams that are well equipped to handle such crises. This means not only employing more moderators but also providing them with the tools and training needed to identify harmful content under platform policy and to distinguish genuine news from disinformation.

It is also worth remembering that many social media platforms have let go members of their Trust and Safety teams, including content moderators. In May 2023, CNBC reported that Meta, Amazon, and Twitter had all laid off members of their Trust and Safety teams. This means fewer people must do more work to try to ensure in-platform content is accurate.

But for many moderators, it can be personal too. Moderators must balance their personal ethics with platform policy enforcement, which requires self-awareness and psychological distancing to manage the ethical misalignments or dilemmas they may face.

Dr. Michelle Teo, Health and Wellbeing Director at Zevo Health, shares her insights from working directly with moderators, stating that “one of the most unique issues facing moderators is how they process the emotional impact of actioning a ticket when they feel the action doesn’t align with their own values – but it follows platform policy regulations. This can be as simple as not being able to revoke the account of a platform user who has been reported for scamming other users. The moderator may recognize the scam within the materials they are viewing but, without the evidence as per the policy, they have no option but to allow the user to remain on the platform. When a moderator is told that their role is to safeguard users and they feel they cannot do this, therein lies the dilemma. This can bring about feelings of guilt, shame, and anger – all emotions that can have a deep impact on someone’s sense of worth and their sense of meaning and purpose in their work. This impact becomes more profound when we’re talking about content like child and animal abuse, graphic violence, or revenge porn.”

In fact, the entire moderation ecosystem, including policy developers, AI systems, people managers, and workforce management teams, needs to be equipped to respond to these crises. For example, with disinformation campaigns on the rise, companies may question whether their current AI models can accurately determine if a video was taken years ago or in the moment. From an operational lens, workforce management teams need to forecast overtime or increased headcount on the moderation teams responsible for hate speech and graphic violence workflows as the influx of content rises exponentially.

Equally important is the psychological support that these moderators and their support teams require. Companies must recognize the mental toll that content moderation can take on everyone surrounding the content, not only the moderators themselves. Particularly during ongoing crises such as the Israel-Hamas conflict, it is imperative that long-term impacts are considered and addressed. This may necessitate support avenues for moderation teams lasting weeks, months, or even years.

In times of crisis, it is not enough to rely on day-to-day supports alone. Holistic approaches must consider all stakeholders in the ecosystem, the risks inherent to each role and how roles interact with one another, and timely, effective risk mitigation measures. Companies must invest in their moderation teams and their mental health support systems to ensure the psychological health and safety of their people, especially content moderators and the wider Trust and Safety teams.