
Content Moderation in Social Media – The Unseen Frontline During Conflicts

November 12, 2025 (updated December 25, 2025)

Content Moderators face severe mental health risks during conflicts as they process graphic violence, disinformation, and hate speech at extreme volumes. This article examines their psychological challenges and the support systems platforms must provide.

At a glance:

  • Recommendation algorithms amplify sensational content during crises, increasing moderator queues and exposing both moderators and users to higher volumes of distressing material.
  • Learn how platforms can implement crisis response protocols and psychological support systems for moderation teams.
  • The Zevo Accreditation Program (ZAP) offers CPD-approved certification for mental health professionals supporting Trust & Safety teams.
  • Trust and Safety team layoffs at major platforms have reduced moderator capacity just as conflict-related content volumes surge.
  • Content Moderators must balance personal ethics with platform policy enforcement, which can lead to feelings of guilt, shame, and anger.

  • Independent audits and transparency reporting are needed to hold platforms accountable for moderation practices during conflicts.

What is the Impact of Conflict on Content Moderation? 

Amid the recent Israel-Hamas conflict, a parallel conflict unfolded on the digital battleground of social media platforms, placing immense stress on social media Content Moderators and users alike. 

The surge in conflict-related content across social media platforms has posed sustained, heavy challenges for Content Moderators. It underscores the central importance of content moderation on social media and the responsibility moderators carry to sift through a deluge of graphic images, hate speech, and disinformation. 

This shift has raised questions about how market-leading social media companies respond, the pressure moderators face, and the implications for end users.

The Unseen Frontline – Content Moderators 

Content Moderators are constantly exposed to distressing content, from violent imagery to divisive opinions. Their task is not purely technical. It is deeply psychological.

Recent peer-reviewed research with commercial content moderators found that over one quarter reported moderate to severe psychological distress and around one quarter reported low wellbeing, underlining how sustained exposure to harmful content can affect mental health.

Mental Health Challenges for Content Moderators

This continuous barrage can lead to severe mental health challenges, including PTSD, anxiety, and depression. While their task is to protect end users, moderators are frequently met with the dilemma of upholding free speech while preventing the spread of harmful content. Effective crisis management for Content Moderators is essential during high-volume events to prevent psychological harm and burnout.

Disinformation and Its Impact on Social Media

The challenge is amplified by a surge in disinformation – unlike misinformation, this involves the deliberate creation and sharing of false or manipulated information with the intention to deceive or mislead.

Algorithmic Amplification and Recommendation Systems 

The problem of disinformation is compounded by algorithmic amplification, where recommendation algorithms prioritize sensational or emotionally charged content that generates engagement over accuracy. 

These systems can inadvertently amplify misleading narratives, increasing the volume of harmful content that moderators must review while simultaneously shaping what users see in their feeds.
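
To make this concrete, here is a minimal sketch of the dynamic described above. It uses Python with entirely hypothetical posts, scores, and weights (nothing here reflects any platform's actual ranking system): when predicted engagement dominates the scoring function, a sensational, low-credibility item can outrank verified reporting, and the same amplified reach swells the queue of items moderators must review.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # hypothetical model score for clicks/shares, 0..1
    credibility: float           # hypothetical source/verification score, 0..1

def rank_feed(posts, engagement_weight=0.9, credibility_weight=0.1):
    """Engagement-first ranking: low-credibility but sensational posts can win
    when the engagement term dominates the combined score."""
    score = lambda p: engagement_weight * p.predicted_engagement + credibility_weight * p.credibility
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("graphic-unverified", predicted_engagement=0.95, credibility=0.2),
    Post("verified-report", predicted_engagement=0.55, credibility=0.9),
]

feed = rank_feed(posts)
print([p.post_id for p in feed])  # the unverified post ranks first under engagement-first weights

# Amplified reach feeds the moderation queue: more exposure means more user reports.
review_queue = [p for p in feed if p.credibility < 0.5]
print(f"{len(review_queue)} item(s) routed to human review")
```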

Misinformation, Disinformation, and Malinformation Explained 

Recognizing the distinction between misinformation, disinformation, and malinformation (MDM) is essential for content moderation. Malinformation involves genuine information shared to cause harm, adding another layer of complexity to moderation decisions. 

The rise of generative AI further complicates this environment, as AI-generated content can be difficult to detect and may perpetuate AI bias embedded in training datasets.
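
As a rough illustration of the MDM distinction above, the sketch below encodes the three categories as a simple decision rule. The three boolean inputs are an oversimplification chosen only to mirror the definitions; real moderation decisions need far more context than this.

```python
from enum import Enum

class MDMCategory(Enum):
    MISINFORMATION = "false, shared without intent to deceive"
    DISINFORMATION = "false or manipulated, created and shared to deceive"
    MALINFORMATION = "genuine information shared to cause harm"
    NOT_MDM = "no misinformation/disinformation/malinformation signal"

def classify_mdm(is_false: bool, intent_to_deceive: bool, intent_to_harm: bool) -> MDMCategory:
    """Hypothetical decision rule mirroring the MDM definitions in the text."""
    if is_false and intent_to_deceive:
        return MDMCategory.DISINFORMATION
    if is_false:
        return MDMCategory.MISINFORMATION
    if intent_to_harm:
        return MDMCategory.MALINFORMATION  # genuine content weaponized to cause harm
    return MDMCategory.NOT_MDM

print(classify_mdm(is_false=True, intent_to_deceive=True, intent_to_harm=True).name)    # DISINFORMATION
print(classify_mdm(is_false=False, intent_to_deceive=False, intent_to_harm=True).name)  # MALINFORMATION
```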

How Social Media Content Moderators Debunk False Information

For instance, a 2023 New York Times report covered imagery presented as showing Israeli children held hostage by Hamas that was later debunked, the material having previously circulated in contexts related to Afghanistan, Syria, and Yemen.

This instance is just a glimpse into the extensive information warfare campaign where graphic content is strategically used to incite fear, influence views, and engage in psychological manipulation.

Strengthening Verification with Fact-Checkers 

Many platforms rely on fact-checking programs that partner with independent fact-checkers to verify content authenticity. These trusted partner programs often face capacity and regional expertise limits, particularly during rapidly changing crises. 

Strengthening local fact-checkers with resources and platform integration can improve the speed and accuracy of information verification.
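
The sketch below is one illustrative way such an integration could be wired up; the verdict store, claim fingerprints, and partner names are all hypothetical, not a real fact-checking API. It shows how a flagged claim can be matched against an existing verdict, and how missing regional coverage, the capacity limit noted above, becomes an explicit escalation path rather than a silent gap.

```python
from typing import Optional

# Hypothetical verdict store keyed by a normalized claim fingerprint.
# In practice this would be an integration with independent fact-checking partners.
FACT_CHECK_VERDICTS = {
    "claim:recycled-conflict-footage": {"verdict": "false", "checker": "regional-partner-a"},
}

def lookup_fact_check(claim_fingerprint: str) -> Optional[dict]:
    return FACT_CHECK_VERDICTS.get(claim_fingerprint)

def route_flagged_claim(claim_fingerprint: str, region: str, covered_regions: set) -> str:
    """Use an existing verdict if one exists; otherwise escalate, flagging
    claims from regions where no fact-checking partner has expertise."""
    verdict = lookup_fact_check(claim_fingerprint)
    if verdict:
        return f"label content using verdict '{verdict['verdict']}' from {verdict['checker']}"
    if region not in covered_regions:
        return "escalate: no fact-checking partner with regional expertise"
    return "queue for partner review"

print(route_flagged_claim("claim:recycled-conflict-footage", "region-x", {"region-x"}))
print(route_flagged_claim("claim:unseen", "region-y", {"region-x"}))
```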

Social Media’s Controversial Response 

Several social media companies have faced criticism for their inconsistent and sometimes opaque social media content moderation policies. In the context of the Israel-Hamas conflict, these platforms have been accused of bias, either through overzealous removal of content or by allowing the spread of disinformation.

The Risks of Over-Enforcement and Under-Enforcement 

The dual risks of over-enforcement and under-enforcement create significant challenges for platforms and moderators alike. Over-enforcement can result in the removal of legitimate news reporting or documentation of human rights violations, while under-enforcement allows harmful content to spread unchecked. 

This inconsistent enforcement often stems from contextual misclassification, where automated systems or human moderators lack sufficient context to make accurate decisions. Policy overreach in one direction or another can erode user trust and leave moderators struggling with ethical dilemmas.
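
One way to think about these dual risks is in terms of false positives and false negatives: over-enforcement shows up as legitimate posts wrongly removed, under-enforcement as harmful posts left up. The small worked example below uses made-up daily counts purely to illustrate how the two error rates can be quantified.

```python
def enforcement_error_rates(true_positives, false_positives, false_negatives):
    """Over-enforcement appears as false positives (legitimate posts removed);
    under-enforcement appears as false negatives (harmful posts left up)."""
    precision = true_positives / (true_positives + false_positives)  # share of removals that were correct
    recall = true_positives / (true_positives + false_negatives)     # share of harmful content actually caught
    return precision, recall

# Hypothetical daily counts for a conflict-related review queue.
precision, recall = enforcement_error_rates(true_positives=900, false_positives=100, false_negatives=300)
print(f"precision={precision:.2f} (1 - precision is the over-enforcement rate)")
print(f"recall={recall:.2f} (1 - recall is the under-enforcement rate)")
```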

Flood of Violent Content on Social Media

Since the onset of the conflict, platforms have been flooded with violent videos and graphic images. Images of dead Israeli civilians near Gaza and distressing audio of Israeli kidnapping victims all made their way onto these platforms, racking up countless views and shares. Disturbingly, much of this content has reportedly been systematically seeded by Hamas with the intent to terrorize civilians, capitalizing on the inadequate content moderation on certain platforms.

Infrastructure Challenges and Platform Migration 

The infrastructure challenge extends beyond individual platforms, as hostile actors often operate terrorist-operated websites and migrate to less-moderated apps when facing removal from mainstream platforms. 

This platform migration to unmoderated apps creates an ongoing cat-and-mouse game, requiring coordination between platforms, domain registrar accountability, and law enforcement to disrupt extremist operations more consistently.

Despite claims by one major platform about its special operations center staffed with experts monitoring content, there is an increasing call for transparency in content moderation practices.

Shadow Banning and Allegations of Bias

Platforms are criticized not only for inaccurate monitoring but also for algorithmically curtailing the reach of certain posts, a practice known as shadow banning.

User Allegations and Platform Responses

Numerous users on Instagram, Facebook, and TikTok allege that these platforms restrict the visibility of their content. These tech giants attribute such occurrences to technical glitches and deny any form of bias.

It is hard to disprove a general negative. Most platforms want to ensure important content is not censored, while users are motivated to share content that supports their side as they seek the latest information. Together, these pressures mean content moderation policies can fail during major conflicts.

Such issues not only reduce public trust in these platforms but also place undue pressure on Content Moderators, who must work within a highly uncertain environment.

Implications for Social Media Users – Exposure and Information Deprivation

For end users, the implications are two-fold. On one hand, they might be exposed to distressing and traumatic content that is not caught by online content moderation. On the other hand, they may be deprived of critical information if it is incorrectly flagged and removed.

  • For the average internet user, knowing what information to trust online has never been more challenging or more important.
  • The challenge intensifies when unverified news spreads faster than it can be verified, ending up in mainstream news coverage and even statements from leaders.
  • This complexity was evident when, according to Vox, US President Joe Biden remarked on unverified claims about Hamas militants’ actions against children during the initial attack.

The Way Forward – Robust Moderation and Psychological Support 

The intensity of the recent conflict highlights the need for stronger content moderation on social media and for teams that are well-equipped to handle such crises.

Platforms need both sufficient numbers of moderators and the tools and training required to apply platform policy, work alongside artificial intelligence, and distinguish genuine news from disinformation.

Implementing Hybrid Moderation Models 

Modern content moderation requires a hybrid approach that combines algorithmic moderation with human-in-the-loop moderation, ensuring that automated systems flag potentially harmful content while trained humans make final decisions on nuanced cases.
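
A minimal sketch of that human-in-the-loop pattern is shown below, assuming a hypothetical classifier harm score and illustrative thresholds (these are not any platform's actual values): automation acts only on clear-cut cases, and every ambiguous item is routed to a trained human reviewer for the final decision.

```python
def route_content(item_id: str, harm_score: float,
                  auto_remove_at: float = 0.98, auto_allow_below: float = 0.10) -> str:
    """Hybrid routing: the algorithm handles only clear-cut cases, while every
    nuanced or borderline item goes to a human moderator for the final decision."""
    if harm_score >= auto_remove_at:
        return f"{item_id}: auto-removed (score {harm_score:.2f})"
    if harm_score < auto_allow_below:
        return f"{item_id}: auto-allowed (score {harm_score:.2f})"
    return f"{item_id}: sent to human review queue (score {harm_score:.2f})"

for item_id, score in [("clip-001", 0.99), ("post-002", 0.05), ("image-003", 0.62)]:
    print(route_content(item_id, score))
```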

Establishing Crisis Response Protocols 

Platforms operating in conflict zones should be required to implement comprehensive crisis response protocols that include emergency escalation procedures and verified source elevation to prioritize credible information during crises. These crisis moderation frameworks should activate automatically when conflicts emerge, ensuring rapid response capabilities.
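
One way to picture such a protocol is as explicit configuration plus an automatic activation check. The sketch below is illustrative only; the field names, thresholds, and escalation paths are assumptions, not an industry standard, but they show how emergency escalation, verified source elevation, and moderator wellbeing limits could be declared up front and switched on when volumes surge.

```python
# Hypothetical crisis-mode configuration; field names and values are illustrative.
CRISIS_PROTOCOL = {
    "activation_trigger": "conflict-related report volume exceeds 3x the 30-day baseline",
    "emergency_escalation": {
        "graphic_violence": "senior reviewer within 15 minutes",
        "incitement_or_terror_content": "legal/escalations team immediately",
    },
    "verified_source_elevation": ["recognized news organizations", "vetted humanitarian bodies"],
    "moderator_wellbeing": {
        "max_consecutive_graphic_tickets": 10,
        "mandatory_break_minutes": 15,
    },
}

def crisis_mode_active(current_volume: int, baseline_volume: int, multiplier: float = 3.0) -> bool:
    """Activate the protocol automatically when conflict-related volume surges."""
    return current_volume >= multiplier * baseline_volume

print(crisis_mode_active(current_volume=45_000, baseline_volume=12_000))  # True: protocol activates
```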

Ensuring Accountability Through Independent Audits 

To strengthen accountability, platforms must commit to independent audits of their moderation practices, including regular transparency reporting that details enforcement actions and error rates. 

Human rights impact assessments should be mandatory for platforms operating in high-risk regions, and algorithmic transparency measures should allow researchers to examine how recommendation systems affect content distribution during crises.
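
As one rough illustration of what transparency reporting could surface, the sketch below aggregates a handful of made-up enforcement log entries and uses the appeal reversal rate as an observable proxy for the error rate an independent audit would examine. The log format and fields are assumptions for the example only.

```python
from collections import Counter

# Hypothetical enforcement log entries: (action, was_appealed, was_reversed_on_appeal)
enforcement_log = [
    ("remove", True, False), ("remove", True, True), ("remove", False, False),
    ("label", False, False), ("remove", True, True), ("geo_restrict", False, False),
]

def transparency_summary(log):
    """Aggregate enforcement actions and report the appeal reversal rate as one
    proxy for moderation error rates."""
    actions = Counter(action for action, _, _ in log)
    appealed = sum(1 for _, was_appealed, _ in log if was_appealed)
    reversed_count = sum(1 for _, _, was_reversed in log if was_reversed)
    reversal_rate = reversed_count / appealed if appealed else 0.0
    return {"actions": dict(actions), "appeals": appealed, "reversal_rate": round(reversal_rate, 2)}

print(transparency_summary(enforcement_log))
# {'actions': {'remove': 4, 'label': 1, 'geo_restrict': 1}, 'appeals': 3, 'reversal_rate': 0.67}
```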

Impact of Trust and Safety Team Layoffs

Even as they attempt to balance community standards with the principles of free speech, many social media platforms have let go of large numbers of their Trust and Safety team members, including Content Moderators. As a result, fewer people must do more work to keep in-platform content accurate.

Balancing Ethics and Platform Policies for Content Moderators

For many moderators, it can be personal, too. Moderators must balance their personal ethics with platform policy enforcement, which requires self-awareness and psychological distancing to manage ethical misalignments or dilemmas they may face.

Dr. Michelle Teo, Health and Wellbeing Director at Zevo Health, shares her insights from working directly with moderators, stating that “one of the most unique issues facing moderators is how they process the emotional impact of actioning a ticket when they feel the action doesn’t align with their own values – but it follows platform policy regulations.”

The Psychological Impact of Policy Enforcement

This can be as simple as not being able to revoke the account of a platform user who has been reported for scamming other users.

The moderator may recognize the scam within the materials they are viewing, but without the evidence as per the policy, they have no option but to allow the user to remain on the platform.

When a moderator is told their job is to protect users and they feel unable to do this, the dilemma becomes very real. This can bring about feelings of guilt, shame, and anger – all emotions that can have a deep impact on someone’s sense of worth and their sense of meaning and purpose in their work.

This impact becomes more profound when moderators review content such as child and animal abuse, graphic violence, or revenge porn.

Equipping the Moderation Ecosystem for Ethical Challenges

The entire moderation ecosystem, including policy developers, AI systems, people managers, and workforce management teams, needs to be equipped to respond to these crises.

For example, with increasing disinformation campaigns, companies may query whether their current AI models can accurately determine if a video was taken years ago versus in the moment.

From the operational lens, workforce management teams need to forecast overtime or increasing headcount on moderation teams responsible for monitoring hate speech and graphic violence workflows as the influx of content rises exponentially.
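
A back-of-the-envelope sketch of that forecasting step is shown below. All figures are hypothetical (baseline volume, per-moderator capacity, the surge multiplier, and the coverage buffer), but the shape of the calculation shows how a content surge translates into an overtime or headcount gap that workforce management teams must plan for.

```python
import math

def required_moderators(daily_items: int, items_per_moderator_per_day: int,
                        coverage_buffer: float = 1.15) -> int:
    """Rough headcount estimate with a buffer for breaks, wellbeing time, and absence."""
    return math.ceil(coverage_buffer * daily_items / items_per_moderator_per_day)

baseline_daily_items = 40_000
surge_daily_items = 3 * baseline_daily_items  # hypothetical 3x surge in hate speech / graphic violence workflows
per_moderator_capacity = 400                  # illustrative tickets per moderator per day

baseline_headcount = required_moderators(baseline_daily_items, per_moderator_capacity)
surge_headcount = required_moderators(surge_daily_items, per_moderator_capacity)
print(f"baseline: {baseline_headcount} moderators, surge: {surge_headcount} moderators, "
      f"gap to cover with overtime or hiring: {surge_headcount - baseline_headcount}")
```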

Why Psychological Support Matters for Content Moderators 

Equally important is the psychological support that these moderators and their support teams require. Companies must recognize the mental toll that content moderation can take on the people surrounding the Content Moderators.

Particularly during ongoing crises such as the Hamas-Israel conflict, it is imperative that long-term impacts are considered and addressed. This may necessitate weeks, months, or even years of sustained support for moderation teams.

Investing in Mental Health Support Systems

In times of crisis, it is insufficient to simply rely on day-to-day support. Holistic approaches must consider all stakeholders in the ecosystem, the potential risks inherent to each position, how different responsibilities interact, and timely and strong risk mitigation measures.

Companies must invest in improved mental health support systems that ensure the psychological safety of their people, especially Content Moderators and their Trust and Safety teams.

Build Resilient Teams With Zevo Health

Zevo Health supports organizations worldwide to strengthen team resilience, maintain excellence, and care for the people behind critical work. To discuss what could work for your organization, get in touch with us.

Zevo Accreditation Program

Learn More