
Moderating Harm, Maintaining Health – Protecting the Wellbeing of Content Moderators

January 8, 2024 (updated January 11, 2024)

The Unsung Heroes of the Digital Era: Content Moderators 

In the vast landscape of the internet, Content Moderators work tirelessly behind the scenes to protect all of us from egregious content. They are the unsung heroes, the gatekeepers who maintain the safety and integrity of online platforms, making them indispensable in today’s digital world.  

The role of Content Moderators has become increasingly critical in an era where data generation has reached unprecedented levels. According to Techjury, as of 2021, an astonishing 2.5 quintillion bytes of data were created every day. This sheer volume of content, ranging from social media posts and videos to emails and blog comments, requires constant vigilance. 

The work of Content Moderators is not just a job; it is a daily confrontation with the darker sides of humanity. They are tasked with making rapid decisions on a wide range of content, from text-based harms like hate speech and mis- and disinformation to graphic and disturbing imagery such as extremist violence and CSAM (Child Sexual Abuse Material). On top of this, they are paid modest wages and face a relentless stream of such material, leading to considerable mental health challenges. 

How Does Content Moderation Impact the Mental Health of Moderators? 

The mental health implications for Content Moderators are profound. Continuous exposure to harmful content can lead to the development of PTSD (Post-Traumatic Stress Disorder), characterized by symptoms like flashbacks, severe anxiety, and uncontrollable thoughts about the events they’ve witnessed indirectly. Moreover, they may experience high levels of depression and anxiety, manifesting in sleep disturbances, emotional numbness, or persistent feelings of sadness. 

All this exposure and the repetitive nature of viewing such content can also lead to a phenomenon known as Vicarious Traumatization (VT), where individuals develop trauma-related symptoms from indirect exposure to traumatic material. Importantly, moderators who continue to work in these roles while experiencing VT face a heightened risk of long-term damage to their mental health. Prolonged exposure without adequate mental health support can lead to enduring psychological issues, underscoring the necessity of timely intervention and comprehensive mental health care strategies in these workplaces. 

The Importance of Preventative Wellbeing Interventions for Content Moderators 

The publication of the ISO 45003 guideline in 2021 has highlighted the significance of psychosocial hazards in the workplace, particularly in fields like content moderation, which inherently pose risks to psychological health. This guideline urges organizations to identify and mitigate such hazards, emphasizing the need for proactive and preventative measures. 

Psychosocial Hazards in Content Moderation 

Content moderation work involves unique psychosocial hazards, including high job demands, exposure to distressing content, and challenges in workplace relationships and organizational change. Recognizing these hazards is the first step towards mitigating their impact on moderators’ mental health. 

Risk Reduction Strategies 

While it’s challenging to eliminate these hazards completely, risk reduction is essential. Employers should evaluate workloads and provide adequate support to moderators to manage the volume of content and associated stress. Regular risk assessments should include identifying hazards, evaluating risks, and implementing preventative actions. 

Implementing Preventative Wellbeing Interventions 

Preventative wellbeing interventions are crucial in addressing these risks before they cause harm. This includes mental health literacy training during onboarding, increasing awareness of potential mental health risks, and equipping moderators with healthy coping strategies. Proactive interventions like these are key in preparing moderators for their roles and safeguarding their wellbeing. 

What Measures Can Be Taken to Safeguard Moderators’ Mental Health During Their Career Journey? 

In recognizing the critical role of Content Moderators in shaping the digital landscape, organizations must adopt a holistic and proactive approach to support their mental health at every stage of their career journey. This commitment not only reflects a moral responsibility but also ensures the sustainability and effectiveness of content moderation as a crucial function in the digital era. 

Recruitment: Setting the Stage for Transparency and Expectations 

Before onboarding, the recruitment process plays a vital role in setting up prospective moderators for success. Transparency in job specifications is essential. Organizations should provide clear and accurate descriptions of the role, including potential challenges and the nature of the content to be moderated. Adequate screening processes should be in place to ensure that candidates are well-suited for the demands of the job. During the screening and interview stages, it’s crucial to discuss the wellbeing resources available and make it clear that engaging with these services is an integral part of maintaining their mental health. This upfront communication sets realistic expectations and prepares new hires for the nature of their work. 

Onboarding: Laying a Strong Foundation  

The journey of a Content Moderator begins with comprehensive preparation. The onboarding process is more than just a functional orientation; it’s a crucial phase where moderators should be introduced to the mental health resources and support systems in place. This early focus on mental wellbeing sets the tone for their entire tenure, underscoring the organization’s commitment to their health right from the start. 

Supporting New Hires: Cultivating Resilience and Community  

Recognizing the challenges faced by new hires, organizations should emphasize the importance of regular check-ins and group interventions during the initial phase of employment. These measures are not just about monitoring performance but are critical in building resilience and a sense of community among moderators. This approach helps new moderators adapt to the demands of their role while feeling supported and valued. 

In-Production Support: Nurturing Through Continuous Care  

As Content Moderators delve deeper into their roles, organizations need to provide continued, robust support. This includes ongoing therapy, group interventions, and comprehensive training in healthy coping skills for distress tolerance, workload and stress management, and recognizing the signs and symptoms of the mental health challenges common in moderation work. By doing so, the organization ensures that moderators are not only well-equipped to handle their tasks but also have continuous access to mental health resources. 

Critical Incident Management: Staying Prepared and Responsive  

In the event of critical incidents, organizations need to be prepared with specific training, debriefing sessions, and targeted interventions. These measures are not just reactive; they are part of a broader strategy to maintain a resilient and psychologically safe work environment. 

Offboarding: Ensuring a Smooth Transition  

Organizations must also acknowledge the importance of supporting moderators as they transition out of their roles. This phase is crucial in helping them decompress and prepare for life after content moderation, ensuring that they leave with a positive perception of their experience and the industry’s commitment to their wellbeing. 

The Crucial Role of Therapy in Supporting Content Moderators 

In the challenging field of content moderation, providing moderators with access to therapy is not just a benefit; it is a necessity. Often constrained from speaking publicly about their work, Content Moderators have voiced concerns, through open letters and similar channels, about insufficient access to therapy. Therapy offers a vital support system to address the challenges outlined above and maintain their psychological wellbeing. 

Some of the benefits of this support include: 

A Safe Space for Processing and Healing 

Therapy provides a confidential and safe space for moderators to process the difficult content they encounter daily. It allows them to express and work through their feelings and experiences without fear of judgment or repercussions. This space is crucial for their mental and emotional healing. 

Building Resilience and Coping Strategies 

Through therapy, moderators can develop resilience and effective coping strategies. Therapists can help them understand and manage their reactions to traumatic content, equipping them with tools to handle stress, anxiety, and any other mental health challenges that arise from their work. 

Preventing Long-term Psychological Impact 

Regular access to therapy can prevent the long-term psychological impact of content moderation. By addressing issues as they arise, therapy can mitigate the risk of developing more severe mental health conditions, helping moderators maintain a healthy work-life balance and overall wellbeing. 

Enhancing Job Performance and Satisfaction 

Therapy can also contribute to better job performance and satisfaction. When moderators feel mentally supported and have tools to manage their stress, they are more likely to perform effectively and find satisfaction in their work. This not only benefits the individual but also enhances the overall quality of content moderation. 

Balancing Technology and Human Touch: AI’s Impact on Content Moderation Wellbeing 

Gartner predicts that by 2030, 90-99% of the data on the internet will be created with the help of Generative AI technologies. This rapid increase in online content, much of it user-generated content (UGC), has made it impossible to rely solely on traditional human-led moderation methods, given the volume and speed required. 

The advent of artificial intelligence (AI) in content moderation has been a game-changer, significantly reducing the volume of harmful content that human moderators need to review. AI technologies, such as AWS’s Rekognition, have revolutionized content moderation by automating the detection and filtering of explicit content. These tools can blur sensitive images, redact offensive language, and mask inappropriate audio recordings in real-time. By filtering out the most egregious content, AI reduces the psychological burden on human moderators, shielding them from direct exposure to potentially traumatic material.  
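
As an illustration of this kind of automated pre-filtering, the short Python sketch below shows how Amazon Rekognition's image moderation API could flag explicit imagery before a person ever sees it. The bucket and object names, the confidence threshold, and the surrounding logic are illustrative assumptions, not details from this article.

```python
# Minimal sketch (Python + boto3): use Rekognition's image moderation API to
# detect explicit or disturbing imagery before it reaches a human moderator.
# Bucket/key names and the confidence threshold are illustrative assumptions.
import boto3

rekognition = boto3.client("rekognition")

def flag_image(bucket: str, key: str, min_confidence: float = 80.0) -> list[str]:
    """Return the moderation labels Rekognition detects in an S3-hosted image."""
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    # Each label names a category (e.g. "Explicit Nudity", "Violence") with a
    # confidence score; flagged items can be blurred or removed automatically.
    return [
        f"{label['Name']} ({label['Confidence']:.1f}%)"
        for label in response.get("ModerationLabels", [])
    ]

if __name__ == "__main__":
    for hit in flag_image("example-ugc-bucket", "uploads/image-123.jpg"):
        print(hit)
```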

However, it’s important to note that moderators often have the autonomy to choose whether or not to use these filters. Some may opt not to use them, as it can slow down their work if they need to disable the filter to review the content thoroughly. While these AI tools reduce the initial shock factor, they may not significantly reduce overall exposure to graphic materials. 

Furthermore, a report on AI in online content moderation by Cambridge Consultants emphasizes that AI cannot entirely replace the nuanced judgment and contextual understanding that human moderators bring. AI assists in the initial filtering process but often struggles with context, subtleties, and cultural nuances. Therefore, human intervention remains essential for making informed decisions on content that AI flags as borderline or ambiguous. 
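
To make this division of labour concrete, here is a hypothetical human-in-the-loop routing sketch: high-confidence AI decisions are applied automatically, while borderline or ambiguous items are escalated to a human moderator. The thresholds and the ModerationSignal structure are assumptions for illustration, not part of the Cambridge Consultants report.

```python
# Hypothetical human-in-the-loop routing: confident AI decisions are applied
# automatically; ambiguous items are escalated to a human moderator.
# Thresholds and the ModerationSignal structure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModerationSignal:
    content_id: str
    label: str          # e.g. "hate_speech", "graphic_violence"
    confidence: float   # model confidence between 0.0 and 1.0

AUTO_REMOVE_AT = 0.97    # assumed threshold: act without human review
AUTO_ALLOW_BELOW = 0.10  # assumed threshold: treat as a false alarm

def route(signal: ModerationSignal) -> str:
    """Decide whether the AI acts alone or a human moderator makes the call."""
    if signal.confidence >= AUTO_REMOVE_AT:
        return "auto_remove"   # moderator is never exposed to the content
    if signal.confidence < AUTO_ALLOW_BELOW:
        return "auto_allow"
    # Context, subtlety, and cultural nuance: a human makes the final decision.
    return "human_review"

print(route(ModerationSignal("post-42", "hate_speech", 0.63)))  # human_review
```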

Shaping a Healthier Future for Content Moderators: Advocacy for Better Policies and Enhanced Standards 

In the complex world of content moderation, advocacy plays a pivotal role in driving the implementation of policies and standards that prioritize the mental wellbeing of moderators. These policies are essential in creating a safer and more supportive work environment for those at the frontline of digital content control. 

The Emergence of the Online Safety Act and the Digital Services Act 

Recent legislative initiatives like the Online Safety Act (OSA) and the Digital Services Act (DSA) represent significant strides in regulating online content and user safety, yet they still fall short of comprehensively addressing the specific mental health needs and wellbeing of Content Moderators. That gap is a critical area for future policy-making. Even so, the impact of these acts can be significant: 

  • Increased Responsibility for Platforms: Both acts require platforms to be more proactive in identifying and removing harmful content. This responsibility has led to more rigorous content moderation processes, necessitating robust support systems for moderators. 
  • Transparency Reports: Under the DSA, organizations designated as ‘Very Large Online Platforms’ (those with 45 million or more users in the EU) must publish transparency reports detailing their content moderation actions every six months. These reports have already surfaced notable insights, such as Pinterest disclosing that its moderators removed 6.8 million adult-related posts in a single month. 
  • Emphasis on User and Moderator Safety: With a user-centric approach, these acts focus on the safety of both users and moderators, driving the development of tools and practices to minimize moderators’ exposure to harmful content. 
  • Setting Standards for Content Moderation: The acts set clear guidelines and standards for content moderation, including the types of content to be moderated and the timeframes for removal. This clarity can help structure moderation workflows more effectively. 

Advancing Mental Health Support Policies  

Beyond these legislative frameworks, there is a growing need for policies specifically tailored to the mental health of Content Moderators. These policies should address not only the availability of support but also the barriers to accessing and utilizing these resources effectively: 

  • Regular Mental Health Assessments: Policies could mandate regular mental health screenings for moderators to identify signs of stress, anxiety, or PTSD early on. 
  • Professional Psychological Support: Providing access to professional counseling and therapy as part of employment benefits can help moderators cope with the emotional toll of their work. 
  • Training and Education: Policies might require access to psychoeducation or comprehensive training in mental health awareness, stress management, and resilience-building for moderators. 
  • Creating a Supportive Work Culture: Encouraging a culture that values mental health, including peer support groups, open dialogue about mental health challenges, and a stigma-free work environment. 
  • Transparency and Accountability in Utilization of Services: Policies should also require organizations to report transparently on the utilization of mental health services while protecting the privacy of the individuals who use them. This transparency is key to holding organizations accountable for the support they provide. 
  • Addressing Barriers to Engagement: It’s important to recognize and mitigate barriers to accessing mental health services. This includes addressing issues like poor tooling, frequent policy changes, and challenging productivity metrics that can impact moderators’ mental health. 
  • Protecting Moderators from Penalization: Policies should mandate that organizations do not penalize moderators, such as impacting their bonuses or job status, if their productivity drops due to dealing with vicarious traumatization or other mental health issues. Such protections are crucial in creating a truly supportive work environment. 

The Role of Industry and Advocacy Groups 

The success of these policies depends on the collaboration between industry leaders, advocacy groups, and policymakers: 

  • Industry Leadership: Tech companies and platforms must take a leading role in advocating for and implementing these policies. Their commitment to moderator wellbeing can set industry standards. 
  • Advocacy Groups: These groups can raise awareness about the challenges faced by Content Moderators and lobby for stronger protections and support systems. 
  • Collaboration with Mental Health Professionals: Working with mental health experts can help in developing effective support programs and interventions for moderators. 

Conclusion: Prioritizing the Wellbeing of Content Moderators 

Content Moderators stand as crucial yet often overlooked guardians of online safety and integrity. While legislative measures like the Online Safety Act and Digital Services Act have begun to address the challenges in content moderation, there remains a significant gap in adequately supporting the mental health of these key players. 

The integration of AI in content moderation is a step forward, reducing the burden on humans. However, it’s vital to maintain a balance, recognizing that AI supplements but does not replace the nuanced judgment of human moderators. The path forward demands collaborative efforts from industry leaders, advocacy groups, and mental health experts to develop comprehensive policies and practices that prioritize moderator wellbeing. 

As we continue to navigate and shape the digital landscape, ensuring the mental health and overall wellbeing of Content Moderators must be a top priority. By fostering a supportive work environment and advocating for enhanced standards, we can build a more sustainable and humane digital world, where the welfare of those who maintain our online spaces is paramount. 

To find out more about how Zevo can support your content moderation teams, please get in touch today.