Online Harassment: Mental Health Challenges for Content Moderators

April 2, 2024

Defining Online Harassment and Prevalence 

The Digital Trust and Safety Partnership (2023) defines online harassment as “unsolicited repeat behavior against another person, with the intent to intimidate or cause emotional distress”. They highlight that it may occur over any medium, including social media, email, and other online services, and that it can lead to real-world abuse or vice versa. Online harassment may involve a single perpetrator targeting one individual or a group, or a group of perpetrators targeting one person or many. 

The Pew Research Center measured six distinct behaviours constituting online harassment: physical threats, stalking, sustained harassment, and sexual harassment (categorized as more severe behaviours), as well as offensive name-calling and purposeful embarrassment (categorized as less severe behaviours). In a 2020 survey of more than 10,000 Americans, they found that 41% of Americans have experienced some form of online harassment, with 25% experiencing the more severe behaviours; by comparison, only 14% of people reported experiencing the more severe behaviours in 2014. In their report, the Pew Research Center highlights that, beyond politics, harassment often targets individuals based on their gender or their racial or ethnic background. 

In their paper SoK: Hate, Harassment, and the Changing Landscape of Online Abuse, Thomas et al. (2021) identify seven categories of abuse based on three criteria: the audience (A1-2), the medium (M1), and the capabilities required for the attack to succeed (C1-4). The seven categories are:  

  1. Toxic content – comprising abuses like bullying, trolling, sexual harassment, unwanted explicit content, etc.
  2. Content leakage – doxxing, deadnaming and/or outing, non-consensual image exposure (“revenge porn”), etc. 
  3. Overloading – notification bombing, comment spam, dogpiling, negative reviews and ratings, etc. 
  4. False reporting – swatting, falsified abuse reports, falsified abuse flags 
  5. Impersonation – including synthetic pornography 
  6. Surveillance – stalking, device monitoring, etc. 
  7. Lockout and control – browser manipulation, content deletion, account lockout, etc. 

Table II in their paper reports the prevalence of online hate and harassment experiences among survey participants. 

Online harassment can take many forms and affects people differently based on their demographic characteristics, and the prevalence of these behaviours indicates that platforms must make a concerted effort to reduce potential harms to users. 

Psychological Impacts of Online Harassment 

There is a wealth of literature highlighting the psychological impacts of online harassment perpetrated against children and youth (primarily cyberbullying and sexual exploitation). Additionally, research with adults focusing on online gender-based violence and how it facilitates offline or “real-world” harm is widely available – for example, a paper by Chan discussing the incel (involuntary celibate) community in Canada. However, comprehensive research investigating the impacts of multiple forms of online harassment across the lifespan is lacking, which makes it challenging to discern how reviewing online harassment can affect Content Moderators’ psychological health.  

Bearing this in mind, we posit that reviewing online harassment (toxic content, content leakage, overloading, and so on) can result in mental ill health symptomatology similar to that experienced by mental health professionals and other allied health professionals. This postulate is based on our 2020 literature review of factors contributing to vicarious trauma. Mental health professionals regularly hear accounts of traumatic experiences from their clients during counselling sessions, and the literature suggests that the amount of time spent counselling trauma victims was the best predictor of trauma scores. Additionally, the literature highlights that ethnicity/race was a contributing factor in the development of compassion fatigue and burnout; specifically, African Americans and Asians were significantly more likely to report burnout than Caucasians, and Hispanics were significantly more likely to report compassion fatigue than Caucasians. 

Drawing this parallel to moderation work, we can make an educated assumption that regular secondary exposure through witnessing or viewing online harassment perpetrated against platform users may result in Content Moderators experiencing compassion fatigue, burnout, and other symptoms of vicarious trauma. These can include lingering feelings of anger or sadness about users’ victimization, bystander guilt and shame, hopelessness and pessimism resulting in a negative worldview, and preoccupation with users’ experiences outside working hours. 

Considerations of Collective Trauma 

In addition, we posit, based on the literature, that collective trauma may be another potential impact of regular exposure to online harassment perpetrated against platform users. According to Render Turmaud in Psychology Today, collective trauma refers to “the impact of a traumatic experience that affects and involves entire groups of people, communities, or societies”. The article highlights how collective trauma “can also change the entire fabric of a society […], impact relationships, alter policies and governmental processes, alter the way the society functions, and even change its social norms.” We saw this during the COVID-19 pandemic: the pandemic itself was the collective trauma, and the way society functioned was deeply altered. Societal norms also changed significantly, including the widespread use of face masks, hand-sanitizing stations in public spaces, and vulnerable populations remaining isolated for long periods. 

As a group, Content Moderators are facing a collective traumatic experience – and because these professionals are diverse, those who identify with groups targeted by online harassment may experience additional layers of collective trauma. For example, a Content Moderator who identifies as non-binary is likely to encounter bullying and trolling as part of their job, which may exacerbate the harmful psychological impact because of their personal connection to the content. The Pew Research Center highlights in their research that “50% of lesbian, gay or bisexual adults who have been harassed online say they think it occurred because of their sexual orientation”. They also share that “roughly half of women (47%) say they have encountered harassment online because of their gender” and “about half or more Black (54%) or Hispanic online harassment targets (47%) say they were harassed due to their race or ethnicity”. These statistics highlight how Content Moderators who belong to minority or historically marginalized groups are likely to review more online harassment that reflects their own identities. 

Content Moderators’ intersectional identities must therefore be carefully considered in order to mitigate the risk of psychological harm when they review online harassment. 

Conclusion 

There is no doubt that Content Moderators who moderate online harassment as part of their roles may experience mental health challenges, including acute secondary stress and vicarious trauma. Not only is exposure to the content itself a risk factor, but the collective trauma that Content Moderators may experience based on their intersectional identities must also be at the forefront of our minds.  

Zevo Health takes a tailored approach to our content moderation wellbeing services, including psycho-educational training about the psychological impacts of working on unique abuse areas like online harassment, with a focus on teaching Content Moderators effective coping skills to manage their psychological health. Speak to our Trust & Safety Solutions Directors to learn more about our services and how we can collaborate with you to protect your Content Moderator workforce.