The wellbeing of content moderators can be impacted by a complex array of organizational factors. Moderators play a crucial role in maintaining the integrity and safety of online platforms, and as a result are often exposed to disturbing, graphic, and harmful content in their day-to-day work. They rely on policy to comply with company and community standards, as well as the broader legal and regulatory environment. Though the role of policy is to support moderator work, it can sometimes hinder their mental and emotional wellbeing.
The purpose of content guidelines and policy is to provide a shared set of rules for users and moderators alike, setting out standards of ideal behavior to foster healthy online communities. Moderators’ decisions to remove content or users from a platform are informed by these policies. Moderators use the policies’ guidelines to determine whether the content they are reviewing is in violation, enabling them to address platform abuse effectively.
However, vague, inconsistent, or overly restrictive policies can increase stress and uncertainty among moderators making decisions. Conversely, clear and well-defined policies can help them make these decisions more confidently. For policy developers, it can be a challenge to balance ‘freedom of speech’ while still ensuring community safety on their platforms.
How policy impacts content moderator wellbeing
Adhering to policy is not just about making an informed decision. There are several areas within policy development that can have an impact on the psychological wellbeing and mental capacity of content moderators. These areas include:
Grey areas within policy
- A common stressor for CMs is deciding whether grey-area content violates guidelines. This places additional strain on the moderator’s productivity levels and their quality assurance standards, i.e., making the ‘right’ decision based on the policy.
- Both productivity and quality metrics have a direct impact on moderators during performance reviews. There are also potential financial implications, as these metrics are frequently tied to incentives, e.g., bonuses for exceeding a minimum quality standard each month.
Frequency of updates
- When updates are too frequent and close together, they can create a lack of stability for moderators. Conversely, when updates are too far apart, the amount of information to take in and process can pile up, leading to cognitive overload.
- It requires mental capacity to understand the policy itself, learn the changes to the policy, and subsequently implement those changes with almost immediate effect. Policies must be adaptable and flexible; moderators are therefore required to keep up with these ongoing changes, which can present a unique occupational stressor.
Ethical dilemmas (i.e., personal values versus policy requirements)
- Sometimes, content can affect CMs morally, as the decision they make may create a conflict between their personal values and the policy guidelines. For example, a moderator may personally feel that a user is financially scamming other users via an online marketplace; however, because the evidence presented to them does not meet the policy’s threshold of credibility, they have no option but to keep the user profile active. They may feel pressured to set aside their personal values for the sake of adhering to policy, and the challenge lies in trying to balance the two.
- They may also experience additional stress worrying about the implications for users of whether a piece of user-generated content is actioned or not. They are under pressure to be ‘Guardians of the Internet’ and safeguard users, which brings its own set of expectations.
Practical tips for policy teams
The following are some practical tips that policy teams could implement to support the wellbeing of content moderators:
Conscious scheduling of changes
- Try to provide vendors with roadmaps of upcoming policy changes. Sharing these updates quarterly can alleviate the stress caused by uncertainty.
- Consider the frequency of updates: if they occur only once per year, the accumulated changes may be overwhelming; if they happen too frequently, moderators have no stability or chance to adapt.
- Plan to launch changes for similar policies at the same time, and limit the types of changes that are rolled out together. For example, you might group policies under themes such as fraud, hate speech, or terrorism, rolling out similarly themed policy changes simultaneously to reduce cognitive overload.
Adequate support for changes that are launching
- Ensure policy documents are clear and easy to implement
- Provide training materials that include multiple examples and practice content sets
- Resolve questions quickly when emergency updates bypass regular procedures
The more we consider how policy development impacts content moderators, the better we can safeguard them from these workplace stressors. This requires ongoing, open communication between policy teams and content moderators. Rather than solely providing moderators with training and updates, encourage conversations between policy developers and moderators about how recent updates or new policies have affected their ability to work efficiently and effectively. In addition, explore how these updates or policies have impacted their overall wellbeing by asking about grey areas, potential ethical dilemmas, and cognitive overload.
Safeguarding moderators also requires well-being service providers like Zevo Health to understand the wider ecosystem that impacts their wellbeing. By sharing and using this knowledge, providers can develop, implement, and evaluate solutions that support all stakeholders that are part of this ecosystem – including policy teams.
In summary, the policies and practices related to content moderation directly impact the mental and emotional wellbeing of content moderators. Platforms and employers should prioritize the creation of comprehensive policies that protect their moderators’ psychological health, provide them with adequate support, and maintain a collaborative work environment, with the end goal of safeguarding all platform users.