
Upcoming Supreme Court Decision: A Crucial Juncture for Content Moderators’ Mental Health

February 26, 2024

As the Supreme Court prepares to hear a case that could significantly alter the rules of content moderation on major digital platforms, we are closely monitoring its potential impact on the mental wellbeing of Content Moderators. These individuals are essential in shaping the online experience, yet their psychological health is often overlooked.

A group of 20 Republican attorneys general has petitioned the Supreme Court to limit Big Tech’s ability to moderate content on their platforms, citing concerns over free speech. This legal action, spearheaded by Missouri Attorney General Andrew Bailey along with his colleagues, seeks to support state laws that would restrict the ability of internet platforms to moderate content, especially based on political viewpoints. The request comes as the Supreme Court prepares to hear arguments in cases that question the extent to which these companies can exercise control over the content on their platforms.

The concern raised is that the power wielded by major tech companies like X (formerly Twitter) and Meta in moderating content could set a precedent for censorship, potentially infringing on First Amendment rights. 

These legal challenges come at a time when the nation is gearing up for the U.S. elections, a period typically marked by increased political discourse and the spread of disinformation online. The outcome of this Supreme Court case could thus have profound implications not just for free speech and the operations of Big Tech, but also for the mental health of Content Moderators, who may face an unprecedented volume and variety of content to monitor and manage.

The Mental Health Impact – Heightened Exposure to Harmful Content 

Content moderation, as it currently stands, exposes workers to a range of disturbing material, including hate speech, graphic violence, and abuse. This exposure has been linked to mental health issues such as stress, anxiety, depression, and PTSD, mirroring the symptoms experienced by first responders.

There is a further dimension to consider: the potential impact on moderators’ worldviews. Prolonged exposure to hate speech and harassment can skew a Content Moderator’s perspective, leading to a more cynical and negative worldview. This psychological impact goes beyond immediate stress and anxiety, potentially altering how moderators perceive society and interpersonal interactions.

Efforts to manage these risks include using AI tools to filter extreme content before it reaches human reviewers, and implementing robust wellbeing programs; a simplified sketch of such filtering appears below.
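To make the filtering idea concrete, here is a minimal sketch of how an automated triage layer might sit in front of human review. It is illustrative only: the `score_toxicity` stub, the threshold values, and the `triage` routing are hypothetical stand-ins, not any platform’s actual system; a real deployment would use a trained classifier or a vendor moderation API in place of the stub.

```python
# Minimal sketch of threshold-based pre-filtering to reduce moderator exposure.
# All names and thresholds here are hypothetical, for illustration only.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violations removed without human review
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous content queued for a moderator

@dataclass
class Decision:
    action: str   # "auto_remove", "human_review", or "allow"
    score: float

def score_toxicity(text: str) -> float:
    """Hypothetical classifier stub returning a score in [0, 1].
    A real system would call a trained model here, not a word list."""
    flagged_terms = {"hate", "abuse"}  # toy heuristic for illustration
    words = text.lower().split()
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, hits / max(len(words), 1) * 5)

def triage(text: str) -> Decision:
    """Route content so human moderators only see ambiguous items."""
    score = score_toxicity(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision("auto_remove", score)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return Decision("human_review", score)
    return Decision("allow", score)

if __name__ == "__main__":
    for post in ["have a great day", "hate hate abuse"]:
        d = triage(post)
        print(f"{d.action:13s} score={d.score:.2f}  {post!r}")
```

The design point is that the highest-severity material is removed automatically, so human moderators only review the ambiguous middle band, directly reducing their exposure to the most extreme content.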

However, if the Supreme Court rules in favour of less restrictive content moderation policies, the volume and intensity of harmful content could increase, potentially overwhelming existing support systems and exacerbating the mental health risks for moderators. The balance between maintaining free speech and protecting mental health becomes even more delicate in this scenario. 

Stakeholders Must Consider the Human Cost 

Given the pivotal nature of this case, it’s essential for all stakeholders, including the judiciary, tech companies, and policymakers, to consider the human cost of content moderation. The Supreme Court’s decision will not only influence the legal framework surrounding digital speech and Big Tech’s content moderation practices but will also directly impact the lives of thousands of Content Moderators. These individuals, who are often the unsung heroes maintaining the civility and safety of our online communities, face significant mental health challenges due to the nature of their work. 

At Zevo Health, we understand the vital importance of mental wellbeing, especially for those in high-stress roles like content moderation. We advocate for a future where the mental health of these workers is given paramount consideration, balanced appropriately with the rights to free speech and expression. As the digital world continues to evolve, it is our collective responsibility to ensure that those who work tirelessly to safeguard our online experiences are supported, protected, and valued.