
Navigating the Storm: Social Media Platforms, Elections, and the Psychological Toll on Content Moderators

February 19, 2024

2024 is an unprecedented year for elections, with more than 50% of the world’s population going to the polls. According to The Economist, this is the largest election year in history, with elections in India, France, Germany, Ukraine, Indonesia, and Taiwan.  

These elections pose significant challenges for platforms in managing disinformation and misinformation – especially as many of them have reduced their Trust and Safety and content moderation teams. In this article, we look at the psychological impact on the Content Moderators tasked with safeguarding users from misleading content, the challenges ahead, the role of social media platforms, and strategies to support moderation teams during this critical period. 

2024 – The Super Bowl of Elections

According to a recent article from NBC, disinformation is a major threat to the 2024 US presidential election. A World Economic Forum survey from January 2024 named misinformation and disinformation from artificial intelligence (AI) as the top global risk over the next two years – ahead of climate change and war.  

Social media can be invaluable in helping voters understand the candidates and the local and national election issues, and in increasing civic engagement, voter registration, and early voting. The chart below shows candidates’ social media activity in the 2016 and 2020 US elections – a major increase across the board.   

The Impact of Social Media on Election Information 

Research from MIT Sloan professor Sinan Aral and Deb Roy found that false rumors spread faster and wider than true information: falsehoods were 70% more likely to be retweeted than the truth and reached 1,500 people six times faster.   

Major platforms have generally worked hard to ensure misinformation is labeled as such, but this carries the danger of an ‘implied truth effect’: when users see certain posts labeled as potentially untrue, they assume everything that is not labeled must be true. A further issue is that platforms tend to push users into bubbles once they determine which side of the election those users are on – a phenomenon David Rand, Professor of Management Science and Brain and Cognitive Sciences at MIT, calls ‘information gerrymandering’ – which can bias the outcome of a vote 60% of the time.  

Types of Disinformation and Misinformation 

Camille François of Graphika developed a framework known as the Disinformation ABC, which breaks disinformation down into three elements: 

  1. Manipulative Actors – Those with a clear intent to disrupt democratic processes and the information environment around an election.
  2. Deceptive Behaviors – The tactics and techniques used by manipulative actors.   
  3. Harmful Content – Material used to undermine and hurt individuals, organizations, and processes, and to influence public debate.  

But looking at the forms of disinformation is just the start. Trust and Safety teams need to understand who is behind it, and why they are acting in this way.  

There are generally four types of actor behind disinformation: 

  • Foreign Influence – Usually carried out through professional, coordinated, well-prepared campaigns that aim to polarize voters over sensitive issues.  
  • Political Disinformation – Using fake identities and websites, amplifying elements of the debate, and manipulating content are just some of the tactics employed.  
  • Issue-Based Disinformation – Actors mobilize around a specific issue and use deceptive behaviors to target online groups.  
  • Lucrative Disinformation – Profiting from disinformation, often by using clickbait to drive people to a specific website, where the actors earn money from online ads.  

Real-World Examples of Disinformation and Misinformation in Elections 

Social media has had a profound impact on elections around the world, shaping political discourse, influencing voter behavior, and providing a platform for the rapid dissemination of information.  

Here are some examples that highlight the multifaceted impact of social media on electoral processes: 

1. Deepfakes and Misleading Content 

In the 2020 Taiwanese presidential election, deepfake technology was used to create misleading videos purporting to show President Tsai Ing-wen making false statements. The intent was to damage her credibility and influence public opinion. 

2. False Claims of Rigged Elections

Following the 2020 U.S. presidential election, false claims of widespread voter fraud and a rigged election spread rapidly on social media platforms, particularly on Twitter and Facebook. This misinformation contributed to the storming of the U.S. Capitol on January 6, 2021, by a mob of supporters of Donald Trump. 

3. WhatsApp Misinformation in India

Misinformation circulated on WhatsApp contributed to a series of lynchings in India in 2018. False rumors about child abduction and organ trafficking spread rapidly on the messaging platform, leading to mob violence and several deaths. These incidents prompted concerns about the role of private messaging apps in disseminating unchecked information. 

4. Russian Disinformation Campaigns 

Russian influence campaigns, such as those during the 2016 U.S. presidential election, have involved the creation and dissemination of false information on social media platforms. Fake accounts, manipulated images, and divisive content were used to exploit existing social and political tensions, fostering a climate of mistrust. 

5. Brazilian WhatsApp Misinformation

Misinformation on WhatsApp played a significant role in the 2018 Brazilian presidential election. Falsehoods and misleading content were disseminated through private groups, making it challenging for fact-checkers and authorities to address the issue promptly. 

6. Facebook and Myanmar’s Rohingya Crisis 

Facebook has been criticized for its role in the spread of hate speech and misinformation contributing to the Rohingya crisis in Myanmar. False narratives and anti-Rohingya content circulated on the platform, exacerbating ethnic tensions and violence against the Rohingya minority. 

7. YouTube Algorithm Amplification

During the COVID-19 pandemic, YouTube faced criticism for its algorithm’s role in amplifying misinformation. False claims about the virus, its origins, and potential treatments gained traction through the platform’s recommendation algorithms, fueling public health concerns, eroding trust in the health system, and likely contributing to preventable illness and deaths. 

Social media platforms offer unprecedented opportunities for political engagement and information sharing, but they also pose significant challenges in terms of the spread of misinformation, erosion of trust, and the potential manipulation of public opinion. As elections continue to evolve, understanding and addressing the impact of social media on democratic processes remain critical.  

Addressing these challenges requires a multi-faceted approach involving platform accountability, user education, and collaboration with fact-checkers and authorities. On the front line of this work is your Trust and Safety team, including your Content Moderators.  

9 Ways to Support Your Content Moderators 

Content Moderators serve as the frontline defense against the proliferation of disinformation and misinformation during elections. These individuals face an immense psychological burden, sifting through a relentless stream of content, often graphic or disturbing, while making real-time decisions about its credibility. The pressure to maintain a safe online environment puts them in a unique position, one that demands a nuanced understanding of the human psyche. 

How can you help? 

  1. Cognitive Fatigue and Decision-Making – Content Moderators must make swift and accurate decisions, distinguishing between genuine political discourse and deceptive content. The cognitive load associated with this task can lead to decision fatigue, compromising the ability to consistently apply moderation policies. Leadership needs to implement regular breaks, mindfulness practices, and stress-relief measures to mitigate the impact of prolonged cognitive exertion. 
  2. Emotional Toll – The nature of Content Moderators’ work exposes them to distressing and emotionally charged material. Continuous exposure to harmful narratives, hate speech, and graphic content can lead to emotional exhaustion, compassion fatigue, and other psychological difficulties. Regular breaks, peer support sessions, and rotation through less egregious content workflows will help them maintain their wellbeing. 
  3. Empowering Content Moderators – Recognizing the challenges ahead, platforms can implement proactive strategies to empower Content Moderators and enhance their overall wellbeing. We recommend strategic planning of wellbeing interventions, including resilience and mental health awareness training for Content Moderators and Mental Health Champion Training for team leads and managers, so they can appropriately support their moderation teams.  
  4. Robust Training Programs – Platforms should invest in comprehensive training programs that equip moderators with the skills necessary to discern nuanced forms of misinformation. This includes understanding cognitive biases, recognizing psychological manipulation tactics, and staying updated on evolving political narratives. Providing ongoing education fosters a sense of competence and confidence among moderators and gives them a break from the content.
  5. Transparent Communication Channels – Maintaining open communication channels between policy teams and Content Moderators is crucial, ensuring regular updates on policy changes, interpretation and emerging disinformation trends. It is especially important during times of potential political unrest globally to ensure Content Moderators can rely on their cross-functional teams for support. 
  6. Remember Why We Do the Work – A sense of purpose can contribute positively to moderators’ psychological wellbeing; however, we need to avoid placing the burden of defending a democratic election squarely on their shoulders, as that only adds stress.  
  7. Personal Biases – Everyone needs to recognize that Content Moderators – like all other human beings – will have an opinion about the election, the candidates and the issues raised. We need to be aware of the impact of our own biases on our interpretation of policies, especially when we personally experience an emotional reaction to a post. Going through unconscious bias training can be a helpful way to prepare Content Moderators to identify their biases and learn how to mitigate the risk of personal biases on decision-making.  
  8. Peer Support Networks – Establishing peer support networks allows Content Moderators to share experiences, coping mechanisms, and emotional support. Peer-led initiatives, moderated discussion forums, and mentorship programs create a sense of community and resilience, helping moderators navigate the challenges they face collectively.  
  9. Wellbeing Programs – Proactively focusing on Content Moderators’ psychological health and safety through a very personalized and specific wellbeing program can help protect their wellbeing. And remember, a proactive approach to wellbeing is an investment in the long-term performance and effectiveness of the moderation team. 

Conclusion 

Social media platforms must take proactive measures to support Content Moderators’ wellbeing. By recognizing the unique challenges posed by the influx of disinformation and misinformation, implementing robust training programs, fostering transparent communication, and prioritizing mental health, platforms can empower people to navigate this critical period successfully. As expert psychologists and Trust and Safety specialists, we see advocating for the holistic wellbeing of Content Moderators not just as a strategic imperative but as a moral responsibility in the age of digital information.  

Talk to us today to learn more about how we can help your content moderation teams, leadership, and the entire organization stay focused on their work rather than on declining mental health.