The Innovative Role of AI in Content Moderation and Why it’s Important

March 1, 2024

I was on a webinar the other day where AWS was showcasing Amazon Rekognition, its AI content moderation service. The demos included automatic blurring of egregious images – even simple things like detecting a beer bottle in a user’s camera feed and blurring that part of the frame. The presenter also demonstrated automated text redaction, replacing the written ‘go to hell’ with asterisks in real time and warning the user that their content was being changed. The service can also redact offensive words from transcriptions and mask them in audio recordings. Very cool.
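
To make that first step concrete, here is a minimal sketch of flagging unsafe content in an image with Rekognition’s DetectModerationLabels API via boto3. The whole-image Gaussian blur is a stand-in for the region-level blurring shown in the demo, and the confidence threshold is an assumption, not a recommended setting:

```python
# A minimal sketch, assuming AWS credentials are configured for boto3.
import io

import boto3
from PIL import Image, ImageFilter  # Pillow, for the illustrative blur

rekognition = boto3.client("rekognition", region_name="us-east-1")

def moderate_image(image_bytes: bytes, min_confidence: float = 80.0) -> bytes:
    """Return the image unchanged, or blurred if Rekognition flags it."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes}, MinConfidence=min_confidence
    )
    labels = response.get("ModerationLabels", [])
    if not labels:
        return image_bytes  # nothing flagged; pass through unchanged
    for label in labels:
        # e.g. "Alcohol" for a beer bottle in the camera feed
        print(f"flagged: {label['Name']} ({label['Confidence']:.1f}%)")
    blurred = (
        Image.open(io.BytesIO(image_bytes))
        .convert("RGB")
        .filter(ImageFilter.GaussianBlur(radius=12))
    )
    out = io.BytesIO()
    blurred.save(out, format="JPEG")
    return out.getvalue()
```

A production system would blur only the offending region and would stream frames rather than single images, but the shape of the pipeline – detect, threshold, transform – is the same.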

One of the key things I learnt from the session is that users believe it is the brand’s responsibility to moderate content on the platform – whether that is User Generated Content (UGC) or any other type of content. This means the impact on brand reputation and value is directly related to the level of misinformation or disinformation on the platform.

We have seen how X’s advertising revenues have been impacted by concerns about the level of disinformation on the platform. The social media platform has been generating about $600 million in ad revenue per quarter this year, down from the more than $1 billion per quarter it earned in 2022, Bloomberg reported on December 12th.

So, how are people using AI to help with content moderation – and how should organizations, content moderation team leads and technologists on these platforms think about AI adoption? Let’s see.

Organizational Perspective 

Among leading social media platforms, we are seeing more and more of our customers incorporate AI into content moderation as part of their commitment to user safety and community standards. Another major driver for AI adoption is protecting the content moderation team itself – shielding moderators from some of the more egregious images, videos, audio and text.

AI also enables the following benefits:

  • Speed – AI enables the platform to handle large volumes of content efficiently. This scalability is crucial for a platform experiencing rapid growth in user-generated content. The speed of identification and removal of content is a key requirement in much of the global legislation in this area.
  • Consistency – AI can consistently apply content policies, reducing the likelihood of human error and ensuring a more uniform enforcement of community guidelines. This can ensure fairness and transparency in the content moderation process. 
  • Real-time Detection – AI allows for real-time content detection, enabling swift response to potentially harmful content. This helps in maintaining a safer and more secure online environment.  
  • Customization – AI models can be customized to adapt to a platform’s unique content nuances, ensuring that moderation aligns with the specific community standards of that platform (see the threshold sketch after this list).
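
As a rough illustration of the consistency and customization points, here is a sketch of uniform policy enforcement. The scores are assumed to come from an upstream classifier; the policy names and per-policy thresholds are hypothetical, platform-specific tuning:

```python
# A minimal sketch of consistent, customizable policy enforcement.
POLICY_THRESHOLDS = {
    "hate_speech": 0.70,  # stricter: act at lower model confidence
    "violence": 0.80,
    "profanity": 0.90,    # looser: only clear-cut cases
}

def violations(scores: dict[str, float]) -> list[str]:
    """Apply the same thresholds to every post, so enforcement is uniform."""
    return [
        policy
        for policy, threshold in POLICY_THRESHOLDS.items()
        if scores.get(policy, 0.0) >= threshold
    ]

# The same input always produces the same decision:
print(violations({"profanity": 0.95, "violence": 0.30}))  # ['profanity']
```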

Content Moderator Team Lead Perspective 

From the perspective of a content moderation team lead, the key question is how human moderators and AI systems collaborate.

  • Human Oversight – AI is a tool to aid human moderators, not a replacement for them. Human judgment is crucial in understanding context, sarcasm, and cultural nuances that AI may struggle with. 
  • Complex Decision-Making – AI can handle straightforward cases efficiently, allowing human moderators to focus on more complex content that requires nuanced decision-making. This synergy enhances the overall effectiveness of content moderation (a triage sketch follows this list). 
  • Emotional Toll – Moderation takes a real emotional toll on human moderators. AI can pre-screen explicit and disturbing content, reducing the psychological impact on the moderation team.
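
One common way to structure that collaboration is confidence-based triage: the AI acts alone on clear-cut cases and routes the ambiguous middle band to humans. The single score in [0, 1] and both thresholds below are assumptions, not any particular platform’s policy:

```python
# A minimal sketch of confidence-based triage between AI and humans.
AUTO_REMOVE_ABOVE = 0.95   # near-certain violations: removed automatically
AUTO_APPROVE_BELOW = 0.10  # near-certain safe content: published

def route(item_id: str, score: float) -> str:
    """Decide whether the AI acts alone or a human moderator is needed."""
    if score >= AUTO_REMOVE_ABOVE:
        return f"{item_id}: auto-removed"
    if score <= AUTO_APPROVE_BELOW:
        return f"{item_id}: auto-approved"
    # Ambiguous middle band: context, sarcasm and cultural nuance need
    # human judgment. Media is pre-blurred so moderators opt in to viewing.
    return f"{item_id}: queued for human review (pre-blurred)"

print(route("post-123", 0.55))  # queued for human review (pre-blurred)
```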

Technologist Perspective 

From a technologist’s standpoint, the technical detail of AI in content moderation raises potential challenges and ethical considerations:

  • Algorithmic Bias – Mitigating algorithmic bias is an ongoing effort, so that AI systems are fair and unbiased in their content moderation decisions. Transparency in the AI development process is crucial (a minimal audit sketch follows this list). 
  • Continuous Improvement – AI models are continuously retrained and improved through feedback loops. This iterative process helps address evolving online trends and new forms of inappropriate content. 
  • Ethical AI Use – Ethical AI use encompasses privacy considerations, user consent, and adherence to regulatory frameworks. It’s crucial to build and deploy AI systems responsibly.
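
As one concrete example of what bias-mitigation work can look like in practice, here is a sketch of a simple audit that compares false-positive rates across demographic groups. The decision-log schema (group, flagged, actually_violating) is hypothetical:

```python
# A minimal sketch of a fairness audit over logged moderation decisions.
from collections import defaultdict

def false_positive_rates(decisions: list[dict]) -> dict[str, float]:
    """Per-group rate at which benign content was wrongly flagged."""
    flagged = defaultdict(int)  # benign items flagged, by group
    benign = defaultdict(int)   # all benign items, by group
    for d in decisions:
        if not d["actually_violating"]:
            benign[d["group"]] += 1
            if d["flagged"]:
                flagged[d["group"]] += 1
    return {group: flagged[group] / benign[group] for group in benign}

# Large gaps between groups' rates are a signal of bias worth investigating.
```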

Challenges with AI in Content Moderation

While AI has made significant strides in content moderation, there are several challenges associated with its implementation, including: 

  • False Positives and Negatives – AI models may incorrectly identify content as violating policies (false positives) or fail to detect inappropriate content (false negatives). Over-zealous filtering can lead to the suppression of legitimate content, while false negatives allow inappropriate content to go unnoticed. 
  • Contextual Understanding – AI may struggle to understand the context, humor, and nuances in content, leading to misinterpretations. Content that is contextually acceptable may be wrongly flagged, and harmful content that uses sarcasm or cultural references may go undetected. 
  • Dynamic and Evolving Content – The internet is dynamic, and new forms of content constantly emerge. AI models may struggle to keep up with evolving trends. Emerging threats, trends, or variations in content may not be adequately addressed by pre-existing AI models, requiring frequent updates. 
  • Bias – AI models can inherit biases present in training data, leading to unfair or discriminatory content moderation decisions. Biased algorithms can disproportionately affect certain demographics, reinforcing existing prejudices and compromising the platform’s commitment to fairness. 
  • Adversarial Attacks – Malicious actors can intentionally manipulate content to bypass AI filters, creating content that appears innocuous to humans but is harmful or violates policies. Adversarial attacks can undermine the effectiveness of AI models, requiring constant vigilance and adaptation (see the normalization sketch after this list).
  • Resource Intensity – Training and maintaining effective AI models require substantial computational resources and expertise. Smaller platforms or organizations with limited resources may find it challenging to implement and sustain advanced AI-driven content moderation systems. 
  • User Privacy Concerns – AI models often analyze user-generated content, raising concerns about privacy and data security. Balancing the need for content moderation with user privacy expectations can be a delicate task. Ensuring responsible data handling practices is crucial. 
  • Cultural Sensitivity – AI models may not be equally effective across different languages, cultures, or regions. Content moderation systems may inadvertently favor certain languages or cultural norms, leading to inconsistencies in enforcement.  
  • Transparency – Many AI models operate as “black boxes,” making it challenging to explain the reasoning behind specific moderation decisions. Lack of transparency can erode user trust and hinder the ability to address concerns or disputes regarding moderation actions. 
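
To give a taste of the adversarial problem, here is a sketch of one small counter-measure: normalizing common character substitutions before matching, so that ‘g0 t0 h3ll’ doesn’t slip past a filter for ‘go to hell’. The substitution map and block list are purely illustrative; real evasion techniques are far more varied:

```python
# A minimal sketch of countering simple filter evasion via normalization.
LEET_MAP = str.maketrans("013457@$", "oleastas")  # 0->o, 1->l, 3->e, ...
BLOCK_LIST = {"go to hell"}

def normalize(text: str) -> str:
    return text.lower().translate(LEET_MAP)

def is_blocked(text: str) -> bool:
    return normalize(text) in BLOCK_LIST

print(is_blocked("G0 t0 h3ll"))  # True: evasion caught after normalizing
```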

As some additional reading, there is an excellent paper from Big Data & Society that looks at ethical scaling for content moderation of extreme speech – the authors concluded that “lack of cultural contextualization has resulted in false positives and over-application.”

A 2021 research paper from Virginia Tech on the psychological well-being of content moderators found that “Human abilities still exceed state-of-the-art AI for many data processing tasks, including content moderation. While AI will continue to improve, the use of human computation enables companies to deliver greater system capabilities today.”

In conclusion, a comprehensive understanding of AI’s role in content moderation involves a balanced approach, acknowledging both the benefits and challenges. Effective communication and transparency are key to gaining user trust and maintaining a safe online environment. 

To find out more about how Zevo can support your content moderation teams, please talk to us today.