
Most eyes are on the US Supreme Court as it hears arguments about how much content moderation should be required of platforms.
Looking at the overall legal environment, we wanted to start a series of blog posts on legislation related to content moderation and the reporting obligations set out in law.
Many of these laws are new. Courts and regulators will interpret them in practice for the first time, and many observers will watch those decisions closely.
Why Are Governments Implementing Social Media Regulations?
In recent years, social media platforms have become integral to our daily lives. As their influence has grown, so too has concern for the psychological safety of their users.
Governments around the world are responding to these challenges by proposing and enacting legislation aimed at regulating content on social media and ensuring a safer online environment.
This article provides an in-depth review of landmark legislation regulating social media. We’ll explore the Digital Services Act in the EU, the Online Safety Bill in the UK, and relevant legislation in the US and Australia.
Digital Services Act (EU) and Social Media Compliance
The European Union’s Digital Services Act (DSA) aims to create a safer digital space for users within the EU. It introduces a dual-tier system, categorizing platforms as either very large online platforms (VLOPs) or online intermediaries.
VLOPs are subject to more stringent obligations, including increased content moderation measures and transparency reporting.
Non-compliance with the Digital Services Act can result in fines of up to 6% of a company’s annual global turnover. This highlights the EU’s commitment to ensuring strong enforcement of online safety regulations.
The Dual-Tier System for Platforms
Platforms are categorized based on size, with very large online platforms (those with 45 million or more average monthly active users in the EU) facing more stringent obligations. This includes measures to combat the spread of illegal content and ensure user safety.
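To make the threshold concrete, here is a minimal sketch of how a platform might check which DSA tier it falls into. The 45 million figure comes from the act itself; the function name and the example user counts are hypothetical.

```python
# Illustrative only: a simple check of the DSA's 45 million average monthly
# active EU user threshold for "very large online platform" status.
VLOP_THRESHOLD_EU_USERS = 45_000_000

def classify_platform(avg_monthly_eu_users: int) -> str:
    """Return the DSA tier implied by average monthly active users in the EU."""
    if avg_monthly_eu_users >= VLOP_THRESHOLD_EU_USERS:
        return "very large online platform (VLOP)"
    return "online intermediary / smaller platform"

# Hypothetical figures for demonstration.
print(classify_platform(52_000_000))  # very large online platform (VLOP)
print(classify_platform(8_000_000))   # online intermediary / smaller platform
```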
Content Regulation on Social Media Standards
The act outlines standards for content regulation on social media, requiring platforms to employ measures to prevent the dissemination of specific types of content, including terrorist content, child sexual abuse material, and hate speech.
Transparency and Accountability Mandates
Platforms are mandated to provide transparency reports detailing their content moderation actions. Authorities are empowered to assess the effectiveness of content moderation measures.
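Transparency reports are ultimately aggregated from individual moderation decisions. The sketch below shows one hypothetical way a platform might record those decisions so they can be rolled up for a reporting period; the DSA does not prescribe this schema, and every field name here is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from collections import Counter

# Hypothetical record of a single moderation action. This is not a prescribed
# DSA schema, only an illustration of data a platform might aggregate into
# its transparency reports.
@dataclass
class ModerationAction:
    content_id: str
    category: str          # e.g. "hate_speech", "terrorist_content"
    action: str            # e.g. "removed", "restricted", "no_action"
    automated: bool        # flagged by automated tooling or by human review
    timestamp: datetime

def summarize(actions: list[ModerationAction]) -> Counter:
    """Aggregate actions by (category, action) for a reporting period."""
    return Counter((a.category, a.action) for a in actions)

actions = [
    ModerationAction("c1", "hate_speech", "removed", True, datetime.now(timezone.utc)),
    ModerationAction("c2", "hate_speech", "no_action", False, datetime.now(timezone.utc)),
]
print(summarize(actions))
```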
Alignment with National Laws
What counts as illegal content is defined by both the national law of each member state and overarching EU law. For example, removing photos of Nazi symbols is required in Germany, but would not be required in Denmark.
Impact of the Digital Services Act on Content Moderation
The Digital Services Act has direct consequences for how platforms organize content moderation, allocate resources, and demonstrate transparency to regulators. The points below highlight the main areas where larger services face new expectations and potential penalties.
- Stricter Requirements for Larger Platforms: Very large platforms face more comprehensive obligations, necessitating advanced content moderation technologies and algorithm transparency to meet the prescribed standards.
- Collaboration with Authorities: Platforms are required to cooperate with national authorities to address illegal content. This collaboration enhances the effectiveness of content moderation efforts with support from government agencies.
- Financial Penalties: Fines of up to 6% of global turnover serve as a substantial deterrent for non-compliance. The financial impact shows the EU’s commitment to strong enforcement.
- Operating Ban: In exceptional cases of serious harm, the legislation allows for a temporary ban on operations, where platforms could be shut down. The Commission will have enforcement powers similar to those it has under antitrust proceedings.
- Enhanced Cooperation Across the EU: An EU-wide cooperation mechanism will be established between national regulators and the Commission. Already, we have seen the EU open a formal probe of TikTok under the DSA, citing child safety, risk management, and other concerns.
Risk Management and Data Protection
The DSA requires platforms to conduct comprehensive risk assessments alongside their existing General Data Protection Regulation (GDPR) obligations. The two regimes work in tandem, addressing both content safety and user privacy, and both apply to services offered to users in the EU regardless of where the provider is established.
Germany’s Network Enforcement Act (NetzDG)
Germany has been a pioneer in European social media regulation with its Network Enforcement Act (NetzDG). The law establishes strict 24-hour removal timelines for clearly illegal content and requires platforms to maintain transparent complaint procedures.
The NetzDG demonstrates how individual member states complement EU-wide frameworks with their own enforcement mechanisms, including judicial oversight requirements for content decisions.
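To make the 24-hour obligation concrete, here is a minimal sketch of how a platform might check whether a complaint about clearly illegal content was handled inside the NetzDG window. The 24-hour deadline comes from the law; the function, field names, and example timestamps are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

# NetzDG requires removal of clearly illegal content within 24 hours of a
# complaint; the constant reflects that figure, everything else is illustrative.
NETZDG_DEADLINE = timedelta(hours=24)

def within_netzdg_window(complaint_at: datetime, removed_at: datetime | None) -> bool:
    """True if the content was removed within 24 hours of the complaint."""
    if removed_at is None:
        # Still online: compliant only if the deadline has not yet passed.
        return datetime.now(timezone.utc) - complaint_at <= NETZDG_DEADLINE
    return removed_at - complaint_at <= NETZDG_DEADLINE

complaint = datetime(2024, 3, 1, 9, 0, tzinfo=timezone.utc)
removal = datetime(2024, 3, 1, 20, 30, tzinfo=timezone.utc)
print(within_netzdg_window(complaint, removal))  # True: removed after ~11.5 hours
```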
UK Online Safety Bill and Platform Duty of Care
The Online Safety Bill in the UK represents a comprehensive effort to tackle online harm and protect users. It requires social media companies to take responsibility for the content on their platforms, particularly content that may cause harm.
The bill proposes the establishment of a regulatory framework with the power to impose hefty fines on platforms that fail to adhere to prescribed standards.
Companies can face fines of up to 10% of their global turnover for non-compliance, providing a significant financial incentive for adherence to the regulations.
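The scale of that incentive is easier to see with some rough arithmetic. The sketch below computes a theoretical maximum exposure under the 10% ceiling; the constant names and the revenue figure are hypothetical, and an actual fine would depend on the regulator’s assessment of the breach.

```python
# Illustrative only: theoretical maximum fine under a 10%-of-global-turnover
# ceiling, applied to a hypothetical annual revenue figure.
OSB_MAX_FINE_RATE = 0.10

def max_fine_exposure(annual_global_turnover: float, rate: float = OSB_MAX_FINE_RATE) -> float:
    """Upper bound on the fine for a given annual global turnover."""
    return annual_global_turnover * rate

# A hypothetical platform with GBP 2 billion in annual global turnover.
print(f"Maximum exposure: GBP {max_fine_exposure(2_000_000_000):,.0f}")  # GBP 200,000,000
```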
Duty of Care Obligations
The bill establishes a legal duty of care on social media companies to protect users from harmful content. Platforms are required to take proactive measures to ensure user safety.
Regulatory Oversight and Enforcement
The legislation designates an independent regulator, Ofcom, with the authority to enforce compliance. This body will set codes of practice outlining the expectations for online safety.
Defining Harmful Content
The bill defines harmful content broadly, encompassing illegal content as well as content that may be legal but harmful, such as cyberbullying, hate speech, and misinformation.
Children’s Safety and Age Verification
The bill places particular emphasis on protecting children online, requiring platforms to implement age verification measures and provide verifiable parental consent mechanisms. These age-appropriate design requirements aim to protect young users from harmful content and ensure platforms maintain appropriate safety standards for minors.
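As a simplified illustration of an age gate, the sketch below checks a declared date of birth against minimum-age and adult thresholds and flags accounts that would need parental consent. Real age assurance relies on verification evidence rather than self-declared dates, and the thresholds and decision logic here are assumptions for illustration only.

```python
from datetime import date

# Illustrative thresholds; the bill does not prescribe these exact values.
MINIMUM_AGE = 13
ADULT_AGE = 18

def age_on(dob: date, today: date) -> int:
    """Whole years between date of birth and the given date."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def access_decision(dob: date, today: date | None = None) -> str:
    """Hypothetical gating decision based on a declared date of birth."""
    today = today or date.today()
    age = age_on(dob, today)
    if age < MINIMUM_AGE:
        return "deny: below minimum age"
    if age < ADULT_AGE:
        return "allow with parental consent and age-appropriate settings"
    return "allow: adult account"

# A hypothetical 14-year-old user.
print(access_decision(date(2010, 6, 1), date(2025, 1, 15)))
# allow with parental consent and age-appropriate settings
```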
Impact of the Online Safety Bill on Content Moderation
The Online Safety Bill reshapes how social media companies plan, resource, and monitor their content moderation systems. The main areas of impact include:
- Increased Accountability: Social media companies are held accountable for the content on their platforms. The duty of care places the onus on platforms to implement strong content moderation policies to detect and remove harmful content promptly.
- Transparency Reporting: Platforms are required to publish transparency reports outlining the actions taken to address harmful content. This approach supports transparency and allows users to gauge the effectiveness of content moderation efforts, with provisions for a right to appeal moderation decisions.
- Fines for Non-Compliance: Hefty fines, up to 10% of global turnover, act as a strong deterrent for non-compliance. This financial penalty incentivizes companies to invest in advanced content moderation technologies and practices. Senior management could also face jail time if the non-compliance is egregious enough.
US Social Media Legislation – State and Federal Proposals
In the United States, the regulatory environment for social media is more complex, with various state-level initiatives and ongoing discussions at the federal level.
The proposed SAFE TECH Act, for instance, seeks to reform Section 230 of the Communications Decency Act, which currently provides broad immunity for platforms, with the goal of making platforms more accountable for harmful content.
Individual states have also acted; California, for example, has introduced the Silenced No More Act, which allows users to sue platforms over content moderation decisions. While federal legislation is still pending, the trend towards increased scrutiny of social media platforms is evident.
The SAFE TECH Act Proposal
This proposed legislation seeks to reform Section 230, making platforms more accountable for content moderation decisions. It removes immunity in cases involving illegal content and ensures platforms act in good faith when moderating content.
California’s Silenced No More Act
This act allows users to sue platforms over content moderation decisions, establishing a private right of action and promoting transparency and accountability.
Children’s Online Safety Legislation
Multiple proposals address children’s safety online, including the Kids Online Safety Act (KOSA) and California’s Age-Appropriate Design Code (CAADCA). These measures build upon existing protections under the Children’s Online Privacy Protection Act (COPPA), requiring enhanced protections for minors and age-appropriate content design.
State-Specific Social Media Regulations
Since 2021, 38 states have introduced over 250 bills to regulate social media platforms. Critics argue that many of these bills are unconstitutional, conflict with federal law, or would place major barriers on platforms’ ability to restrict dangerous content.
- New York: SB 9465 establishes a task force on social media and violent extremism, and AB 7865/SB 4511 requires social media networks to provide and maintain mechanisms for reporting hateful conduct on their platforms.
- Minnesota: SF 3933/HF 3724 regulates algorithms that target user-generated content at users under the age of 18.
- Ohio: HB 441 seeks to prohibit social media platforms from censoring a user, a user’s expression, or a user’s ability to receive the expression of another user.
Impact of US Social Media Legislation on Content Moderation
US proposals and state laws are already shaping how platforms structure moderation policies, manage legal risk, and respond to user complaints. The points below highlight two measures that have direct implications for content decisions and accountability.
- Section 230 Reform: If enacted, the SAFE TECH Act would prompt platforms to reevaluate their content moderation practices to align with the proposed changes to Section 230, potentially leading to more cautious moderation policies.
- Increased Accountability in California: The Silenced No More Act introduces a legal avenue for users to challenge content moderation decisions, which supports increased accountability and transparency.
Australian Social Media Legislation – Online Safety Act 2021
Australia has taken steps to address online safety concerns through the Online Safety Act 2021. This legislation grants the eSafety Commissioner the authority to issue removal notices for harmful online content and imposes fines for non-compliance.
Companies failing to remove specified content within the designated timeframe may face financial penalties. The legislation emphasizes the need for swift and effective content moderation to protect Australian users.
Powers of the eSafety Commissioner
The act empowers the eSafety Commissioner to issue removal notices for harmful content. Failure to comply with these notices can result in fines.
Definitions of Harmful Content
The legislation specifies types of harmful content, including cyberbullying, image-based abuse, and other online harms.
Impact of Australian Social Media Legislation on Content Moderation
Australian online safety rules directly influence how platforms handle harmful content and structure their content moderation processes.
- Swift Removal Requirements: The legislation mandates the prompt removal of specified harmful content, necessitating reliable and efficient content moderation processes.
- Financial Penalties: Fines of up to AUD 555,000 per day for individuals and AUD 2.77 million per day for corporations provide a significant financial incentive for platforms to prioritize effective content moderation. A rough calculation of how quickly this exposure can accumulate follows this list.
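Because the penalties accrue per day, exposure can grow quickly while a removal notice sits unactioned. The sketch below runs that arithmetic using the daily maximums quoted above; the ten-day scenario and the function itself are hypothetical.

```python
# Illustrative only: potential accrued exposure while a removal notice
# remains unactioned, using the daily maximums cited in this article.
DAILY_MAX_CORPORATION_AUD = 2_770_000
DAILY_MAX_INDIVIDUAL_AUD = 555_000

def accrued_exposure_aud(days_outstanding: int, corporate: bool = True) -> int:
    """Theoretical maximum penalty accrued over a number of non-compliant days."""
    daily = DAILY_MAX_CORPORATION_AUD if corporate else DAILY_MAX_INDIVIDUAL_AUD
    return days_outstanding * daily

# A hypothetical notice left unactioned by a corporation for 10 days.
print(f"AUD {accrued_exposure_aud(10):,}")  # AUD 27,700,000
```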
The Future of Digital Media Regulations
The global surge in laws regulating social media platforms reflects the growing recognition of the impact these platforms have on society. The diverse approaches taken by the UK, the EU, the US, and Australian authorities reflect the unique challenges each region faces.
When Multiple Regulatory Frameworks Converge
You’re preparing your quarterly compliance review, and the complexity is mounting. Your platform now falls under the Digital Services Act’s very large online platform category, which means stricter content moderation standards and transparency reporting requirements.
At the same time, you’re implementing changes to meet the UK Online Safety Bill’s duty of care obligations, and your Legal team is tracking 38 US states that have introduced content regulation bills. Each framework demands different approaches to harmful content, different reporting structures, and different definitions of what requires removal.
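One way compliance teams keep overlapping obligations straight is a simple register that maps each framework to its headline duties and penalty ceilings. The sketch below is a hypothetical, heavily simplified version of such a register, populated only with points already covered in this article; real obligations are far more detailed and change as guidance and case law develop.

```python
# A hypothetical, heavily simplified compliance register for illustration only.
COMPLIANCE_REGISTER = {
    "EU Digital Services Act": {
        "applies_if": "45M+ average monthly active users in the EU (VLOP tier)",
        "key_duties": ["risk assessments", "transparency reports", "cooperation with authorities"],
        "max_penalty": "6% of annual global turnover",
    },
    "UK Online Safety Bill": {
        "applies_if": "services reaching UK users",
        "key_duties": ["duty of care", "age verification", "transparency reports"],
        "max_penalty": "10% of global turnover",
    },
    "Australia Online Safety Act 2021": {
        "applies_if": "services accessible to Australian end users",
        "key_duties": ["comply with eSafety removal notices"],
        "max_penalty": "daily fines while non-compliant",
    },
}

for framework, details in COMPLIANCE_REGISTER.items():
    print(f"{framework}: {details['max_penalty']}")
```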
Meanwhile, your Trust and Safety team is asking for additional wellbeing support to handle the increased volume and pressure. You need to allocate resources that satisfy regulators in multiple regions while ensuring your Content Moderators can do their work without burning out.
The imposition of fines and prison time for non-compliance is a common thread, signaling a collective determination to ensure that social media companies prioritize the safety and wellbeing of their users.
The Implications for Social Media Companies
As these legislative frameworks continue to evolve, the impact on companies, both in terms of financial penalties and potential legal consequences, serves as a powerful incentive for the industry to proactively address the challenges posed by harmful online content.
These legislative acts collectively emphasize the importance of user safety and place substantial responsibility on social media platforms to implement effective content moderation measures.
The combination of duty of care, regulatory oversight, transparency reporting, and financial penalties underscores a global commitment to creating a safer online environment.
How Do These Laws Affect Content Moderators’ Wellbeing?
As regulators focus on the safety and wellbeing of users and set distinct requirements around corporate accountability and the legal liability of platforms, we believe they must also consider the impact these digital media regulations will have on Trust and Safety teams and on Content Moderator wellness.
Once new legislation is launched and enforced, regulators will need to ensure that platforms are adequately resourced to support their Trust & Safety teams in meeting regulatory requirements while protecting the psychological health and wellbeing of their employees.
Protecting Both Users and Moderators – A Balanced Approach
At Zevo Health, we are working with our customers and regulators to design and build the absolute best clinical solution for Content Moderators and Trust & Safety teams, to ensure their protection and the reduction of risk for the organization.
If you are interested in supporting your Content Moderators’ wellbeing, get in touch with our experts today.