The question of how to regulate online speech without breaking the internet – or democracy – is precisely the kind of debate that benefits from getting the right people in the same room. This is one of the topics being explored at Human x AI Europe, May 19 in Vienna – where Europe decides what kind of digital future it wants to build.
The Disagreement Worth Having
When Germany passed the Netzwerkdurchsetzungsgesetz (NetzDG) in 2017, the debate immediately polarized. Supporters called it a necessary response to rising hate speech and far-right extremism. Critics labeled it "draconian censorship" that would force platforms into over-removal. Both sides were partly right – and that's what makes this case study worth examining carefully.
The law applies to social media platforms with more than 2 million registered users in Germany. It requires them to remove "manifestly unlawful" content within 24 hours of receiving a complaint, and all other illegal content within seven days. Platforms that systematically fail to comply face fines of up to €50 million. The law covers 22 categories of criminal offenses under the German Criminal Code (StGB), including incitement to hatred, defamation, and threats of violence.
What makes NetzDG analytically interesting is that it represents a specific regulatory choice: rather than targeting speakers directly, it conscripts private platforms into governmental service as content regulators. Legal scholar Jack Balkin calls this "new school speech regulation" – coercing intermediaries rather than speakers. The question is whether this approach works, and at what cost.
What the Evidence Actually Shows
The most rigorous empirical study of NetzDG's effects comes from economists Rafael Jiménez Durán, Karsten Müller, and Carlo Schwarz. Their 2024 research found that the law "transformed social media discourse: posts became less hateful, refugee-related content less inflammatory, and the use of moderated platforms increased." More significantly, they found offline effects: anti-refugee hate crimes decreased by approximately 1% for every standard-deviation increase in exposure to far-right social media use.
This is not a trivial finding. It suggests that content moderation can disrupt the coordination mechanisms that turn online hatred into real-world violence. As Müller and Schwarz argue, "hateful activity on Facebook can indeed cause hate crimes, and does not only reflect underlying tensions."
But the story doesn't end there. A 2025 study from Chapman University found evidence of a displacement effect: NetzDG's "chilling effect and over-censorship on public platforms displaced anti-immigration discourse into private encrypted spaces that the law cannot reach." The research showed that users of encrypted messaging apps like Telegram (odds ratio 4.13) and WhatsApp (odds ratio 1.89) were significantly more likely to support the far-right Alternative für Deutschland (AfD) party than users of moderated public platforms.
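For readers unfamiliar with the statistic: an odds ratio compares the odds of an outcome between two groups, so a value of 4.13 means the odds of supporting the AfD among Telegram users were roughly four times the odds among users of other platforms – not that support was four times as common. The sketch below uses invented counts purely to show how such a figure is computed; the numbers and the odds_ratio helper are illustrative assumptions, not data or code from the Chapman University study.

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio from a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    return (a / b) / (c / d)

# Hypothetical counts, chosen only for illustration (not the study's data):
# 40 of 140 messenger users support the party (odds = 40/100 = 0.4),
# 100 of 1,100 other users do (odds = 100/1000 = 0.1).
print(odds_ratio(40, 100, 100, 1000))  # -> 4.0
```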
This presents what the researchers call a "regulatory paradox": overly broad content moderation can backfire by pushing extremists into harder-to-reach spaces where they grow stronger.
The Over-Removal Question
Critics of NetzDG have consistently argued that the law's penalty structure – massive fines for under-enforcement, no consequences for over-enforcement – creates systematic incentives for platforms to remove legal speech. An analysis from Yale Law School noted that "social media companies are more likely to remove demeaning content that could potentially violate the Criminal Code than risk a fifty-million-Euro fine."
The Information Technology & Innovation Foundation cites studies suggesting that 87.5% to 99.7% of removed content was legally permissible speech. However, this figure deserves scrutiny: it comes from research examining content removed under platform community standards, not specifically under NetzDG complaints. The distinction matters because platforms often prefer to cite their own community standards rather than NetzDG when removing content – precisely to avoid the law's liability framework.
A study by the Centre for European Policy Studies (CEPS) found that "NetzDG has not provoked mass requests for takedowns. Nor has it forced internet platforms to adopt a 'take down, ask later' approach." Removal rates for reported content were modest – 21.2% for Facebook and just 10.8% for Twitter – hardly evidence of indiscriminate censorship.
What NetzDG Doesn't Address
Perhaps the most important critique of NetzDG comes from Rachel Griffin's research at Sciences Po: the law addresses only one factor driving online hate speech – the absence of media gatekeepers – while ignoring algorithmic recommendations that promote hateful content, social affordances that let users amplify hate speech, and anonymous environments that reduce accountability.
"NetzDG represents an incremental, legalistic approach to a complex sociotechnical problem which requires more fundamental regulatory reform. Rules prescribing censorship of narrowly-defined content categories are ill-suited to large-scale, networked, algorithmically-curated social media."
– Rachel Griffin
This is a disagreement about facts, not values. NetzDG's supporters and critics generally agree that reducing hate speech is desirable. The question is whether deletion-focused regulation is the right mechanism – or whether it treats symptoms while ignoring the underlying platform architecture that amplifies harmful content.
The DSA Transition
As of February 2024, the EU's Digital Services Act (DSA) has largely superseded NetzDG for platforms operating in Germany. HateAid, a German civil society organization, describes the transition with "mixed feelings": the DSA extends regulation to smaller platforms, gaming services, and professional networks like LinkedIn, but it also removes some protections German users had under NetzDG – including the requirement to remove manifestly unlawful content within 24 hours and the right to contact an authorized agent in Germany.
The first monitoring results under the revised EU Code of Conduct, published in April 2026, show that major platforms "maintained their commitment to review the majority of notifications about alleged illegal hate content within 24 hours." However, the methodology has changed significantly, making direct comparisons with NetzDG-era data difficult.
The DSA takes a different approach than NetzDG: rather than focusing primarily on content deletion, it emphasizes systemic risk assessment, algorithmic transparency, and platform accountability for design choices that amplify harmful content. This represents a shift toward the "systemic and preventive" regulation that critics of NetzDG have advocated.
The Questions That Remain
Eight years after NetzDG took effect, several questions remain genuinely unresolved:
Does content moderation reduce hate crimes, or does it simply displace extremist coordination to less visible channels? The evidence supports both effects occurring simultaneously. The policy question is which effect dominates – and whether the answer changes over time as extremists adapt.
What is the appropriate balance between speed and accuracy in content removal? NetzDG's 24-hour deadline for "manifestly unlawful" content was criticized as too short for careful legal analysis. The DSA's "expeditious" standard is more flexible but less predictable.
Should platforms be required to address algorithmic amplification, not just content removal? The DSA moves in this direction, but implementation remains uncertain. The question is whether transparency requirements and risk assessments will actually change platform behavior.
How should regulators measure success? NetzDG's transparency reports were criticized for their limited informative value. The DSA's transparency database offers more data, but researchers have been slow to use even the access provisions that exist.
These are not rhetorical questions. They represent genuine uncertainty about how to govern digital speech in democratic societies. The German experiment with NetzDG – its successes, failures, and unintended consequences – offers evidence that should inform these debates, not settle them.
Frequently Asked Questions
Q: What is Germany's NetzDG law?
A: The Netzwerkdurchsetzungsgesetz (Network Enforcement Act) is a German law that entered into force in October 2017, with full compliance required from January 1, 2018. It requires social media platforms with more than 2 million registered users in Germany to remove "manifestly unlawful" content within 24 hours and other illegal content within seven days, with fines of up to €50 million for systematic non-compliance.
Q: Did NetzDG reduce hate crimes in Germany?
A: Research by Jiménez Durán, Müller, and Schwarz (2024) found that NetzDG reduced anti-refugee hate crimes by approximately 1% for every standard-deviation increase in exposure to far-right social media use, suggesting content moderation can disrupt the coordination mechanisms that turn online hatred into violence.
Q: What is the main criticism of NetzDG?
A: Critics argue the law's penalty structure – massive fines for under-enforcement, no consequences for over-enforcement – creates incentives for platforms to remove legal speech. Additionally, research suggests the law displaced extremist discourse to encrypted platforms like Telegram rather than eliminating it.
Q: How does the EU Digital Services Act differ from NetzDG?
A: The DSA, which became fully applicable in February 2024 and largely superseded NetzDG, applies across all EU member states and emphasizes systemic risk assessment and algorithmic transparency rather than focusing primarily on content-deletion deadlines. It also removes some German-specific protections, including the 24-hour removal requirement for manifestly unlawful content.
Q: Which platforms were affected by NetzDG?
A: NetzDG applied to social media platforms with more than 2 million registered users in Germany, including Facebook, YouTube, Twitter, and Instagram. Platforms receiving more than 100 complaints annually were required to publish semi-annual transparency reports.
Q: What happens to content removed under NetzDG?
A: Platforms must store deleted content for at least 10 weeks for evidence purposes. The 2021 amendments added requirements for platforms to report certain criminal content to the Federal Criminal Police Office (Bundeskriminalamt) and provide appeals procedures for users whose content was removed.