In 2023, a domestic violence support forum was blocked by a major UK ISP's content filter. The filter, designed to protect children from harmful content, had categorized the forum as "violence" and silently removed it from search results for millions of users. Survivors seeking help got a generic "page cannot be displayed" message. No explanation, no appeal process, no way to know the block was happening. The site operators only discovered it months later when traffic from UK users dropped to near zero.
This wasn't a bug, exactly. The filter was working as designed — it identified content related to violence and blocked it. The problem is that content filtering systems don't understand context. They can't distinguish a resource that discusses violence to help survivors from one that promotes it. And once a filtering infrastructure exists, its scope almost always expands beyond its original purpose.
The Pattern: Safety Mandates That Become Control Infrastructure
The political logic is always the same. A government proposes internet filtering to protect children — something almost nobody opposes in principle. The legislation passes with broad support. A technical infrastructure gets built: DNS-level blocks, deep packet inspection, mandatory filtering by ISPs. And then, quietly, the list of blocked categories grows.
Australia's original internet filter was proposed to block child exploitation material. By the time it was implemented, the blocked categories included euthanasia resources, anti-abortion sites, regular pornography, gambling, and a Queensland dentist's website (that one was a mistake, but it stayed blocked for weeks). The UK's "porn filters" expanded to block forums, encrypted email services, and VPN providers. Turkey's family safety filter blocks LGBTQ+ resources. Russia's child protection law became the legal foundation for blocking opposition media.
This isn't conspiracy thinking. It's infrastructure economics. Once you build a system capable of filtering arbitrary internet content, the marginal cost of adding another category to the blocklist is effectively zero. The expensive part — the filtering infrastructure itself — already exists. Every interest group with political influence will eventually ask: "while you're at it, could you also block X?"
Why Content Filtering Doesn't Work as Intended
Beyond the scope-creep problem, there's a more fundamental issue: content filtering at scale doesn't actually work well. The technical limitations are severe.
The Over-blocking Problem
Automated filtering systems produce enormous numbers of false positives. A study of the UK's default-on ISP filters found that roughly 19% of the top 100,000 websites were incorrectly blocked by at least one ISP. Blocked sites included charities, political organizations, educational resources, and small businesses. The Scunthorpe problem — where legitimate content is blocked because it contains a substring that matches a filter rule — remains unsolved at scale.
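The Scunthorpe problem can be reproduced in a few lines. This is a toy sketch, not any real filter's implementation; the blocklist terms and hostnames are hypothetical examples chosen to show how substring matching flags innocent words ("Essex" contains "sex", "specialist" contains "cialis"):

```python
# Toy reproduction of the Scunthorpe problem: a naive substring blocklist
# flags legitimate hostnames because a banned string happens to appear
# inside an innocent word. Blocklist terms here are illustrative only.

BANNED_SUBSTRINGS = ["sex", "cialis"]  # hypothetical filter rules

def is_blocked(hostname: str) -> bool:
    """Naive substring matching, as used by early keyword filters."""
    host = hostname.lower()
    return any(term in host for term in BANNED_SUBSTRINGS)

for site in ["essexcountycouncil.gov.uk",   # "essex" contains "sex"
             "specialistclinic.example",    # "specialist" contains "cialis"
             "bbc.co.uk"]:
    print(site, "->", "BLOCKED" if is_blocked(site) else "ok")
```

Solving this properly requires context-aware classification, which is exactly what doesn't exist at the scale of a national filter.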
More insidiously, over-blocking is asymmetric. Large, well-resourced sites can navigate the appeals process to get unblocked. A small LGBTQ+ youth support site run by volunteers can't. The sites most likely to be caught in over-blocking are exactly the ones least equipped to fight it.
The Under-blocking Problem
At the same time, determined users bypass filters trivially. A VPN — which costs a few dollars a month — renders ISP-level filtering completely useless. DNS-based blocks can be bypassed by changing your DNS resolver to 1.1.1.1 or 8.8.8.8, something a tech-literate teenager can do in about 30 seconds. Even deep packet inspection struggles with encrypted traffic, which now accounts for over 95% of web traffic.
# Bypassing DNS-based content filtering:
# Step 1: point DNS at a public resolver instead of the ISP's.
# On Linux (caveat: systemd-resolved or NetworkManager may manage
# /etc/resolv.conf and overwrite this; change it in their settings instead):
echo "nameserver 1.1.1.1" | sudo tee /etc/resolv.conf
# On macOS, /etc/resolv.conf is not authoritative; use:
#   networksetup -setdnsservers Wi-Fi 1.1.1.1
# Or use DNS-over-HTTPS, which is invisible to ISP filters.
# Most modern browsers support it natively:
#   Firefox: Settings → Privacy & Security → Enable DNS over HTTPS
#   Chrome:  Settings → Privacy and security → Security → Use secure DNS
# The filter infrastructure cost millions to build.
# Bypassing it takes seconds.
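Why is DoH invisible to the ISP? Because RFC 8484 carries an ordinary DNS wire-format message inside a standard HTTPS request, which looks like any other web traffic. A minimal sketch of building such a query (the resolver URL in the comment is one real public endpoint; this sketch only constructs the payload, it doesn't send it):

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Encode a standard DNS query (qtype 1 = A record) in wire format.
    DoH (RFC 8484) POSTs exactly these bytes as application/dns-message
    over HTTPS -- so an ISP filter sees only encrypted web traffic."""
    # Header: id, flags (RD set), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qtype, class IN
    return header + question

query = build_dns_query("example.com")
# To resolve via DoH, this payload would be POSTed to e.g.
# https://cloudflare-dns.com/dns-query with Content-Type: application/dns-message.
```

The query the filter would need to inspect never appears on the wire in cleartext.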
This creates a perverse outcome: the children most at risk — those with tech-savvy peers, those in environments where they're actively seeking out harmful content — will bypass the filters. The people who get blocked are the ones who wouldn't have sought out harmful content in the first place, but who happen to visit sites that trigger false positives.
The Technical Reality Engineers Need to Understand
If you're building applications, APIs, or services, content filtering mandates affect you directly. Here's what you need to know:
- SNI inspection is fading. Encrypted Client Hello (ECH) is being deployed across major CDNs and browsers. Once it's widespread, ISPs can no longer see which specific site a user is visiting on a shared IP — they can only see the CDN domain. This breaks hostname-based filtering entirely.
- DNS encryption breaks DNS-based filters. DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) encrypt DNS queries, making them invisible to ISP-level monitoring. Major browsers are rolling this out as a default setting.
- Client-side filtering is the next battleground. As network-level filtering becomes ineffective, some proposals shift to mandating filtering software on devices or in app stores. This is technically more effective but raises severe privacy concerns — it requires software on your device inspecting everything you do.
- Age verification mandates create privacy risks. Several jurisdictions now require age verification for certain content. The technical implementations range from credit card checks (which link browsing history to real-world identity in centralized databases) to biometric verification (which normalizes facial recognition as a condition of internet access).
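The SNI point above can be made concrete with a toy model. This is not real TLS code; the dictionaries below simulate what a hostname-based filter can extract from a ClientHello before and after ECH, and all names are hypothetical:

```python
# Toy model of what a network-level filter observes. With plaintext SNI
# the true hostname is on the wire; with ECH only the public outer name
# (typically the CDN's) is, and the real name travels encrypted.

BLOCKLIST = {"support-forum.example"}

def filter_sees(client_hello: dict) -> str:
    """The hostname a wire-level filter can extract from a connection."""
    return client_hello.get("outer_sni") or client_hello["sni"]

legacy = {"sni": "support-forum.example"}
ech = {"outer_sni": "cdn.example.net",
       "encrypted_inner_sni": "support-forum.example"}  # invisible on the wire

print(filter_sees(legacy) in BLOCKLIST)  # True  -> blockable
print(filter_sees(ech) in BLOCKLIST)     # False -> filter sees only the CDN
```

Blocking the outer name means blocking every site behind that CDN, which is why ECH pushes filtering decisions away from the network and toward the platforms themselves.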
What Actually Protects Children Online
The uncomfortable truth is that network-level content filtering is the least effective approach to child safety, but it's the most politically convenient because it doesn't require changing anything about how platforms operate or how parents engage with technology.
Approaches that actually work tend to be less dramatic but more effective:
- Platform design changes — Making recommendation algorithms less aggressive for young users, disabling direct messaging from strangers by default, reducing social comparison features. Instagram's restrictions on teen accounts had a measurable impact on problematic interactions.
- Transparency requirements — Requiring platforms to publish data on content moderation decisions, algorithmic recommendations to minors, and prevalence of harmful content. Sunlight is a better disinfectant than filters.
- Reporting and response infrastructure — Investing in faster takedown of genuinely illegal content (CSAM, grooming) at the platform level, with dedicated law enforcement resources. This targets the actual harm rather than building general-purpose censorship tools.
- Device-level parental controls — Voluntary, parent-managed controls on specific devices, rather than ISP-level filtering that affects everyone. These give parents agency without creating national censorship infrastructure.
The question isn't whether we should protect children online. Of course we should. The question is whether we should build censorship infrastructure that affects the entire population to do it — especially when that infrastructure doesn't actually work.
The Chilling Effect on Open Source and Security Research
Content filtering mandates have a direct impact on the tech community that often gets overlooked. Security researchers regularly publish vulnerability details, exploit code, and analysis of malware — all of which can be flagged by automated filters. Open source projects that deal with encryption, anonymization, or circumvention tools get caught in blocks designed for entirely different purposes.
Tor Project's website has been blocked by content filters in multiple countries — not because of any specific illegal content, but because the tool could be used to access blocked sites. VPN provider websites get blocked for the same reason. This creates a self-reinforcing cycle: filtering drives users toward circumvention tools, which get blocked, which drives users toward more obscure circumvention tools.
For developers, the practical impact is real. If you're hosting documentation, tools, or libraries that touch on security topics, you may find your site blocked in certain jurisdictions with no notification and no clear appeals process. It's worth monitoring access patterns by geography and having alternative distribution channels.
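Monitoring by geography doesn't need to be elaborate. A minimal sketch (the thresholds and country data here are made up for illustration): compare request counts per country across two periods and flag sharp drops, which are often the first visible sign of a silent block.

```python
# Flag countries whose traffic fell sharply between two periods --
# often the only signal a site operator gets that it has been filtered.
# Thresholds are hypothetical; tune them to your own traffic.

def detect_possible_blocks(prev: dict, curr: dict,
                           drop_threshold: float = 0.8,
                           min_baseline: int = 1000) -> list:
    """Return countries whose request count fell by more than drop_threshold."""
    flagged = []
    for country, baseline in prev.items():
        if baseline < min_baseline:
            continue  # too little traffic to judge a drop reliably
        now = curr.get(country, 0)
        if now < baseline * (1 - drop_threshold):
            flagged.append(country)
    return flagged

last_week = {"GB": 52000, "DE": 8000, "US": 91000}
this_week = {"GB": 300, "DE": 7600, "US": 88000}
print(detect_possible_blocks(last_week, this_week))  # ['GB']
```

A check like this, run against aggregated access logs, would have surfaced the forum block from the opening anecdote in days rather than months.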
What Engineers Can Do
As engineers, we're often the ones asked to implement these filtering systems. We're also the ones best positioned to understand their limitations and advocate for better approaches.
- Support encryption everywhere. HTTPS, encrypted DNS, encrypted SNI — these protect user privacy and make blunt-instrument filtering harder. This isn't about enabling harmful content; it's about ensuring that filtering decisions are made at the right level (platforms, not ISPs).
- Build transparency into moderation systems. If you're building content moderation, make your decisions auditable. Publish statistics on false positive rates. Provide clear appeals processes. The difference between moderation and censorship is often just transparency.
- Push back on security theater. When a product manager asks you to implement a content filter that you know won't actually work, say so. Document the limitations. Propose alternatives that address the actual problem.
- Engage with policy. Technical people notoriously avoid policy discussions, but our perspective is critical. Organizations like the EFF, Open Technology Institute, and Access Now all work on these issues and need engineering expertise.
- Design for the worst-case regulator. If you're building a system that could be used for content filtering, think about how it could be misused. Can the blocklist be audited? Can individual blocks be appealed? Is there a sunset clause? Design your system so that even if the wrong people get control of it, the damage is limited.
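What might an auditable, appealable, sunsetting block decision look like in practice? One possible shape, a sketch rather than any standard, with every field name and URL hypothetical:

```python
# A sketch of an auditable block record: every filtering decision carries
# a machine-readable account of who decided, why, how to appeal, and when
# the decision expires. All names and URLs here are hypothetical.

from dataclasses import dataclass, asdict
from datetime import date

@dataclass(frozen=True)
class BlockRecord:
    url: str
    category: str      # drawn from a published taxonomy, not free text
    decided_by: str    # the authority accountable for the decision
    decided_on: date
    expires_on: date   # sunset: blocks must be re-justified, never eternal
    appeal_url: str    # a working appeals process, not a dead end

record = BlockRecord(
    url="https://support-forum.example",
    category="violence",
    decided_by="filter-vendor-x",
    decided_on=date(2023, 1, 15),
    expires_on=date(2023, 7, 15),
    appeal_url="https://filter-vendor-x.example/appeals",
)
print(asdict(record)["category"])
```

If the blocklist in the opening anecdote had carried records like this, the forum's operators would have known who blocked them, why, and where to appeal.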
The internet's greatest strength has always been its openness — the ability for anyone to publish, anyone to access, and anyone to build on top of what exists. Content filtering mandates, however well-intentioned, chip away at that openness. The challenge for our generation of engineers is figuring out how to protect vulnerable users without building the infrastructure of control. It's a hard problem, and there aren't easy answers. But the first step is being honest about what the tools we're building actually do — and what they can't.