Britain Is Trying To Censor Americans… But Washington Is Fighting Back

March 8, 2026


Authored by Daniel Lü via The Daily Sceptic,

Ofcom has confirmed it is referring 4chan to a final enforcement decision under the Online Safety Act. The target is a Delaware company that runs an entirely anonymous imageboard from the United States, with no offices, staff, servers or assets in Britain.

The demand: install age-verification systems and content filters so that British children cannot access the site, or face daily fines levied from London on an American platform.

This case is not an outlier.

It is the clearest real-world demonstration of what the new generation of “online safety” laws requires: private companies must build automated filters that decide, in advance, which legal speech is too harmful for minors to see. The question the regulators never quite answer is simple: what exactly does the filter catch?

In the early 2020s, a political consensus formed on both sides of the Atlantic: social media is harming children and something must be done. The result in Washington was the Kids Online Safety Act (KOSA); in Westminster, the Online Safety Act (OSA), which received Royal Assent in October 2023 and began enforcement in 2025. The political appeal of both measures is genuine. Adolescent mental health deteriorated in the 2010s, parents are alarmed and platforms have appeared indifferent. But good intentions do not make good law, and the form these interventions took is constitutionally and morally indefensible.

Both KOSA and the OSA rest on a duty-of-care model: platforms must take “reasonable measures” or implement “proportionate systems” to prevent minors from encountering content associated with depression, anxiety, eating disorders, self-harm and suicide. This is not a regulation of conduct. It is a mandate to suppress speech based on its topic and its predicted emotional effect on a reader: the very definition of content-based regulation.

The American Civil Liberties Union (ACLU) stated the constitutional problem plainly in its July 2023 letter opposing KOSA: the bill “is a content-based regulation of constitutionally protected speech” that “will silence important conversations, limit minors’ access to potentially vital resources and violate the First Amendment”. Under Reed v. Town of Gilbert, a law is content-based if it “applies to particular speech because of the topic discussed or the idea or message expressed”. Content-based regulations are “presumptively unconstitutional”.

The ACLU identified three specific constitutional failures.

First, the speech targeted is protected. The Supreme Court has never permitted government to suppress legal speech simply because a legislature finds it unsuitable for children. In Brown v. Entertainment Merchants Association, the Court was unambiguous: “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Creating a “wholly new category of content-based regulation” permissible only for speech directed at children would be “unprecedented and mistaken”.

Second, these regimes fail strict scrutiny because they are not premised on demonstrated causation. As the ACLU wrote, KOSA “is not premised on a direct causal link, but instead is based on correlation, not evidence of causation”. This is a decisive legal and moral point. In Brown, the Court struck down California’s video game restriction on exactly the same grounds: the state had produced only correlative data. A law that restricts the speech of millions of people must show that the restriction will actually prevent the harm it identifies. Neither KOSA nor the OSA can clear that bar.

Third, these regimes are both under- and over-inclusive. They leave news media, books, music and magazines entirely unregulated while targeting social media platforms. And they will, inevitably, sweep up beneficial speech alongside harmful speech: 92% of parental control apps have been found to incorrectly block LGBTQ+ content and suicide-prevention resources alongside material that is genuinely harmful. Congress, the ACLU concluded, may not rely on unproven future technology to save the statute.

The empirical premise of both regimes is that social media causes mental illness in adolescents. This claim is contested by a substantial body of peer-reviewed research. In a widely noted book review in Nature, Candice L. Odgers, a psychologist specialising in adolescent mental health at UC Irvine, wrote that the graphs produced by Jonathan Haidt in his work The Anxious Generation, which align the rise in teen mental illness with smartphone adoption, “will be useful in teaching my students the fundamentals of causal inference, and how to avoid making up stories by simply looking at trend lines”. Hundreds of researchers, Odgers wrote, “have searched for the kind of large effects suggested by Haidt. Our efforts have produced a mix of no, small and mixed associations. Most data are correlative.” The direction of causality may run the other way: distressed and isolated adolescents gravitate toward online community; social media does not necessarily create the distress.

The practical implication is stark. Existing criminal law already covers the most serious harms comprehensively: child sexual abuse material (CSAM), terrorist content, incitement to violence and harassment are all criminal in both jurisdictions and all designated “priority illegal content” under the OSA’s Schedules 5-7. The genuinely novel element of both regimes is the duty to suppress legal speech about mental health, gender identity and emotional distress. That element is what fails both the First Amendment and basic proportionality analysis.

The most immediate and documented casualty of the OSA’s implementation has been LGBTQ+ communities. This is not an implementation error. It is structural: the content filters platforms deploy to comply with age-assurance obligations cannot distinguish between content that harms LGBTQ+ youth and content that protects them. Following the July 2025 enforcement rollout, Reddit moved significant LGBTQ+ community content behind age-verification barriers on the logic that queer content is “adult content” and therefore, under the Act, presumptively harmful to children. As OpenDemocracy documented, content creators who are “queer, trans or racialised”, or whose content focuses on these communities, have been “disproportionately targeted, with anything ‘queer’ indiscriminately labelled as ‘adult’”.

For trans people, the harm is compounded by the identity documentation problem. Age verification requires users to produce government-issued identity matching their legal name and sex. In 2018, fewer than 5,000 trans people in the UK held a Gender Recognition Certificate, out of an estimated 200,000-500,000. For those without legal gender recognition, age verification is not a minor inconvenience: it forces them to out themselves to a commercial third party as a condition of internet access, creating a permanent record linking their legal identity to spaces they may be using precisely to explore their identity in safety.

The moral stakes here are not abstract. For LGBTQ+ young people who cannot be open at home or school, online community is not a convenience but a lifeline. Stonewall has warned that anonymity-reduction measures create a “chilling effect” that puts LGBTQ+ people in genuine danger, particularly in the 12 countries where being LGBTQ+ carries the death penalty. As Stonewall’s Director of External Affairs wrote: “The UK’s Online Safety Bill could become the playbook for countries looking to use digital surveillance to identify and persecute their LGBTQ+ citizens.” The US State Department’s 2024 Human Rights Practices Report criticised the OSA for pressuring US social media platforms to “censor speech deemed misinformation or hate speech”.

The regulatory pressure on US platforms is not confined to Ofcom. On February 24th 2026, the Information Commissioner’s Office (ICO), the UK’s independent data protection regulator, issued Reddit, Inc. a £14.47 million fine for unlawfully processing children’s personal information: the largest penalty the ICO has ever imposed for breaches of children’s privacy. The ICO found that Reddit, despite prohibiting users under 13 in its terms of service, applied no robust age-assurance mechanism from May 2018 until July 2025, and therefore had no lawful basis under the UK General Data Protection Regulation for processing the personal data of under-13s. Reddit’s failure to carry out a data protection impact assessment (DPIA) focused on the risks to children before January 2025 separately breached Articles 5, 6, 8 and 35 of the UK GDPR. Reddit has announced its intention to appeal, calling the ICO’s requirement to collect identity information from users “counterintuitive and at odds with our strong belief in our users’ online privacy and safety”.

The ICO acted under its Age Appropriate Design Code (the ‘Children’s Code’) rather than the OSA, but the two regimes are coordinated: the ICO has openly stated, in its December 2025 children’s privacy progress update, that it works in partnership with Ofcom “to ensure efforts are coordinated”. The fine is legally distinct from OSA enforcement but functionally complementary to it: where Ofcom targets platforms’ content-governance duties, the ICO targets their data-governance failures, and the same underlying conduct, allowing age-unverified users to access content, triggers liability under both regimes simultaneously. The ICO is now conducting a broader review of at least 17 platforms popular with children in the UK, including Discord, Pinterest and X.

Reddit’s objection also surfaces another contradiction the ICO has not resolved: the age verification it effectively mandates creates a permanent record linking users’ legal identities to their platform activity, held by third-party age-verification processors entirely outside the platforms’ own systems, and the data practices of those processors are, as the ICO’s own enforcement demonstrates, largely beyond the regulator’s concern.

The contrast between the ICO’s vigour against American social media platforms and its passivity toward British police forces is, on its face, a study in selective enforcement.

The same week that John Edwards announced the £14.47 million Reddit fine and spoke at the IAPP UK Intensive, the story of Alvi Choudhury was making national television. Choudhury, a 26-year-old British Bangladeshi software engineer, had been arrested at his home in Southampton in January 2026 by Thames Valley Police, who suspected him of committing a £3,000 burglary in Milton Keynes: a city 100 miles away that he has never visited. The arrest was triggered by a retrospective facial recognition match from Cognitec software that runs 25,000 searches per month against approximately 19 million custody photographs held on the Police National Database.

Choudhury was held in custody for nearly 10 hours before officers examined the alibi evidence he had been offering since his arrest. When he eventually saw the CCTV footage that had identified him, he told the Guardian the suspect looked approximately 10 years younger, with lighter skin, a bigger nose, no facial hair and different eyes and lips. His mugshot was on the police system in the first place only because he had been wrongly arrested in 2021 after being the victim of an assault; his DNA was subsequently deleted, but his custody photograph was not.

Thames Valley Police’s response was, on its own account, revealing. The force acknowledged the arrest “may have been the result of bias within facial recognition technology”, but an officer told Choudhury that “as the use of facial recognition is already subject to review at a strategic level”, he did not feel the need to raise the matter for wider organisational learning. The force’s public statement went further, reframing the failure entirely: the arrest, it said, was based on the investigating officer’s own visual assessment after the algorithmic match, and therefore “was not influenced by racial profiling”. The position that a human officer confirming a racially biased algorithmic result absolves the institution of responsibility for racial bias merits no extended comment.

This is not an isolated incident. In January 2026, another force paid damages to a black man wrongly arrested using the same technology. Home Office research, suppressed until December 2025, when it was surfaced by Liberty Investigates from deep within a consultation document, found that the algorithm generates false positive matches at a rate of 5.5% for Black faces and 4.0% for Asian faces, compared with 0.04% for white faces: a disparity of more than 100 to one.
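The scale of that disparity is a matter of simple arithmetic. A minimal sketch, using only the false-positive rates reported above (the variable names are mine):

```python
# False-positive rates as reported in the Home Office research
# published in December 2025: 5.5% Black, 4.0% Asian, 0.04% white.
rates = {"Black": 0.055, "Asian": 0.040, "White": 0.0004}

# Express each group's false-positive rate as a multiple of the white rate.
for group, rate in rates.items():
    ratio = rate / rates["White"]
    print(f"{group}: {ratio:.1f}x the white false-positive rate")

# 0.055 / 0.0004 = 137.5 and 0.040 / 0.0004 = 100.0,
# hence the "more than 100 to one" disparity cited in the text.
```

At a 0.04% baseline, even a "small" absolute gap compounds into a hundredfold relative one, which is why the relative ratio, not the raw percentages, is the figure that matters for wrongful-arrest risk.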

When Edwards took the stage, he explained the ICO’s enforcement philosophy: the regulator must “very deliberately choose our focus”, concentrating on “AI and biometrics, children’s privacy and online tracking”. Police facial recognition involves all three. Yet the ICO has conducted audits, expressed concern through its Deputy Commissioner and asked the Home Office for “urgent clarity”, and stopped there. The Equality and Human Rights Commission has been more forthright: it was granted permission in August 2025 to intervene in a judicial review of the Metropolitan Police’s live facial recognition programme, arguing the deployments are unlawful for want of a clear legal basis. A comment made at the time about the ICO’s posture proved apt: the regulator had “stressed the need for FRT deployment with appropriate safeguards” while sitting “on the fence” as others sought judicial determination of whether current use is “strictly necessary”.

The juxtaposition is instructive. The regulator charged with protecting personal data finds £14 million worth of urgency in Reddit’s failure to age-verify its users, and no comparable urgency in a biometric surveillance system that its own deputy has called “disappointing”, that the government’s own research shows discriminates against minorities by a factor exceeding 100, and that has produced wrongful arrests of racial minorities on the basis of a technology the operating force itself concedes may be racially biased. The filter, as always, catches what it was never designed to catch.

All of this would be a domestic British problem if the OSA’s reach were confined to British soil. It is not. Section 3 of the OSA applies to any service with “links with the United Kingdom”, which Ofcom has interpreted to include any platform with a significant UK user base regardless of where it is domiciled, incorporated or operated. In March 2025, Ofcom wrote to 4chan Community Support LLC, a Delaware LLC with no offices, staff or assets outside the United States, to inform it that it was a regulated service because approximately 7% of its traffic came from UK IP addresses, and that it must therefore provide information regarding its illegal content risk assessment and its qualifying worldwide revenue. 4chan refused to respond to either request. In August 2025, 4chan and Kiwi Farms (Lolcow LLC) filed a federal lawsuit against Ofcom in the District of Columbia, alleging violations of the First, Fourth and Fifth Amendments, pre-emption by Section 230 of the Communications Decency Act and conflict with the SPEECH Act. In October 2025, Ofcom escalated: further demands, investigations and a £20,000 fine plus a penalty of £100 per day for up to 60 days for non-compliance with its information requests, all served by email to US addresses. 4chan again refused to pay. Ofcom responded to the lawsuit by asserting sovereign immunity under the Foreign Sovereign Immunities Act, claiming both the right to issue binding censorship orders to Americans on American soil and immunity from any American legal response.
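The penalty exposure from that October 2025 notice is itself simple arithmetic: a fixed fine plus a capped daily rate. A minimal sketch (the `total_exposure` helper name is mine; the figures are those reported above):

```python
# Ofcom's October 2025 penalty structure for non-compliance with
# information requests, as described in the text.
FIXED_PENALTY = 20_000  # £20,000 single fine
DAILY_RATE = 100        # £100 per day of continued non-compliance
MAX_DAYS = 60           # daily penalty capped at 60 days

def total_exposure(days_noncompliant: int) -> int:
    """Total penalty in pounds after a given number of days of non-compliance."""
    return FIXED_PENALTY + DAILY_RATE * min(days_noncompliant, MAX_DAYS)

print(total_exposure(60))  # maximum exposure from this notice: £26,000
```

The £26,000 ceiling on this particular notice is trivial next to the OSA’s headline penalty powers, which is why the subsequent escalation to the Act’s substantive duties, described next, matters far more than the information-request fine itself.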

Ofcom’s enforcement action against 4chan did not end with the October 2025 information-gathering fine. On February 12th 2026, Ofcom issued a second Provisional Decision against 4chan, proposing both a single penalty and a daily-rate penalty for contraventions of sections 9, 10 and 12 of the OSA: its substantive duties to conduct a suitable illegal content risk assessment, to set out adequate user protections in its terms of service, and to implement age verification preventing children from encountering explicit content. Counsel for 4chan, Preston Byrne, replied the same day: “Increasing the size of a censorship fine does not cure its legal invalidity in the United States.” The deadline for representations having passed without compliance, Ofcom confirmed on February 27th that it was referring the matter to a final decision maker under its Online Safety Enforcement Guidelines. The progression is systematic: from information requests under section 100, to a confirmation decision imposing penalties, to a second provisional decision targeting the Act’s substantive content-safety and age-verification duties. Each escalatory step expands the scope of demanded compliance and raises the potential penalty exposure. For an anonymous imageboard operating exclusively in the United States, age verification is not a technical requirement: it is an existential one.

The domestic British appeals framework for these decisions is itself still being constructed. On February 26th 2026, the Tribunal Procedure Committee (TPC) opened a consultation on amending the Upper Tribunal Procedure Rules to accommodate the new rights of appeal created by the OSA. Under section 168 of the Act, any person with a sufficient interest may challenge Ofcom’s confirmation decisions, penalty notices and technology notices before the Upper Tribunal. The TPC provisionally proposes a three-month window for permission-to-appeal applications by interested persons who are not the direct recipients of an Ofcom notice, departing from Ofcom’s own preference for one month. On costs, the TPC agrees with Ofcom’s proposal to displace the usual no-costs rule, recognising that the tribunal should have broader discretion to award costs in OSA cases given the likely complexity and evidence-heavy nature of such appeals, and that the existing rule would leave Ofcom unable to recover costs even where it successfully defends a decision. Ofcom is a regulator with the power to fine companies hundreds of millions of pounds.

Tyler Durden
