A DNS service offers a feature to spoof the location of censored users in the UK so as to appear from a freer country
24th August 2025

See article from reclaimthenet.org
NextDNS is a DNS service that is looking to work around ID/age verification by diverting DNS requests so that they appear to come from another country where ID/age verification is not required. Most internet page reads consist of two steps: first the browser uses a DNS server to work out that, say, melonfarmers.co.uk is to be found at IP address 214.16.66.216; second, it extracts the page data from 214.16.66.216. Now a lot of internet censorship is implemented by blocking or diverting the first step, the DNS lookup. For example, if an ISP wanted to block melonfarmers.co.uk it would return a false address, say 199.109.188.205, pointing at a page containing a 'blocked' message, or more likely a message saying that the site is unavailable.

The idea behind NextDNS is that most websites do their location checking at the DNS lookup rather than at the page data request. So if the DNS lookup were spoofed to appear to come from a less censorial country, then perhaps the website could be fooled into not requiring ID verification, and subsequent data requests would not be checked for location. This would be cheaper and easier than encrypting and rerouting page data requests as done by a VPN.

Anyone using the free or paid version of NextDNS can already turn this redirection feature on. To do so, users need to log into their account at my.nextdns.io, navigate to the Settings tab, scroll down, and toggle the Bypass Age Verification option. The company notes that by enabling it, users confirm they are of legal age to access restricted content.

The results so far are mixed. The feature remains in beta and does not work reliably across all platforms. Services like Reddit and X are still blocking some users, even with the setting active. Attempts to view age-restricted YouTube videos have also failed, likely because YouTube requires account sign-ins and has started experimenting with AI-driven age estimation in the US. Presumably these particular services do check the request location for each page data request. Hopefully the idea works with porn websites that are happy to be accessed via spoofed location services.
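The two-step page read described above, and the way a censoring resolver can divert step one, can be sketched as a toy model. The hostnames and IP addresses are the article's illustrative examples, and the function names are invented for the sketch:

```python
# A toy model of the two-step page read: step 1 is the DNS lookup that
# censors (and, per NextDNS's idea, geolocation checks) most often
# target; step 2, fetching page data from the returned IP, then goes
# wherever step 1 pointed.

REAL_DNS = {"melonfarmers.co.uk": "214.16.66.216"}
BLOCK_PAGE_IP = "199.109.188.205"   # ISP's "site unavailable" page
ISP_BLOCKLIST = {"melonfarmers.co.uk"}

def honest_lookup(host: str) -> str:
    """Step 1 as it should work: return the site's real address."""
    return REAL_DNS[host]

def censoring_lookup(host: str) -> str:
    """Step 1 as a blocking ISP implements it: divert blocked hosts."""
    if host in ISP_BLOCKLIST:
        return BLOCK_PAGE_IP
    return REAL_DNS[host]

# The user's step 2 silently fetches from the wrong server, so the
# real site is never reached.
print(honest_lookup("melonfarmers.co.uk"))     # 214.16.66.216
print(censoring_lookup("melonfarmers.co.uk"))  # 199.109.188.205
```

Encrypted DNS services like NextDNS sit in place of the ISP's resolver, which is why they can both dodge this kind of diversion and choose which country the lookup appears to come from.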
Actually not too far off the mark for VPN usage
24th August 2025

See article from alecmuffett.com
Ofcom expands its investigation into 4chan, demanding censorship and onerous paperwork from a US website with no connection to the UK beyond that it's viewable online
14th August 2025

See article from ofcom.org.uk
Ofcom originally opened an investigation into the US image hosting site in June 2025. It has now added an extra clause to the investigation: non-compliance with the safety duties about illegal content. The investigation now reads:

We are initiating an investigation to determine whether the online discussion board 4chan has failed, or is currently failing, to comply with its obligations under the Online Safety Act 2023. Our investigation will focus on potential breaches in the following areas:

Failure to respond to a statutory information request; Failure to complete and keep a record of a suitable and sufficient illegal content risk assessment; and Non-compliance with the safety duties about illegal content.
See article from en.wikipedia.org

4chan.org is an anonymous English-language imageboard website. The site hosts boards dedicated to a wide variety of topics, from video games and television to literature, cooking, weapons, music, history, technology, anime, physical fitness, politics, sports and porn, among others. Registration is not available, except for staff, and users typically post anonymously. 4chan receives more than 22 million unique monthly visitors, of whom approximately half are from the United States.

The website achieved a little notoriety in Donald Trump's first presidential term. It was identified as providing a voice to 'alt-right' (right-leaning) Trump supporters who were otherwise silenced by an alliance of liberal internet companies and mainstream media outlets.
Offsite Comment: Allowing British authorities to demand compliance from virtually any website

11th June 2025. See article from reclaimthenet.org

Ofcom has set its sights on 4chan, a US-hosted imageboard owned by a Japanese national. The site operates under US law and has no physical infrastructure, employees, or legal registration in Britain. Nonetheless, UK regulators have declared it fair game. Wherever in the world a service is based, if it has 'links to the UK', it now has duties to protect UK users, Ofcom insists. That phrase, links to the UK, is intentionally vague and extraordinarily expansive, allowing British authorities to demand compliance from virtually any website. This kind of extraterritorial overreach marks a direct threat to the principle of national sovereignty in internet governance. The UK is attempting to dictate the rules of online speech to foreign companies, hosted on foreign servers, and serving users in other countries, all because someone in Britain might visit their site.

So what will Donald Trump's government make of Ofcom's attempt to censor US free speech? It will surely be an important test for Ofcom: it could easily be blocked by the US, or simply ignored. Surely this will set a precedent for thousands of other foreign websites that could end up simply ignoring Britain's arrogant censorship law.
LibDem MPs write to internet censorship minister voicing concerns about how the Online Safety Act is leading to political censorship, easy circumvention and unsafe ID data grabbing
6th August 2025

See article from reddit.com
See petition to repeal the Online Safety Act at petition.parliament.uk
In an ideal world inhabited by politicians and children's campaigners, social media companies would work through all postings and treat each on its merits as to whether it requires age gating or not. In the real world, where commercial reality makes this approach too expensive, coupled with a safety-first approach mandated by ludicrously massive fines for transgression, social media companies play safe and implement age gating around entire forums or even whole websites. For smaller companies it often makes sense just to self-block the whole website to UK users. Of course this reality leads to many more posts being blocked or age gated than perhaps simple-minded politicians envisaged.

Now there seems to be widespread disquiet about how the Online Safety Act is panning out. Apart from the 498,000 people that have signed the petition to repeal the Online Safety Act, LibDem MP Victoria Collins and peer Lord Clement-Jones wrote a letter to the censorship minister Peter Kyle saying:

There remain significant concerns about how the legislation is currently being implemented, including concerns that:
age-assurance measures may prove ineffective, as children and young people may use VPNs to sidestep the systems;
political content is being age-gated on social media;
educational sites like Wikipedia will be designated as Category 1 services, requiring them to age verify moderators;
important forums dealing with LGBTQ+ rights, sexual health or other potentially sensitive topics have been age gated; and
age assurance systems may pose a data protection or privacy threat to users.
The implementation of the Act must be flexible, and respond to those emerging concerns. The intention behind this legislation was never to limit access to political or educational content, or to important support relied on by young
people. It was intended to keep children safe, and we must ensure that it is implemented in a way that does that as effectively as possible. They then go on to talk about how parliament needs the chance to review
it and make legislative changes where necessary.

Ofcom on over blocking

Online security expert Alec Muffett has tweeted that he has spotted a few hints that Ofcom has recognised that over-blocking will be an inevitable characteristic of social media's attempts to live with the censorship rules.

Of course MPs use VPNs themselves, it's basic internet security

See article from reclaimthenet.org

Meanwhile, when Peter Kyle has called for people not to use VPNs for the sake of the children, it is interesting to see that MPs themselves are using VPNs as a matter of course. After all, it would be stupid not to, for people in public life. Speaking on BBC Breakfast, Peter Kyle warned:

For everybody out there who's thinking about using VPNs, let me say this to you directly: verifying your age keeps a child safe. Keeps children safe in our country, so let's just not try to find a way around.

Politico reported that official spending records show parliamentarians across party lines have been billing the public for commercial VPN services. Business Secretary Jonathan Reynolds charged taxpayers for a two-year NordVPN subscription in April 2024. Labour MP Sarah Champion, who in 2022 pressed the government to investigate whether teenage VPN use could undermine online safety rules, also has a subscription on record.
The government says it has no intention of outlawing VPNs but admits it is monitoring how young people use them. This comes after a sharp increase in downloads following the rollout of mandatory digital ID checks under the new censorship law, the Online Safety Act. So I wonder how many porn-using MPs prefer to dangerously hand over their ID data for age verification, and how many play it safe and use a VPN.
No, the UK's Online Safety Act Doesn't Make Children Safer Online
3rd August 2025

See Creative Commons article from eff.org by Paige Collings
Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the latest misguided attempts to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country's Online Safety Act (OSA).

The legislation attempts to make the UK 'the safest place' in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK -- including social media, search engines, music sites, and adult content providers -- enforce age checks to prevent children from seeing harmful content. This is defined in three categories, and failure to comply could result in fines of up to 10% of global revenue or courts blocking services:
Primary priority content that is harmful to children:

Priority content that is harmful to children:

Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
Content that encourages, promotes or provides instructions for serious violence against a person;
Bullying content;
Content which depicts serious violence against or graphically depicts serious injury to a person or animal (whether real or fictional);
Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and
Content that encourages the self-administration of harmful substances.
Non-designated content that is harmful to children (NDC):

Online service providers must make a judgement about whether the content they host is harmful to children, and if so, address the risk by implementing a number of measures, which include, but are not limited to:

Robust age checks: Services must use 'highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.' To do this, all users on sites that host this content must verify their age, for example by uploading a form of ID like a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving permission for the age-check service to access information from your bank about whether you are over 18.

Safer algorithms: Services 'will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.'

Effective moderation: All services 'must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.'
Since these measures took effect in late July, social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content on their sites. Porn websites like Pornhub and YouPorn implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address for technology to analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also requiring users to submit face scans to third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters to try to enforce the UK legislation on U.S.-based companies such as the right-wing platform Gab.

The UK Must Do Better

The UK is not alone in pursuing such a misguided approach to protecting children online: the U.S. Supreme Court recently paved the way for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week ruled that porn websites can check users' ages; the European Commission is pushing forward with plans to test its age-verification app; and Australia's ban on youth under the age of 16 accessing social media is likely to be implemented in December.

But the UK's scramble to find an effective age verification method shows us that there isn't one, and it's high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet.

And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures.

The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users -- including children -- rather than pushing the enforcement of legislation that harms the very people it was meant to protect.
Some Gaza and Ukraine social media posts are blocked under new ID/age checks
1st August 2025

See article from bbc.com
Social media companies are blocking wide-ranging content - including posts about the wars in Ukraine and Gaza - in an attempt to comply with the UK's new Online Safety Act, BBC Verify has found. BBC Verify found a range of public interest content, including parliamentary debates on grooming gangs, has been restricted on X and Reddit for those who have not completed ID/age verification checks. Experts warn companies risk stifling legitimate public debate by overapplying the law. Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, expressed alarm at the restrictions and told BBC Verify that the new bill was not supposed to be used to suppress facts of public interest, even if uncomfortable.

Among the restricted content identified by BBC Verify was a video post on X which showed a man in Gaza looking for the dead bodies of his family buried among the rubble of destroyed buildings. The post was restricted despite not showing any graphic imagery or bodies at any point in the clip. X subsequently removed the warning after being approached by BBC Verify. Readers who attempted to view a video of a Shahed drone destroyed mid-flight in Ukraine were required to provide ID/age verification even though nobody was injured or killed in the clip. Among the Reddit communities which have been restricted is one called R/UkraineConflict, a message board with 48,000 members that frequently posts footage of the war. Similar restrictions, which urge users to log in to confirm their age, have been imposed on several pages which discuss the Israel-Gaza war and communities which focus on healthcare.

Meanwhile, clips of parliamentary debates have also been swept up in the restrictions. A speech by Conservative MP Katie Lam, containing a graphic description of the rape of a minor by a grooming gang, is available to view without restriction on Parliament's official streaming website, ParliamentLive, but is restricted on X.

Spiked reports on other examples of social media censorship
Five things we can't post about thanks to the Online Safety Act

See article from spiked-online.com

From grooming gangs to men's fashion, literally any topic of discussion can now be censored. Here are five things Britons can no longer post or read about under the new internet censorship rules.

1) Francisco Goya's 19th-century masterpiece, Saturn Devouring His Son, was automatically hidden from British users of X. A thread on X detailing the life of Richard the Lionheart and the Crusades has also been suppressed, presumably having been deemed Islamophobic.

2) A tweet calling for single-sex toilets was branded too sensitive by the censors for users to read.

3) A Guido Fawkes article headlined Keir Suffers Extinction Event, featuring a baby with Starmer's head superimposed on it, has been put behind the age wall on X.

4) Testimony from survivor and campaigner Sammy Woodhouse, detailing the brutal grooming gang rapes and abuses she suffered as a young girl, was censored on X as graphic content.

5) When compiling a list of posts that have been censored on X, Benjamin Jones of the Free Speech Union found himself censored for bringing the absurdities of the Online Safety Act to the public's attention.

Read the full article from spiked-online.com
Hackers steal 72,000 selfies from an app that claimed photos would be 'deleted immediately' after authentication
27th July 2025

See article from bbc.co.uk
A dating safety app that allows women to do background checks on men and anonymously share red flag behaviour has been hacked, exposing thousands of members' images, posts and comments. Tea Dating Advice, a US-based women-only app with 1.6 million
users, said there had been unauthorised access to 72,000 images submitted by women. Some included images of women holding photo identification for verification purposes, which Tea's own privacy policy promises are deleted immediately after
authentication. The company also admitted that an additional 59,000 images from the app showing posts, comments and direct messages from over two years ago were accessed.
Major porn websites introduce ID/Age verification
27th July 2025

26th July 2025. See article from tyla.com
See also a useful list of porn sites to try to find those not inflicting ID verification: toppornsites.com
So most of the major tube sites have decided to implement ID verification for UK viewers. But thankfully there are still plenty of websites that have not yet implemented ID verification requirements. Here is a useful list of porn sites to try to find those not inflicting ID verification: toppornsites.com.

For viewers stupidly subscribing to the risk of handing over ID to watch porn, I noted that many websites were promising not to keep a copy of ID data provided for verification purposes, and then immediately demanding an email address that will be kept for future visits. Surely an email address is a key piece of identity data that should not be retained.

Surely a better idea is to purchase a VPN and access porn as if in a different country from the UK. For the moment all the major porn sites still allow access via VPN. Perhaps one day this will not be the case when ID verification is adopted worldwide. Also note that it is up to websites whether they allow access via VPN or not. Under threat of extreme punishment they could reasonably easily one day block access from VPNs (as the likes of BBC and Netflix already do).

Another option is to install the Tor Browser (Tor being The Onion Router). See torproject.org. This is a browser that looks very much like Firefox but obtains page data via complicated and encrypted routing that evades censorship and country-specific blocking. It is not quite as successful as a VPN but can be used to watch porn on the main porn websites.

But of course the authorities will not be very pleased by these straightforward workarounds, and they have put in place a censorship rule to prevent adult websites from themselves promoting workarounds. According to Ofcom and the BBC, platforms must not host, share or permit content that encourages the use of VPNs to get around age checks, and it will be illegal for them to do so. A spokesperson for Aylo, the parent company of Pornhub, said parents are advised to block VPN usage just in case, and told the BBC that the question of VPNs was an issue for governments, adding: We certainly do not recommend that anyone uses technology to bypass the law.

Aylo has publicly called for effective and enforceable age assurance solutions that protect minors online, while ensuring the safety and privacy of all users. The United Kingdom is the first country to present these same priorities demonstrably.

Thankfully such censorship laws simply don't apply to websites out of Ofcom's remit, so there will surely be plenty of sources of information available to work around the dangers of ID verification for porn.

Update: VPNs galore

27th July 2025. From the Financial Times

The Financial Times has reported on the inevitably booming sales and downloads of VPNs. Proton VPN has leapfrogged ChatGPT to become the top free app in the UK, according to Apple. Proton VPN has experienced a 1800% increase in daily UK sign-ups. NordVPN has seen a 1000% increase in UK purchases. A Proton spokesperson told Mashable: This clearly shows that adults are concerned about the impact universal age verification laws will have on their privacy.
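The aside above about sites blocking VPN users (as the BBC and Netflix already do) boils down to a simple mechanism: the site keeps a list of address ranges known to belong to VPN providers and refuses connections from them. A minimal sketch, with invented address ranges rather than any real provider's:

```python
# Sketch of server-side VPN blocking: test the client's address against
# known VPN exit ranges. The ranges below are documentation/example
# blocks (RFC 5737), not real VPN assignments.
import ipaddress

KNOWN_VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # invented example range
    ipaddress.ip_network("198.51.100.0/24"),  # invented example range
]

def is_vpn_client(client_ip: str) -> bool:
    """Return True if the client address falls within a listed VPN range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)

print(is_vpn_client("203.0.113.7"))  # True - inside a listed range
print(is_vpn_client("8.8.8.8"))      # False - not in any listed range
```

In practice such lists are bought from commercial IP-intelligence feeds and are never complete, which is why VPN blocking is a cat-and-mouse game rather than a hard barrier.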
Ofcom publishes the final version of its censorship rules as applied to transparency reporting
26th July 2025

See press release from ofcom.org.uk
See report from ofcom.org.uk
Ofcom has published a statement detailing how it will expect larger websites to report on how they have applied censorship rules to user content. Ofcom writes: The decisions explained in this statement set out our final
positions on our guidance on transparency reporting. Our final guidance explains when and how Ofcom will exercise its transparency powers. It is designed to provide stakeholders with information about how the transparency reporting process under the
online safety regime will work in practice, including the factors Ofcom will consider when deciding what information providers must publish in their reports, how we will produce our own Ofcom transparency reports and how we will engage with stakeholders
throughout the process. The Online Safety Act makes platforms -- including social media, search, and pornography services -- legally responsible for keeping people, especially children, safe online. Certain duties in the Act
apply to all regulated services, while a set of additional duties apply only to certain services. The duty to publish transparency reports only applies to providers of certain regulated services, specifically those that appear on a public register of
categorised services prepared by Ofcom. Categorised services will have to publish transparency reports according to requirements that are set out by Ofcom in transparency notices. Our draft guidance lays out our proposed
approach to determining what information relevant services are required to publish in their reports, as well as information about how we will engage with services throughout the reporting process. Ofcom is also required to
produce its own transparency report that draws conclusions based on the substance of the reports produced by providers. Our draft guidance presents our proposed approach to using information from service providers' transparency reports in our own report.
UK Internet censor Ofcom selects its first victims for porn censorship, scoreland.com and undress.cc
4th July 2025

11th May 2025. See press release from ofcom.org.uk
Ofcom has commenced investigations into two pornographic services - Itai Tech Ltd and Score Internet Group LLC - under our age assurance enforcement programme. Under the Online Safety Act, online services must ensure children
cannot access pornographic content on their sites. In January, we wrote to online services that display or publish their own pornographic content to explain that the requirements for them to have highly effective age checks in place to protect children
had come into force. We requested details of services' plans for complying, along with an implementation timeline and a named point of contact. Encouragingly, many services confirmed that they are implementing, or have plans to
implement, age assurance on around 1,300 sites. A small number of services chose to block UK users from accessing their sites, rather than putting age checks in place. Certain services failed to respond to our request and have not
taken any steps to implement highly effective age assurance to protect children from pornography.
We are today opening investigations into Itai Tech Ltd - a service which runs the nudification site Undress.cc - and Score Internet Group LLC, which runs the site Scoreland.com. Both sites appear to have no highly effective age assurance in place and are
potentially in breach of the Online Safety Act and their duties to protect children from pornography.

Next steps

We will provide an update on both investigations on our website in due course, along with details of any further investigations launched under this enforcement programme.

Update: Low Scores

2nd July 2025. See article from ofcom.org.uk
Ofcom has closed its investigation of scoreland.com after the website introduced age/ID verification. The website now requires that UK users subscribe using a credit card (no debit cards) before content can be viewed. Visitors from other countries can
see teaser images and can pay via several other options. Ofcom writes: In response to our investigation, Score Internet Group LLC have taken steps to implement highly effective age assurance to ensure compliance
with their duties under Part 5 of the OSA. As such, Ofcom is satisfied that the conduct that led to the opening of the investigation has ceased and we do not consider it appropriate to continue our investigation. We have therefore
closed it without making any findings as to Score Internet's compliance with its duties, either currently or prior to its confirmation that it had taken steps to comply with the OSA.
US free speech website blocks UK users so as to avoid onerous and suffocating internet censorship by Ofcom
23rd April 2025

17th April 2025. See uk.gab.com
The US right-leaning forum website Gab has blocked internet users located in Britain. UK users can now only see a landing page explaining that UK internet censorship laws are unacceptable to the free speech loving forum. The website explains its actions as follows:

ATTENTION: UK Visitor Detected

The following notice applies specifically to users accessing from the United Kingdom.

Access Restricted by Provider
After receiving yet another demand from the UK's speech police, Ofcom, Gab has made the decision to block the entire United Kingdom from accessing our website. This latest email from Ofcom ordered us to
disclose information about our users and operations. We know where this leads: compelled censorship and British citizens thrown in jail for hate speech. We refuse to comply with this tyranny. Gab is an American company with zero
presence in the UK. Ofcom's demands have no legal force here. To enforce anything in the United States, they'd need to go through a Mutual Legal Assistance Treaty request or letters rogatory. No U.S. court is going to enforce a foreign censorship regime.
The First Amendment forbids it. Ofcom will likely try to make an example of us anyway. That's because the UK's Online Safety Act isn't about protecting children. It's about suppressing dissent. They're
welcome to try. The idea that a British regulator can pressure a U.S. company that's IP-blocking the entire UK is as farcical as it is futile. If anything, it proves our point: censorship doesn't work. It only reveals the truth about the censors.
We proudly join platforms like Bitchute in boycotting the United Kingdom. American companies should follow suit. The power of the UK's parliament ends where the First Amendment begins. The only way to vote
against the tyranny of the UK's present regime is to walk away from it, refuse to comply, and take refuge under the impervious shelter of the First Amendment. The UK's rulers want their people kept in the dark. Let them see how
long the public tolerates it as their Internet vanishes, one website at a time.

Update: Ofcom responds

23rd April 2025. See article from ofcom.org.uk
The Online Safety Act introduces new rules for providers of online user-to-user, search and pornography services, to help keep people in the UK safe from content which is illegal in the UK, and to protect children from the most
harmful content such as pornography, suicide and self-harm material. Wherever in the world a service is based, if it has links to the UK, it now has duties to protect UK users. This includes having a significant number of UK
users, or that the UK is a target market. These rules will also apply to services that are capable of being used by individuals in the UK and which pose a material risk of significant harm to them. The Act only requires that
services take action to protect users based in the UK -- it does not require them to take action in relation to users based anywhere else in the world. Ofcom believes its flexible approach to risk assessment and mitigation allows
all services to take appropriate and proportionate steps to protect UK users from illegal content. Some services might seek to prevent users in the UK from accessing their sites or parts of their sites, instead of complying with the Act's requirements to
protect UK users. That is their choice. If a service restricts UK users' access, that action would need to be effective in order for the service to fall out of scope of the Act. The key test remains whether the service has links
to the UK. This will depend on the specific circumstances (including whether it is still targeting UK users, for example, by promoting ways of evading access restrictions). Ofcom would assess whether a service is in scope on a case-by-case basis and,
where the Act applies, would consider the service's compliance with the law and, where necessary, use our investigation and enforcement powers. We recognise the breadth and complexity of the online safety rules and that there is a
diverse range of services in scope. New regulation can create uncertainty and navigating the requirements can be challenging. Ofcom is committed to working with providers to help them comply with the Online Safety Act and protect
their users. We have therefore developed a range of tools and resources to make it easier for them to understand -- and comply with -- their obligations. We also recently published a guide to help small services navigate the Online Safety Act.
US officials challenge Ofcom over online safety laws' impact on free speech

6th April 2025

See article from theguardian.com
US state department officials have challenged Britain's internet censor over the impact of new online censorship laws on freedom of expression, the Guardian understands. A group of officials from the state department's Bureau of Democracy, Human Rights, and Labor (DRL) recently met Ofcom in London. It is understood that they raised the new Online Safety Act and the risk it poses to free speech. The state department body later said the meeting was part of its initiative to affirm the US commitment to defending freedom of expression, both in Europe and around the world. During the meeting, Ofcom officials claimed the new rules were only in place to deal with explicitly illegal content and material that could be harmful to children. A state department spokesperson said: As Vice-President Vance has said, we are concerned about freedom of expression in the United Kingdom. It is important that the UK respect and protect freedom of expression. Details of the meeting emerged after Jonathan Reynolds, the business secretary, denied that concerns over free speech had featured in tariff negotiations with the US. In February, the US vice-president, JD Vance, complained of infringements on free speech in the UK. Elon Musk, one of Trump's closest allies, has repeatedly claimed that some prison sentences handed down to people who used X to incite the riots were a breach of free speech. Free speech advocates say that the UK censorship law will bring about a culture of 'if in doubt, cut it out' as platforms seek to avoid being subject to Ofcom's enforcement powers.
The suffocating mountain of red tape titled the Online Safety Act kills its first British business

24th December 2024

See article from lfgss.com
The owner of the popular cycling forum LFGSS has decided to close it down, due to the enormous risks and expenses of running a British website under the misleadingly named Online Safety Act. He explains:

Reading Ofcom's tome of censorship rules, we're done... we fall firmly into scope, and I have no way to dodge it. The act is too broad, and it doesn't matter that there's never been an instance of any of the proclaimed things that this act protects adults, children and vulnerable people from... the very broad language and the fact that I'm based in the UK means we're covered.

The act simply does not care that this site and platform is run by an individual, and that I do so philanthropically without any profit motive (typically losing money), nor that the site exists to reduce social loneliness, reduce suicide rates, and help build meaningful communities that enrich life. The act only cares that it is "linked to the UK" (by me being involved as a UK native and resident, and by you being a UK based user), and that users can talk to other users... that's it, that's the scope.

I can't afford what is likely tens of thousands to go through all the legal hoops here over a prolonged period of time; the site itself barely gets a few hundred in donations each month and costs a little more than that to run... this is not a venture that can afford compliance costs... and even if it could, what remains is a disproportionately high personal liability for me, and one that could easily be weaponised by disgruntled people who are banned for their egregious behaviour... I do not see an alternative to shuttering it.

The conclusion I have to make is that we're done... Microcosm, LFGSS, and the many other communities running on this platform... the risk to me personally is too high, and so I will need to shutter them all. On Sunday 16th March 2025 (the last day prior to the Act taking effect) I will delete the virtual servers hosting LFGSS and the other communities, immediately ending the approximately 300 small communities that I run, and the few large communities such as LFGSS.
|
Ofcom publishes another mountain of expensive and suffocating censorship red tape
|
|
|
| 16th December 2024
|
|
| See press release
from ofcom.org.uk |
Ofcom writes: Today we are publishing our first major policy Statement for the Online Safety regime. This decision on the Illegal Harms Codes and guidance marks a major milestone, with online
providers now being legally required to protect their users from illegal harm. Ofcom published proposals about the steps providers should take to address illegal harms on their services shortly after passage of the Online Safety
Act in October 2023. Since then, we have been consulting carefully and widely, listening to industry, charities and campaigners, parents and children, as well as expert bodies and law enforcement agencies. With today's publication, online providers must
take action to start to comply with these new rules. The result will be a safer life online for people in the UK, especially children. Providers now have a duty to assess the risk of illegal harms on their services, with a
deadline of 16 March 2025. Subject to the Codes completing the Parliamentary process, from 17 March 2025, providers will need to take the safety measures set out in the Codes or use other effective measures to protect users from illegal content and
activity. We are ready to take enforcement action if providers do not act promptly to address the risks on their services.
Analysis to follow but there are over 1000 pages to get through first!

Ofcom proposes definitions for which websites will be subjected to the most onerous censorship rules defined in the Online Safety Act

31st March 2024

See press release from ofcom.org.uk
See consultation from ofcom.org.uk
Ofcom writes: Ofcom is seeking evidence to inform our codes of practice and guidance on the additional duties that will apply to some of the most widely used online sites and apps -- designated as categorised services - under the
Online Safety Act. Under the new laws, all in-scope tech firms must put in place appropriate safety measures to protect users from online harms. In addition, some online services will have to comply with extra requirements if they
fall into one of three categories, known as Category 1, 2A or 2B. These extra duties include giving users more tools to control what content they see, ensuring protections for news publisher content and journalistic content, preventing
fraudulent advertising and producing transparency reports. Different duties apply, depending on which category a service falls into. The Act requires us to produce codes of practice and guidance outlining the steps that companies
can take to comply with these additional duties. We are inviting evidence from industry, expert groups and other organisations to help inform and shape our approach. A formal consultation on the draft codes and guidance will follow in 2025, taking
account of responses to today's call for evidence.

Advice to Government on categorisation thresholds

Alongside this, we have also today published our advice to Government on the thresholds which would determine whether or not a service falls into Category 1, 2A or 2B. We advise that:

Category 1 (most onerous): should apply to services which meet either of the following conditions:
Condition 1 -- uses a content recommender system; and has more than 34 million UK users on the user-to-user part of its service, representing around 50% of the UK population;
Condition 2 -- allows users to forward or reshare user-generated content; and uses a content recommender system; and has more than 7 million UK users on the user-to-user part of its service, representing circa 10% of the UK population.

Category 2A: should apply to services which meet both of the following criteria:
is a search service, but not a vertical search service; and
has more than 7 million UK users on the search engine part of its service, representing circa 10% of the UK population.

Category 2B: should apply to services which meet both of the following criteria:
allows users to send direct messages; and
has more than 3 million UK users on the user-to-user part of the service, representing circa 5% of the UK population.

Taking our advice into consideration, the Secretary of State must set the threshold conditions in secondary legislation. Once passed, we will then gather information, as needed, from regulated services and produce a published register of categorised services.
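Ofcom's advised thresholds amount to a simple decision rule over a handful of facts about a service. A minimal sketch in Python, assuming hypothetical field names for those facts (the names and the `Service` record are illustrative only, not any official schema; the numeric thresholds are the ones from the advice above):

```python
from dataclasses import dataclass

@dataclass
class Service:
    """Illustrative summary of a regulated service (field names are hypothetical)."""
    uk_u2u_users: int = 0           # UK users on the user-to-user part of the service
    uk_search_users: int = 0        # UK users on the search engine part of the service
    recommender: bool = False       # uses a content recommender system
    allows_resharing: bool = False  # users can forward or reshare user-generated content
    direct_messages: bool = False   # users can send direct messages
    is_search: bool = False         # is a search service
    is_vertical_search: bool = False

def categorise(s: Service) -> set:
    """Return the set of categories the advised thresholds would place s in."""
    cats = set()
    # Category 1: condition 1 (recommender + >34m UK users) OR
    # condition 2 (resharing + recommender + >7m UK users)
    cond1 = s.recommender and s.uk_u2u_users > 34_000_000
    cond2 = (s.allows_resharing and s.recommender
             and s.uk_u2u_users > 7_000_000)
    if cond1 or cond2:
        cats.add("Category 1")
    # Category 2A: non-vertical search service with >7m UK search users
    if s.is_search and not s.is_vertical_search and s.uk_search_users > 7_000_000:
        cats.add("Category 2A")
    # Category 2B: direct messaging with >3m UK users on the user-to-user part
    if s.direct_messages and s.uk_u2u_users > 3_000_000:
        cats.add("Category 2B")
    return cats
```

Note that, on this reading, a single service can satisfy more than one category at once: a large social network with a recommender, resharing and direct messages would come out as both Category 1 and Category 2B.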
Parts of the Online Censorship Act have come into force

31st January 2024

See press release from gov.uk
Abusers, trolls, and predators online now face a fleet of tough new jailable offences from Wednesday 31 January, as offences for cyberflashing, sending death threats, and epilepsy-trolling are written into the statute book after the Online Safety Act
gained Royal Assent. These new criminal offences will protect people from a wide range of abuse and harm online, including threatening messages, the non-consensual sharing of intimate images known as revenge porn, and sending fake
news that aims to cause non-trivial physical or psychological harm. Dubbed Zach's law, a new offence will also mean online trolls that send or show flashing images electronically with the intention of causing harm to people with
epilepsy will be held accountable for their actions and face prison. Following the campaigning of Love Island star Georgia Harrison, bitter ex-partners and other abusers who share, or threaten to share, intimate images on or
offline without the consent of those depicted will face jail time under new offences from today. Those found guilty of the base offence of sharing an intimate image could be in prison for up to 6 months, or 2 years if it is proven
the perpetrator also intended to cause distress, alarm or humiliation, or shared the image to obtain sexual gratification. Cyberflashing on dating apps, AirDrop and other platforms will also result in perpetrators facing up to two
years behind bars where it is done to gain sexual gratification, or to cause alarm, distress or humiliation. Sending death threats or threatening serious harm online will also carry a jail sentence of up to five years under a new
threatening communications offence that will completely outlaw appalling threats made online that would be illegal if said in person. A new false communications offence will bring internet trolls to justice by outlawing the
intentional sending of false information that could cause non-trivial psychological or physical harm to users online. This new offence will bolster the government's strong commitment to clamping down on dangerous disinformation and election interference
online. In the wake of sickening content, often targeted at children, that encourages users to self-harm, a new offence will mean the individuals that post content encouraging or assisting serious self-harm could face up to 5
years behind bars. While much of the Online Safety Act's protections are intended to hold tech companies and social media platforms to account for the content hosted on their sites, these new offences will apply directly to the
individuals sending threatening or menacing messages and bring justice directly to them. Some of the offences that commence from today will be further bolstered too, when the wide-ranging Criminal Justice Bill completes its
passage through Parliament.
A summary of the current position of the UK's (anti-)pornographic internet censorship provisions

4th December 2023

See article from decoded.legal
With 1500 pages outlining a mountain of suffocating red tape in the name of internet regulation, Ofcom delivers a message to small British internet companies

11th November 2023

See article from webdevlaw.uk
The Online Unsafety Bill gets Royal Assent and so becomes law

29th October 2023

See article from ofcom.org.uk
The Online Safety Bill received Royal Assent on 26th October 2023, heralding a new era of internet censorship. The new UK internet censor Ofcom was quick off the mark to outline its timetable for implementing the new censorship regime, setting out its plans for putting the online safety laws into practice, and what it expects from tech firms, now that the Online Safety Act has passed. Ofcom writes: The Act makes companies that operate a wide range of online services legally
responsible for keeping people, especially children, safe online. These companies have new duties to protect UK users by assessing risks of harm, and taking steps to address them. All in-scope services with a significant number of UK users, or targeting
the UK market, are covered by the new rules, regardless of where they are based. While the onus is on companies to decide what safety measures they need given the risks they face, we expect implementation of the Act to ensure
people in the UK are safer online by delivering four outcomes:
stronger safety governance in online firms;
online services designed and operated with safety in mind;
choice for users so they can have meaningful control over their online experiences; and
transparency regarding the safety measures services use, and the action Ofcom is taking to improve them, in order to build trust.
We are moving quickly to implement the new rules

Ofcom will give guidance and set out codes of practice on how in-scope companies can comply with their duties, in three phases, as set out in the Act.

Phase one: illegal harms duties

We will publish draft codes and guidance on these duties on 9 November 2023, including:
analysis of the causes and impacts of online harm, to support services in carrying out their risk assessments;
draft guidance on a recommended process for assessing risk;
draft codes of practice, setting out what services can do to mitigate the risk of harm; and
draft guidelines on Ofcom's approach to enforcement.

We will consult on these documents, and plan to publish a statement on our final decisions in Autumn 2024. The codes of practice will then be submitted to the Secretary of State for Science, Innovation and Technology and, subject to their approval, laid before Parliament.

Phase two: child safety, pornography and the protection of women and girls

Child protection duties will be set out in two parts. First, online pornography services and other interested stakeholders will be able to read and respond to our draft guidance on age assurance from December 2023. This will be relevant to all services in scope of Part 5 of the Online Safety Act. Secondly, regulated services and other interested stakeholders will be able to read and respond to draft codes of practice relating to protection of children, in Spring 2024. Alongside this, we expect to consult on:

We expect to publish draft guidance on protecting women and girls by Spring 2025, when we will have finalised our codes of practice on protection of children.

Phase three: transparency, user empowerment, and other duties on categorised services

A small proportion of regulated services will be designated Category 1, 2A or 2B services if they meet certain thresholds set out in secondary legislation to be made by Government. Our final stage of implementation focuses on additional requirements that fall only on these categorised services. Those requirements include duties to:
produce transparency reports;
provide user empowerment tools;
operate in line with terms of service;
protect certain types of journalistic content; and
prevent fraudulent advertising.

We now plan to issue a call for evidence regarding our approach to these duties in early 2024 and a consultation on draft transparency guidance in mid 2024.

Ofcom must produce a register of categorised services. We will advise Government on the thresholds for these categories in early 2024, and Government will then make secondary legislation on categorisation, which we currently expect to happen by summer 2024. Assuming this is achieved, we will:
publish the register of categorised services by the end of 2024;
publish draft proposals regarding the additional duties on these services in early 2025; and
issue transparency notices in mid 2025.