Ofcom goes full on nightmare with age/ID verification for nearly all websites coupled with a mountain of red tape and expense
8th May 2024

See press release from ofcom.org.uk
See Ofcom's consultation and proposed censorship rules from ofcom.org.uk
Pandering with a theatrical flourish to the 'won't somebody think of the children' mob, Ofcom has proposed a set of censorship rules that demand strict age/ID verification for practically every single website that allows users to post content. On top of that, it is proposing the most onerous mountain of expensive red tape seen in the western world.

There are a few clever sleights of hand that drag most of the internet into the realm of strict age/ID verification. Ofcom argues that nearly all websites will have child users, because 16 and 17 year old 'children' have more or less the same interests as adults, and so there is no content that is not of interest to 'children'. So all websites will have to offer content that is appropriate to children of all ages, or else put in place strict age/ID verification to ensure that content is appropriate to age.

And at every stage of deciding website policy, Ofcom is demanding extensive justification of decisions made and proof of the data used in making those decisions. The amount of risk assessments, documents, research and evidence required makes the 'health and safety' regime look like child's play. On occasion in the consultation documents Ofcom acknowledges that this will impose a massive administrative burden, but swats away criticism by noting that this is the fault of the Online Safety Act itself, and not of Ofcom.

Comment: Online Safety proposals could cause new harms

See article from openrightsgroup.org
Ofcom's consultation on safeguarding children online exposes significant problems regarding the proposed implementation of age-gating measures. While aimed at protecting children from digital harms, the proposed measures introduce risks to cybersecurity,
privacy and freedom of expression. Ofcom's proposals outline the implementation of age assurance systems, including photo-ID matching, facial age estimation, and reusable digital identity services, to restrict access to popular
platforms like Twitter, Reddit, YouTube, and Google that might contain content deemed harmful to children. Open Rights Group warns that these measures could inadvertently curtail individuals' freedom of expression while
simultaneously exposing them to heightened cybersecurity risks. Jim Killock, Executive Director of Open Rights Group, said: Adults will be faced with a choice: either limit their freedom of
expression by not accessing content, or expose themselves to increased security risks that will arise from data breaches and phishing sites. Some overseas providers may block access to their platforms from the UK rather than
comply with these stringent measures. We are also concerned that educational and help material, especially where it relates to sexuality, gender identity, drugs and other sensitive topics may be denied to young people by
moderation systems. Risks to children will continue with these measures. Regulators need to shift their approach to one that empowers children to understand the risks they may face, especially where young people may look for
content, whether it is meant to be available to them or not.
Open Rights Group underscores the necessity for privacy-friendly standards in the development and deployment of age-assurance systems mandated by the Online
Safety Act. Killock notes, Current data protection laws lack the framework to pre-emptively address the specific and novel cybersecurity risks posed by these proposals. Open Rights Group urges the government to prioritize
comprehensive solutions that incorporate parental guidance and education rather than relying largely on technical measures.
The Online Censorship Bill passes its final parliamentary hurdle

20th September 2023

See article from newscientist.com
The UK's disgraceful Online Safety Bill has passed through Parliament and will soon become law. The wide-ranging legislation is likely to affect every internet user in the UK and any service they access, and will generate mountains of onerous red tape for any internet business stupid enough to be based in Britain. Potential impacts are still unclear, and some of the new regulations are technologically impossible to comply with. A key sticking point is what the legislation means for end-to-end
encryption, a security technique used by services like WhatsApp that mathematically guarantees that no one, not even the service provider, can read messages sent between two users. The new law gives regulator Ofcom the power to intercept and check this
encrypted data for illegal or harmful content. Using this power would require service providers to create a backdoor in their software, allowing Ofcom to bypass the mathematically secure encryption. But this same backdoor would be abused by
hackers, thieves, scammers and malicious states to snoop, steal and hack. Beyond encryption, the bill also brings in mandatory age checks on pornography websites and requires that websites have policies in place to protect people from harmful or
illegal content. What counts as illegal and exactly which websites will fall under the scope of the bill is unclear, however. Neil Brown at law firm decoded.legal says Ofcom still has a huge amount of work to do. The new law could plausibly affect any
company that allows comments on its website, publishes user-generated content, transmits encrypted data or hosts anything that the government deems may be harmful to children, says Brown: What I'm fearful of is that
there are going to be an awful lot of people, small organisations - not these big tech giants -- who are going to face pretty chunky legal bills trying to work out if they are in scope and, if so, what they need to do.
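The 'mathematical guarantee' of end-to-end encryption mentioned above comes from key agreement: the message key is derived independently on the two endpoints and never crosses the provider's servers. The toy Diffie-Hellman sketch below is purely illustrative; the parameters are deliberately tiny and insecure, and real messengers such as WhatsApp use the Signal protocol over vetted elliptic-curve groups rather than anything shown here.

```python
# Toy Diffie-Hellman key agreement, illustrating why a relaying service
# provider never learns the message key in end-to-end encryption.
# Parameters are deliberately tiny and NOT secure; demo only.

import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime: fine for a demo, far too small for real use
G = 3

# Each user generates a private key that never leaves their own device.
alice_priv = secrets.randbelow(P - 2) + 1
bob_priv = secrets.randbelow(P - 2) + 1

# Only these public values ever cross the provider's servers.
alice_pub = pow(G, alice_priv, P)
bob_pub = pow(G, bob_priv, P)

# Each endpoint combines its own private key with the other's public value.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)
assert alice_shared == bob_shared  # both sides derive the same secret

# The message key is derived from the shared secret on-device. The provider,
# seeing only alice_pub and bob_pub, cannot feasibly compute it -- which is
# why "checking" messages requires weakening the scheme with a backdoor.
key = hashlib.sha256(alice_shared.to_bytes(16, "big")).digest()
```

Because the provider only ever relays the public values, any system that lets a third party read messages must either hold a copy of the key or inspect content before encryption, and either route is the backdoor the article describes.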
The Online Censorship Bill has now been passed by the House of Lords with weak promises about not breaking user security

9th September 2023

See article from eff.org
The U.K.'s Online Safety Bill has passed a critical final stage in the House of Lords, and envisions a potentially vast scheme to surveil internet users. The bill would empower the U.K. government, in certain situations, to demand
that online platforms use government-approved software to search through all users' photos, files, and messages, scanning for illegal content. Online services that don't comply can be subject to extreme penalties, including criminal penalties.
Such a backdoor scanning system can and will be exploited by bad actors. It will also produce false positives, leading to false accusations of child abuse that will have to be resolved. That's why the bill is incompatible with
end-to-end encryption--and human rights. EFF has strongly opposed this bill from the start. Now, with the bill on the verge of becoming U.K. law, the U.K. government has sheepishly acknowledged that it may not be able to make use
of some aspects of this law. During a final debate over the bill, a representative of the government said that orders to scan user files can be issued only where technically feasible, as determined by Ofcom, the U.K.'s telecom regulatory agency. He also
said any such order must be compatible with U.K. and European human rights law. That's a notable step back, since previously the same representative, Lord Parkinson of Whitley Bay, said in a letter to the House of Lords that the
technology that would magically make invasive scanning co-exist with end-to-end encryption already existed. We have seen companies develop such solutions for platforms with end-to-end encryption before, wrote Lord Parkinson in that letter.
Now, Parkinson has come quite close to admitting that such technology does not, in fact, exist. On Tuesday, he said: There is no intention by the Government to weaken the encryption technology used
by platforms, and we have built strong safeguards into the Bill to ensure that users' privacy is protected. If appropriate technology which meets these requirements does not exist, Ofcom cannot require its use. That is why the
powers include the ability for Ofcom to require companies to make best endeavors to develop or source a new solution.
The same day that these public statements were made, news outlets reported that the U.K. government
privately acknowledged that there is no technology that could examine end-to-end encrypted messages while respecting user privacy.

People Need Privacy, Not Weak Promises
Let's be clear: weak statements by government ministers, such as the hedging from Lord Parkinson during this week's debate, are no substitute for real privacy rights. Nothing in the law's text has changed. The
bill gives the U.K. government the right to order message and photo-scanning, and that will harm the privacy and security of internet users worldwide. These powers, enshrined in Clause 122 of the bill, are now set to become law. After that, the regulator
in charge of enforcing the law, Ofcom, will have to devise and publish a set of regulations regarding how the law will be enforced. Several companies that provide end-to-end encrypted services have said they will withdraw from the
U.K. if Ofcom actually takes the extreme choice of requiring examination of currently encrypted messages. Those companies include Meta-owned WhatsApp, Signal, and U.K.-based Element, among others. While it's the last minute,
Members of Parliament still could introduce an amendment with real protections for user privacy, including an explicit protection for real end-to-end encryption. Failing that, Ofcom should publish regulations that make clear that
there is no available technology that can allow for scanning of user data to co-exist with strong encryption and privacy. Finally, lawmakers in other jurisdictions, including the United States, should take heed of the embarrassing
result of passing a law that is not just deceptive, but unhinged from computational reality. The U.K. government has insisted that through software magic, a system in which they can examine or scan everything will also somehow be a privacy-protecting
system. Faced with the reality of this contradiction, the government has turned to an 11th hour campaign to assure people that the powers it has demanded simply won't be used.
31st August 2023

The British Computer Society is not impressed by the Online Safety Bill. See article from bcs.org
6th August 2023

The legislation is also terrible on free speech and poses global risks. See article from reason.com
31st July 2023

A fascinating article speculating on how the UK's Online Censorship Bill will actually impact the internet business; as always, the onerous red tape will most benefit the US internet giants. See article from regulate.tech
4th May 2023

The bill aims to make the country the safest place in the world to be online, but has been mired by multiple delays and criticism that it has grown too large and unwieldy to please anyone. See article from theverge.com
Although the French decision to deem an internet censorship law as unconstitutional has passed into internet history, the Constitutional Council's decision provides some instructive comparisons when we examine the UK's Online Safety
Bill.
13th March 2023

See article from cyberleagle.com by Graham Smith
The government is set to grant itself an 18 week extension to the parliamentary time available to force through its unsafe Internet Censorship Bill

13th March 2023
The government's Internet 'Safety' Bill is coming under a lot of pressure for its disgraceful intention to compromise internet security for all British people by removing the secure encrypted communication used to keep out hackers, blackmailers, scammers and thieves. Perhaps acknowledging the opposition from security experts, the government is giving itself another 18 weeks to push the bill through parliament; otherwise it would be in danger of being timed out. The extension will be presented to parliament tomorrow.
12th March 2023

Yes, the government can demand that tech companies compromise the security of encrypted communications for all users. See article from untidy.substack.com
14th February 2023

MPs must beware making the Online Safety Bill even more damaging. See article from capx.co
18th January 2023

The road to hell is paved with good intentions. By Matthew Lesh. See article from thecritic.co.uk
Online Safety Bill latest change: State enforcement of big tech terms

12th January 2023

See Creative Commons article from openrightsgroup.org by Dr Monica Horten
The Online Safety Bill is currently going back to Report Stage in the Commons on 16th January, and is widely expected to be in the Lords by the end of the month or the beginning of February. We anticipate it could complete its legislative passage by June. At the end of last year, a widely publicised change to the Online Safety Bill took out the so-called "legal but harmful" clauses for adults. The government has claimed this is protecting free speech.
However, in their place, new clauses have been shunted in that create a regime for state-mandated enforcement of tech companies' terms and conditions. It raises new concerns around embedded power for the tech companies and a worrying
lack of transparency around the way that the regulator, Ofcom, will act as enforcer-in-chief.

Whatever they say goes

It is not a good look for free speech. It does not alter the underlying framework
of the Bill that establishes rules by which private companies will police our content. On the other hand, it does create a shift in emphasis away from merely "taking down" troublesome content, and towards "acting against users".
For policy geeks, the change removed Clauses 12 and 13 of the Bill, concerning "content harmful to adults". The clauses regarding harmful content for children, Clauses 10 and 11, remain. The two
deleted clauses have been replaced by five new clauses addressing the terms of service of the tech companies. If their terms of service say they will "act against" content of "a particular kind", then they will follow through and do
so. This will be enforced by Ofcom. The new clauses emphatically refer to "restricting users' access" as well as taking down their content, or banning users from the service. The language of "restricting
access" is troubling because the implied meaning suggests a policy of limiting free speech, not protecting it. This is an apparent shift in emphasis away from taking down troublesome content, to preventing users from seeing it in the first place. It
is an environment of sanctions rather than rights and freedoms. There is no definition of "a particular kind" and it is up to the tech companies to identify the content they would restrict. Indeed, they could restrict
access to whatever they like, as long as they tell users in the terms of service. The political pressure will be on them to restrict the content that the government dictates. It will not be done by the law, but by backroom
chats, nods and winks over emails between the companies, Ofcom and government Ministries. Joining the dots, Ofcom has a legal duty to "produce guidance" for the tech companies with regard to compliance. Ofcom takes
direction from the two responsible Ministries, DCMS and the Home Office. A quick call with expression of the Minister's concerns could be used to apply pressure, with the advantage that it would skirt around publicly accountable procedures. "Yes,
Minister" would morph into real life.

Restricting access to content

The new clauses do attempt to define "restricting users' access to content". It occurs when a tech company
"takes a measure which has the effect that a user is unable to access content without taking a prior step" or "content is temporarily hidden from a user". It's a definition that gives plenty of room for tech companies to be inventive
about new types of restrictions. It does seem to bring in the concept of age-gating, which is a restriction on access, requiring people to take the step of establishing their identity or age-group, before being allowed access. The
new provisions also state that tech companies "must not act against users except in accordance with their terms and conditions", but the repetition of restrictive language suggests that the expectation is that they will restrict. There is no
recognition of users' freedom of expression rights, and they may only complain about breach of contract, not breach of rights. These restrictive clauses should also be seen in light of another little twist of language by the
Bill's drafters: "relevant content". This is any content posted by users onto online platforms, but it is also any content capable of being searched by search engines, which are in scope of the Bill. The mind boggles at how much over-reach this
Bill could achieve. How many innocent websites could find themselves demoted or down-ranked on the basis of the government whim of the day? "Relevant content" is applicable when users seek to complain. But how can users
complain about their website being down-ranked in a search listing when they don't have any confirmation that it has happened? The Bill makes no provision for users to be informed about "restricted access". The change
fails to take account of the potential cross-border effects, that will especially affect search functions. The Bill limits its jurisdiction to what it calls "UK-linked" content or web services. The definition is imprecise and includes content
that is accessible from the UK. Online platform terms and conditions are usually written for a global user base. It's not clear if this provision could over-reach into other jurisdictions, potentially banning lawful content or users elsewhere.
Failure of policy-making

It reflects a failure of policy-making. These platforms are important vehicles for the global dissemination of information, knowledge and news. The restrictions that online
platforms have in their armoury will limit the dissemination of users' content, in ways that are invisible and draconian. For example, they could use shadow bans, which operate by limiting ways that content is shown in newsfeeds and timelines. The
original version of the Bill as introduced to Parliament did acknowledge this, and even allowed users to complain about them. The current version does not. Overall, this is a failure to recognise that the vast majority of users are
speaking lawfully. The pre-Christmas change to the Bill puts them at risk not only of their content being taken down but their access being restricted. Freedom of expression is a right to speak and to be informed. This change affects both.
Open Rights Group reports on the latest government amendments for the Online Censorship Bill

14th December 2022

See Creative Commons article from openrightsgroup.org
The Online Safety Bill is back in Parliament. It had been stalled for five months whilst the government made a few changes. A Parliamentary debate on Monday (5th December) revealed the shift in policy direction for the first time. It's a relatively small change, with big implications. According to the government, the Online Safety Bill is supposed to protect children. However, from a digital rights perspective it is probably the most worrying piece of legislation ever imagined to
date. The government's focus is on the content it wants to ban, with little attention paid to the impact on freedom of expression or privacy. The lack of definition or precision in the text leaves wide-open loopholes for over-removals of content and the
possibility of government-imposed, privatised surveillance. The emphasis was on new amendments to be tabled early next year. Self-harm content, deep fakes and the sharing of non-consensual intimate images will be defined as new
criminal offences and illegal content. The subtle policy shift turns on a requirement for large online platforms to tackle the so-called "legal but harmful" content. This is a legally-problematic, grey area. It is about
content that is not illegal but which the government wants to ban, and understood to include eating disorders, self-harm, and false claims about medicines. The government has announced a plan to delete this requirement, but only
for adult users, not for children. An amendment will be tabled next week. A further, legally problematic, amendment requires platforms to allow adult users to filter out these kinds of harmful content for themselves. The idea is a
kind of filter button where users can select the type of harmful content that they don't want to see. In tandem, there will be an amendment that makes online platforms enforce their terms and conditions with regard to content that
is not addressed by the Bill. We have seen drafts of some of these amendments, and await the final versions. This filter, together with the requirement to enforce terms and conditions, and an existing
requirement to remove all illegal content, is what the government is calling its "triple shield". The government claims this will protect users from the range of harms set out in the Bill. It also claims the move will protect free speech. This
claim does not stack up, as the underlying censorship framework remains in place, including the possibility of general monitoring and upload filters. Moreover, the effect of these amendments is to mitigate in favour of age-gating.
The notion of "legal but harmful" content for children remains in the Bill. In Monday's debate, government Ministers emphasised the role of "age assurance" which is a requirement in the Bill although it does not say how it should be
implemented. The government's position on age-gating is broader than just excluding under-18s from 'adult' content. The Secretary of State, Michelle Donelan, said that all platforms must know the age of their users. They may be
required to differentiate between age-groups, in order to prevent children from engaging with age-inappropriate harmful content to be defined by the government. The likely methods will use biometric surveillance. MPs have also
passed an amendment that confirms chat controls on private messaging services. This is the "spy clause", renumbered S.106 (formerly S.104). It's a stealth measure that is almost invisible in the text, with no precision as to what providers will do. The government's preferred route is understood to be client-side scanning. This completes a trio of surveillance on public posts, private chats and children.
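Client-side scanning, as mooted above, means content is checked on the user's device before it is ever encrypted, typically by matching against a database of known-bad hashes. The sketch below is a minimal toy illustration of that hash-matching idea only; the blocklist entry and the function name are invented for illustration and bear no relation to any real scanning system.

```python
# Toy illustration of client-side scanning: content is checked against a
# hash blocklist on the user's device *before* encryption, which is why
# critics argue it defeats the point of end-to-end encryption.
# The blocklist entry here is an invented example, not a real database.

import hashlib

BLOCKLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def scan_before_send(message: bytes) -> bool:
    """Return True if the message may be sent (no blocklist match)."""
    return hashlib.sha256(message).hexdigest() not in BLOCKLIST

assert scan_before_send(b"hello mum") is True
assert scan_before_send(b"known-bad-image-bytes") is False
```

The critical point is where the check runs: because it happens on-device before encryption, whoever controls the blocklist effectively decides what may be said, regardless of how strong the encryption afterwards is.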
7th December 2022

The UK's amended Online Safety Bill covers services available in the country even if they are based elsewhere. But what does the bill entail, and if passed, how will it affect companies that conduct business online? See article from computerworld.com
Index on Censorship has commissioned a legal opinion by Matthew Ryder KC and finds that the powers conceived would not be lawful under our common law and the existing human rights legal framework

30th November 2022

See article from indexoncensorship.org
See legal opinion [pdf] from indexoncensorship.org
There has been significant commentary on the flaws of the Online Safety Bill, particularly the harmful impact on freedom of expression from the concept of the duty of care over adult internet users and the problematic legal but harmful category for
online speech. Index on Censorship has identified another area of the Bill, far less examined, that now deserves our attention. The provisions in the Online Safety Bill that would enable state-backed surveillance of private communications contain some of
the broadest and most powerful surveillance powers ever proposed in any Western democracy. It is our opinion that the powers conceived in the Bill would not be lawful under our common law and existing human rights legal framework. The
legal opinion shows how the powers conceived go beyond even the controversial powers contained within the Investigatory Powers Act (2016) but critically, without the safeguards that Parliament inserted into the Act in order to ensure it protected the
privacy and the fundamental rights of UK citizens. The powers in the Online Safety Bill have no such safeguards as of yet. The Bill as currently drafted gives Ofcom the powers to impose Section 104 notices on the operators of
private messaging apps and other online services. These notices give Ofcom the power to impose specific technologies (e.g. algorithmic content detection) that provide for the surveillance of the private correspondence of UK citizens. The powers allow the
technology to be imposed with limited legal safeguards. It means the UK would be one of the first democracies to place a de facto ban on end-to-end encryption for private messaging apps. No communications in the UK -- whether between MPs, between
whistleblowers and journalists, or between a victim and a victims support charity -- would be secure or private. In an era where Russia and China continue to work to undermine UK cybersecurity, we believe this could pose a critical threat to UK national
security. See full article from indexoncensorship.org
The UK government announces that its Online Censorship Bill returns to Parliament on 5th December

26th November 2022

From The Times
The Times is reporting that the government's Online Censorship Bill will return to the House of Commons on December 5th with a few amendments re 'harmful but legal' content. Rishi Sunak is to introduce a compromise over the Online Safety Bill that
will involve users being able to filter out legal but harmful content without it being removed by tech platforms. The bill has been paused while the government takes out provisions that alarmed free speech advocates. Of particular concern
were sections that would have led to tech platforms such as Facebook, Instagram, TikTok and Google removing content that was deemed to be legal, but harmful to adults. The government will also detail a new offence of sharing deepfake porn.
Those who share pornographic deepfakes, explicit images or videos that have been manipulated to look like someone without their consent, could be jailed under the proposed changes. It is not clear how the government will take on the international porn
websites where faked porn of celebrities is commonplace. Perhaps the government will have to block them all. Meanwhile the censorship bill is attracting further criticism over the government's powers to degrade encryption. Encryption is used to keep British
people safe from hackers, blackmailers and thieves, not to mention snooping by malicious governments most notably China and Russia. The Open Rights Group explains in an
article from openrightsgroup.org :
The Online Safety Bill requires ALL online speech to be monitored for harmful content, including the private conversations you have on your phone with friends and family. Companies like Whatsapp and Signal will be required by law to break end-to-end
encryption, so the Government can automatically scan your messages. They say encryption is dangerous, but the opposite is true. Encryption keeps your information and transactions safe from criminals. It ensures your private
messages stay private. If the UK Government can break encryption to read your messages, that means scammers, hackers and foreign governments can too. Save encryption, Protect the security of your phone If they get their way, your
phone will be turned into a spy in your pocket. Billions of personal messages will be ready to be hacked, sold and exploited. The Government's plan to access your private messages will help criminals and make us less safe.
22nd November 2022

The chilling effect of this new legislation will be violation of privacy and infringement of free speech online. By Monica Horten. See article from newstatesman.com
The Government is discussing reworking the free speech curtailing censorship of 'legal but harmful' content into something more optional for adults

20th November 2022

See article from telegraph.co.uk
The Telegraph is reporting on significant changes being considered by the government to its Online Censorship Bill. The government is considering backing off from the government-defined censorship of 'legal but harmful' content on most websites available in the UK. The government has rightfully been taking stick for these free speech curtailing measures, particularly as the censorship is expected to be implemented mostly by woke US internet giants who clearly don't care about free speech, and will over-censor to ensure that they don't get caught up in the expense of getting it wrong by under-censoring. Culture Secretary Michelle Donelan is said to be considering the option for adults to be able to self-censor 'legal but harmful' content by clicking a filter button that will order websites to block such content. Of course children will not be able to opt out of that choice. And of course this will mean that age and identity verification has to be in place to ensure that
only adults can opt out. A Culture Department spokesman said: The Secretary of State has committed to strengthen protections for free speech and children in the Online Safety Bill and bring the bill back to the
Commons as soon as possible. It remains the Government's intention to pass the bill this session.
13th November 2022

Graham Smith suggests a few ideas to pare back the unviable monstrosity that currently exists. See article from cyberleagle.com
Government signals that it will delete the censorship of 'legal but harmful' content for adults chapter from the Online Censorship Bill

2nd November 2022

See article from inews.co.uk
The Online Censorship Bill is due to be brought back to Parliament later this month, when Culture Secretary Michelle Donelan will present an amended version of the Online Safety Bill to MPs. It is reported that the controversial 'legal but harmful' rules are set to be watered down. She is scrapping the sweeping rules which required social media companies to address content that is not illegal but is deemed dangerous. The rules would have meant social media sites, such as Twitter, Instagram and Facebook, were responsible for dealing with this content for both adults and children. But, amid criticism that it would have led to a widespread attack on freedom of speech by companies hoping to avoid hefty fines, it seems that the new laws will only apply to material targeted at children.
The Government pauses the Online Censorship Bill to give the new government a chance to consider its business-suffocating mountain of red tape and its curtailment of free speech

27th October 2022

See article from finance.yahoo.com
PoliticsHome spotted the change to the House of Commons schedule last night, reporting that the Online Censorship Bill had been dropped from Commons business next week. A source in the Department for Digital, Culture, Media and Sport (DCMS) told TechCrunch that the latest delay to the bill's parliamentary timetable is to allow time for MPs to read new amendments, which they also confirmed are yet to be laid. But they suggested the delay will not affect the passage of the bill, saying it will progress within the next few weeks. The change of PM may not mean major differences in policy approach in the arena of online regulation, as Rishi Sunak has expressed similar concerns about the Online Safety Bill's impact on free speech, also seemingly centred on clauses pertaining to restrictions on the legal but harmful speech of adults.
27th October 2022

Parliament debates in Westminster Hall that 'this House has considered online harms'. See article from theyworkforyou.com
|
|
|
|
| 18th
October 2022
|
|
|
Coroner in the Molly Russell case claims that social media should be split into adult and child sections See article from theregister.com
|
|
Ofcom publishes a report seemingly trying to categorise or classify these 'harms' and their associated risks with a view to its future censorship role
|
|
|
| 25th
September 2022
|
|
| See article from ofcom.org.uk See
report [pdf] from ofcom.org.uk |
Ofcom writes: The Online Safety Bill, as currently drafted, will require Ofcom to assess, and publish its findings about the risks of harm arising from content that users may encounter on in-scope services, and will require in-scope
services to assess the risks of harm to their users from such content, and to have systems and processes for protecting individuals from harm. Online users can face a range of risks online, and the harms they may experience are
wide-ranging, complex and nuanced. In addition, the impact of the same harms can vary between users. In light of this complexity, we need to understand the mechanisms by which online content and conduct may give rise to harm, and use that insight to
inform our work, including our guidance to regulated services about how they might comply with their duties. This report sets out a generic model for understanding how online harms manifest. This research aimed to test a
framework, developed by Ofcom, with real-life user experiences. We wanted to explore if there were common risks and user experiences that could provide a single framework through which different harms could be analysed. There are a couple of important
considerations when reading this report:
The research goes beyond platforms' safety systems and processes to help shed broader light on what people are experiencing online. It therefore touches on issues that are beyond the scope of the proposed online safety regime.
The research reflects people's views and experiences of their online world: it is based on people self-identifying as having experienced 'significant harm', whether caused directly or indirectly, or 'illegal content'.
Participants' definitions of harmful and illegal content may differ and do not necessarily align with how the Online Safety Bill, Ofcom or others may define them.
|
|
UK Online Censorship Bill set to continue after 'tweaks'
|
|
|
| 16th September
2022
|
|
| See article from techdirt.com |
After a little distraction for the royal funeral, the UK's newly elected prime minister has said she will be continuing with the Online Censorship Bill. She said: We will be proceeding with the Online Safety Bill. There
are some issues that we need to deal with. What I want to make sure is that we protect the under-18s from harm and that we also make sure free speech is allowed, so there may be some tweaks required, but certainly he is right that we need to protect
people's safety online.
TechDirt comments: This is just so ridiculously ignorant and uninformed. The Online Safety Bill is a disaster in waiting and I wouldn't be surprised if some websites chose to
exit the UK entirely rather than continue to deal with the law. It won't actually protect the children, of course. It will create many problems for them. It won't do much at all, except make internet companies question whether
it's even worth doing business in the UK.
|
|
Former UK Supreme Court judge savages the government's censorship bill
|
|
|
|
18th August 2022
|
|
| See article from spectator.co.uk by Jonathan Sumption
|
Weighing in at 218 pages, with 197 sections and 15 schedules, the Online Safety Bill is a clunking attempt to regulate content on the internet. Its internal contradictions and exceptions, its complex paper chase of definitions, its weasel language
suggesting more than it says, all positively invite misunderstanding. Parts of it are so obscure that its promoters and critics cannot even agree on what it does. The real vice of the bill is that its provisions are not limited to
material capable of being defined and identified. It creates a new category of speech which is legal but harmful. The range of material covered is almost infinite, the only limitation being that it must be liable to cause harm to some people.
Unfortunately, that is not much of a limitation. Harm is defined in the bill in circular language of stratospheric vagueness. It means any physical or psychological harm. As if that were not general enough, harm also extends to anything that may increase
the likelihood of someone acting in a way that is harmful to themselves, either because they have encountered it on the internet or because someone has told them about it. This test is almost entirely subjective. Many things which
are harmless to the overwhelming majority of users may be harmful to sufficiently sensitive, fearful or vulnerable minorities, or may be presented as such by manipulative pressure groups. At a time when even universities are warning adult students
against exposure to material such as Chaucer with his rumbustious references to sex, or historical or literary material dealing with slavery or other forms of cruelty, the harmful propensity of any material whatever is a matter of opinion. It will vary
from one internet user to the next. If the bill is passed in its current form, internet giants will have to identify categories of material which are potentially harmful to adults and provide them with options to cut it out or
alert them to its potentially harmful nature. This is easier said than done. The internet is vast. At the last count, 300,000 status updates are uploaded to Facebook every minute, with 500,000 comments left that same minute. YouTube adds 500 hours of
videos every minute. Faced with the need to find unidentifiable categories of material liable to inflict unidentifiable categories of harm on unidentifiable categories of people, and threatened with criminal sanctions and enormous regulatory fines (up to
10 per cent of global revenue), what is a media company to do? The only way to cope will be to take the course involving the least risk: if in doubt, cut it out. This will involve a huge measure of regulatory overkill. A new era
of intensive internet self-censorship will have dawned. See full article from spectator.co.uk
|
|
British Computer Society experts are not impressed by The Online Censorship Bill
|
|
|
| 15th
August 2022
|
|
| See article from bcs.org See
BCS report [pdf] from bcs.org |
Plans to compel social media platforms to tackle online harms are not fit for purpose according to a new poll of IT experts. Only 14% of tech professionals believed the Online Harms Bill was fit for purpose, according to the
survey by BCS, The Chartered Institute for IT. Some 46% said the bill was not workable, with the rest unsure. The legislation would have a negative effect on freedom of speech, most IT specialists (58%)
told BCS. Only 19% felt the measures proposed would make the internet safer, with 51% saying the law would not make it safer to be online. There were nearly 1,300 responses from tech professionals to the
survey by BCS. Just 9% of IT specialists polled said they were confident that legal but harmful content could be effectively and proportionately removed. Some 74% of tech specialists said they felt the bill
would do nothing to stop the spread of disinformation and fake news.
|
|
whilst we still can!
|
|
|
|
31st July 2022
|
|
| |
Offsite Comment: Fixing the UK's Online Safety Bill, part 1: We need answers. 31st July 2022. See
article from webdevlaw.uk
by Heather Burns
Offsite Comment: The delay to the online safety bill It won't make it any easier to please everyone 17th July 2022. See
article from theguardian.com by Alex Hern
Offsite Comment: It’s time to kill the Online Safety Bill for good... Not only is it bad for business, bad for free speech, and -- by attacking encryption -- bad for online safety 16th July 2022. See
article from spectator.co.uk by Sam Ashworth-Hayes
|
|
John Penrose MP bizarrely proposes that social media companies should keep a truthfulness score for all their users
|
|
|
| 10th July 2022
|
|
| See Online Censorship Bill proposed amendments [pdf] from docs.reclaimthenet.org
|
John Penrose, a Tory MP, has tabled an amendment to the Online Censorship Bill currently being debated in Parliament: To move the following Clause--
Factual Accuracy
(1) The purpose of this section is to reduce the risk of harm to users of regulated services caused by disinformation or misinformation. (2) Any Regulated Service must provide an index of the historic factual
accuracy of material published by each user who has-- (a) produced user-generated content, (b) news publisher content, or (c) comments and reviews on provider content
whose content is viewed more widely than a minimum threshold to be defined and set by OFCOM.
(3) The index under subsection (1) must-- (a) satisfy minimum quality criteria to be set
by OFCOM, and (b) be displayed in a way which allows any user easily to reach an informed view of the likely factual accuracy of the content at the same time as they encounter it.
Surely it is a case of be careful what you wish for. After all, it would be great to see truth scores attached to all politicians' social media posts. I somehow think that other MPs will see the flaws in this idea and will be rather quick to see it consigned to
the parliamentary trash can. |
|
|
|
|
| 7th July 2022
|
|
|
...er the same sleazy party-loving people that gave you one rule for them and one rule for us! See article from reprobatepress.com
|
|
Legal analysis of UK internet censorship proposals
|
|
|
|
5th July 2022
|
|
| |
Offsite Article: French lawyers provide the best summary yet 15th June 2022. See article
from taylorwessing.com Offsite Article: Have we opened Pandora's box? 20th June 2022. See
article from tandfonline.com
Abstract In thinking about the developing online harms regime (in the UK and elsewhere), it is forgivable to think only of how laws placing responsibility on social media platforms to prevent hate speech may benefit
society. Yet these laws could have insidious implications for free speech. By drawing on Germany's Network Enforcement Act I investigate whether the increased prospect of liability, and the fines that may result from breaching the duty of care in the
UK's Online Safety Act - once it is in force - could result in platforms censoring more speech, but not necessarily hate speech, and using the imposed responsibility as an excuse to censor speech that does not conform to their objectives. Thus, in
drafting a Bill to protect the public from hate speech we may unintentionally open Pandora's Box by giving platforms a statutory justification to take more control of the message. See full
article from tandfonline.com Offsite Article: The Online Safety
Act - An Act of Betrayal 5th July 2022. See article from ukcolumn.org by Iain Davis
The Online Safety Bill (OSB) has been presented to the public as an attempt to protect children from online grooming and abuse and to limit the reach of terrorist propaganda. This, however, does not seem to be its primary focus.
The real objective of the proposed Online Safety Act (OSA) appears to be narrative control.
|
|
The Christian Institute realises that religious voices will be readily silenced under the Online Censorship Bill
|
|
|
| 30th June 2022
|
|
| See article from christian.org.uk
|
The Christian Institute has been reading a report by the Institute of Economic Affairs (IEA) and has realised that the Christians will be first against the wall when the UK government empowers US internet Goliaths partnering with the easily offended to
control what people are allowed to say. The Christian Institute explains: A report titled An Unsafe Bill, published by the Institute of Economic Affairs (IEA), outlines the Online Safety Bill's impact on free speech,
privacy and innovation. The Bill gives strong incentives for social media companies and search engines to restrict content which is legal but harmful to adults and empowers Government ministers to decide what this covers.
The IEA warns this will give the Secretary of State for Culture and watchdog Ofcom unprecedented powers to define and limit speech, with limited parliamentary or judicial oversight. The report highlights that
because tech companies could be fined up to ten per cent of their annual global turnover if they fail to uphold their new duties, platforms may use automated tools in a precautionary and censorious manner. The briefing also warns
that the Bill's free speech protections appear wholly inadequate, with the risk that those claiming distress will request the removal of speech with which they disagree. Writing in The Times, its co-author Matthew Lesh called the
Bill a recipe for automated over-removal of speech on an industrial scale, to ensure compliance and placate the most easily offended. He commented: Is the government trying to out-compete Russia and China in online
censorship?
An Unsafe Bill 29th June 2022. See full report [pdf] from iea.org.uk
Here is the summary of the quoted report.
An Unsafe Bill: How the Online Safety Bill threatens free speech, innovation and privacy By Matthew Lesh, Head of Public Policy, Institute of Economic Affairs, and Victoria Hewson, Head of Regulatory Affairs, Institute of Economic Affairs

Summary

- The Online Safety Bill establishes a new regulatory regime for digital platforms intended to improve online safety.
- The Bill raises significant issues for freedom of expression, privacy and innovation.
- There is a lack of evidence to justify the legislation, with respect to both the alleged prevalence of what the Bill treats as 'harm' and the link between the proposed measures and the desired objectives.

Freedom of expression

- The duties in the Bill, in respect of illegal content and legal content that is harmful to adults, combined with the threat of large fines and criminal liability, risk platforms using automated tools in a precautionary and censorious manner.
- The Bill appears designed to discourage platforms from hosting speech that the Secretary of State considers to be harmful, even if that speech is legal. The Bill allows for the expansion of the category of 'legal but harmful' content with limited parliamentary scrutiny.
- The Secretary of State and Ofcom will have unprecedented powers to define and limit speech, with limited parliamentary or judicial oversight.
- The introduction of age assurance requirements will force search engines and social media to withhold potentially harmful information by default, making it difficult for adults to access information without logging into services, and entirely forbidding children from content even if it could be educationally valuable.
- Some small to mid-sized overseas platforms could block access for UK users to limit their regulatory costs and risks, thereby reducing British users' access to online content.
- Safeguards designed to protect free expression are comparatively weak and could backfire by requiring application in a 'consistent' manner, leading to the removal of more content.

Privacy

- The safety duties will lead platforms to profile users and monitor their content and interactions, including by using technologies mandated by Ofcom.
- The inclusion of private messaging in the duties risks undermining encryption.
- The child safety duties will infringe the privacy of adult users by requiring them to verify their age, through an identity verification or age assurance process, to access content that is judged unsuitable for children.
- The user empowerment duties will further necessitate many users verifying their identities to platforms.

Innovation

- The Bill imposes byzantine requirements on businesses of all sizes. Platforms face large regulatory costs and criminal liability for violations, which could discourage investment and research and development in the United Kingdom.
- The Bill's regulatory costs will be more burdensome for start-ups and small and medium-sized businesses, which lack the resources to invest in legal and regulatory compliance and automated systems, and therefore the Bill could entrench the market position of 'Big Tech' companies.
- The likely result of the additional regulatory and cost burdens on digital businesses will be the slower and more cautious introduction of new innovative products or features, and fewer companies entering the sector. This will lead to less competition and less incentive to innovate, with resulting losses to consumer welfare.
|
|
|
|
|
| 30th April 2022
|
|
|
Bill compliance costs will hit smaller companies the most See article from verdict.co.uk |
|
|
|
|
| 25th April 2022
|
|
|
The UK government is actively encouraging Big Tech censorship. By Matthew Lesh See article from
spiked-online.com |
|
Surveyed porn users indicate that they are unlikely to hand over their identity documents for age verification
|
|
|
| 22nd April 2022
|
|
| See article from techradar.com
|
So what will porn users do should their favourite porn site succumb to age verification? Will they decide to use a VPN, or else try Tor, or perhaps exchange porn with their friends, or perhaps there will be an opportunity for a black market to spring up.
Another option would be to seek out lesser known foreign porn sites that can fly under the radar. All of these options seem more likely than users dangerously handing over identity documents to any porn website that asks. According to a new survey
from YouGov, 78% of the 2,000 adults surveyed would not be willing to verify their age to access adult websites by uploading a document linked to their identity such as a driver's license, passport or other ID card. Of the participants who believe
that visiting adult websites can be part of a healthy sexual lifestyle, just 17% are willing to upload their ID. The main reasons for their decisions were analysed. 64% just don't trust the companies to keep their data safe while 63% are scared their
information could end up in the wrong hands. 49% are concerned about adult websites suffering data breaches which could expose their personal information. Director of the privacy campaigner Open Rights Group, Jim Killock explained in a press release
that those who want to access adult websites anonymously will just use a VPN if the UK's Online Safety legislation passes, saying: The government assumes that people will actually upload their ID to access adult content. The data shows that this is a
naive assumption. Instead, adults will simply use a VPN (as many already do) to avoid the step, or they'll go to smaller, unmoderated sites which exist outside the law. Smaller adult sites tend to be harder to regulate and could potentially expose
users -- including minors -- to more extreme or illegal content. |
|
The UK government's Online Censorship Bill will get a second reading debate in the House of Commons on Tuesday 19th April
|
|
|
| 18th
April 2022
|
|
| See press release from gov.uk
|
Repressive new censorship laws return to Parliament for their second reading this week.

- Online censorship legislation will be debated in the Commons
- Comes as new plans to support some people and fight deemed falsities online are launched
- Funding boost will help people's critical thinking online through a new expert Media Literacy Taskforce alongside proposals to pay for training for teachers and library workers

Parliamentarians will debate the government's groundbreaking Online Censorship Bill which requires social media platforms, search engines and other apps and websites allowing people to post content to censor 'wrong think' content. Ofcom, the official state censor, will have the power to fine companies failing to comply with
the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites. Crucially, the laws have strong measures to safeguard children from harmful content such as pornography and child sexual
abuse. |
|
|
|
|
| 2nd April
2022
|
|
|
Online Safety Bill: What issues does the Bill pose for UK businesses operating online? See article from lexology.com
|
|
|
|
|
|
20th March 2022
|
|
|
The UK's Online Safety Bill is an authoritarian nightmare. By Fraser Myers See article from spiked-online.com
|
|
UK Government introduces its Online Censorship Bill which significantly diminishes British free speech whilst terrorising British businesses with a mountain of expense and red tape
|
|
|
| 17th
March 2022
|
|
| See press release from gov.uk See
bill progress from bills.parliament.uk See bill
text [pdf] from publications.parliament.uk |
The UK government's new online censorship laws have been brought before parliament. The Government wrote in its press release: The Online Safety Bill marks a milestone in the fight for a new digital age which is safer for users and
holds tech giants to account. It will protect children from harmful content such as pornography and limit people's exposure to illegal content, while protecting freedom of speech. It will require social media platforms, search
engines and other apps and websites allowing people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions. The regulator Ofcom will have the power to fine companies
failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites. Today the government is announcing that executives whose companies fail to
cooperate with Ofcom's information requests could now face prosecution or jail time within two months of the Bill becoming law, instead of two years as it was previously drafted. A raft of other new offences have also been added
to the Bill to make in-scope companies' senior managers criminally liable for destroying evidence, failing to attend or providing false information in interviews with Ofcom, and for obstructing the regulator when it enters company offices.
In the UK, tech industries are blazing a trail in investment and innovation. The Bill is balanced and proportionate with exemptions for low-risk tech and non-tech businesses with an online presence. It aims to increase people's trust
in technology, which will in turn support our ambition for the UK to be the best place for tech firms to grow. The Bill will strengthen people's rights to express themselves freely online and ensure social media companies are not
removing legal free speech. For the first time, users will have the right to appeal if they feel their post has been taken down unfairly. It will also put requirements on social media firms to protect journalism and democratic
political debate on their platforms. News content will be completely exempt from any regulation under the Bill. And, in a further boost to freedom of expression online, another major improvement announced today will mean social
media platforms will only be required to tackle 'legal but harmful' content, such as exposure to self-harm, harassment and eating disorders, set by the government and approved by Parliament. Previously they would have had to
consider whether additional content on their sites met the definition of legal but harmful material. This change removes any incentives or pressure for platforms to over-remove legal content or controversial comments and will clear up the grey area
around what constitutes legal but harmful. Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.

Bill introduction and changes over the last year

The Bill will be introduced in the Commons today. This is the first step in its passage through Parliament to become law, beginning a new era of accountability online. It follows a period in which the government
has significantly strengthened the Bill since it was first published in draft in May 2021. Changes since the draft Bill include:
- Bringing paid-for scam adverts on social media and search engines into scope in a major move to combat online fraud.
- Making sure all websites which publish or host pornography, including commercial sites, put robust checks in place to ensure users are 18 years old or over.
- Adding new measures to clamp down on anonymous trolls to give people more control over who can contact them and what they see online.
- Making companies proactively tackle the most harmful illegal content and criminal activity quicker.
- Criminalising cyberflashing through the Bill.
Criminal liability for senior managers

The Bill gives Ofcom powers to demand information and data from tech companies, including on the role of their algorithms in selecting and displaying content, so it
can assess how they are shielding users from harm. Ofcom will be able to enter companies' premises to access data and equipment, request interviews with company employees and require companies to undergo an external assessment of
how they're keeping users safe. The Bill was originally drafted with a power for senior managers of large online platforms to be held criminally liable for failing to ensure their company complies with Ofcom's information requests
in an accurate and timely manner. In the draft Bill, this power was deferred and so could not be used by Ofcom for at least two years after it became law. The Bill introduced today reduces the period to two months to strengthen
penalties for wrongdoing from the outset. Additional information-related offences have been added to the Bill to toughen the deterrent against companies and their senior managers providing false or incomplete information. They
will apply to every company in scope of the Online Safety Bill. They are:
- offences for companies in scope and/or employees who suppress, destroy or alter information requested by Ofcom;
- offences for failing to comply with, obstructing or delaying Ofcom when exercising its powers of entry, audit and inspection, or providing false information;
- offences for employees who fail to attend or provide false information at an interview.
Falling foul of these offences could lead to up to two years' imprisonment or a fine. Ofcom must treat the information gathered from companies sensitively. For example, it will not be able to share or publish
data without consent unless tightly defined exemptions apply, and it will have a responsibility to ensure its powers are used proportionately.

Changes to requirements on 'legal but harmful' content

Under the draft Bill, 'Category 1' companies - the largest online platforms with the widest reach including the most popular social media platforms - must address content harmful to adults that falls below the threshold of a criminal offence.
Category 1 companies will have a duty to carry out risk assessments on the types of legal harms against adults which could arise on their services. They will have to set out clearly in their terms of service how they will deal with such
content and enforce these terms consistently. If companies intend to remove, limit or allow particular types of content they will have to say so. The agreed categories of legal but harmful content will be set out in secondary
legislation and subject to approval by both Houses of Parliament. Social media platforms will only be required to act on the priority legal harms set out in that secondary legislation, meaning decisions on what types of content are harmful are not
delegated to private companies or at the whim of internet executives. It will also remove the threat of social media firms being overzealous and removing legal content because it upsets or offends someone even if it is not
prohibited by their terms and conditions. This will end situations such as the incident last year when TalkRadio was forced offline by YouTube for an "unspecified" violation and it was not clear on how it breached its terms and conditions.
The move will help uphold freedom of expression and ensure people remain able to have challenging and controversial discussions online. The DCMS Secretary of State has the power to add more categories of
priority legal but harmful content via secondary legislation should they emerge in the future. Companies will be required to report emerging harms to Ofcom.

Proactive technology

Platforms may need to
use tools for content moderation, user profiling and behaviour identification to protect their users. Additional provisions have been added to the Bill to allow Ofcom to set expectations for the use of these proactive technologies
in codes of practice and force companies to use better and more effective tools, should this be necessary. Companies will need to demonstrate they are using the right tools to address harms, they are transparent, and any
technologies they develop meet standards of accuracy and effectiveness required by the regulator. Ofcom will not be able to recommend these tools are applied on private messaging or legal but harmful content.

Reporting child sexual abuse

A new requirement will mean companies must report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency. The CSEA reporting requirement
will replace the UK's existing voluntary reporting regime and reflects the Government's commitment to tackling this horrific crime. Reports to the National Crime Agency will need to meet a set of clear standards to ensure law
enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content. In-scope companies will need to
demonstrate existing reporting obligations outside of the UK to be exempt from this requirement, which will avoid duplication of companies' efforts. |
| |