Online Harms White Paper

UK Government seeks to censor social media



 

Policing the wild west...

Status report on the government's plans to introduce an internet censor for social media


30th January 2019
The UK government is rushing to complete a draft internet censorship law particularly targeting social media, but key details of the proposal have yet to be finalised amid concerns about stifling innovation.

Government officials have been meeting with industry players, MPs, peers and other groups over the past month as they try to finalise their proposals.

People involved in those discussions said there is now broad agreement about the need to impose a new duty of care on big tech companies, as well as the need to back up their terms and conditions with the force of law.

A white paper is due to be published by the end of winter. But the Department for Digital, Culture, Media and Sport, which is partly responsible for writing up the new rules alongside the Home Office, is still deliberating over key aspects with just weeks to go until the government said it would unveil an outline of its proposals.

Among the sticking points are worries that regulation could stifle innovation in one of the UK economy's most thriving sectors and concerns over whether it can keep pace with rapid technological change. Another is ensuring sufficient political support to pass the law despite likely opposition from parts of the Conservative Party. A third is deciding what regulatory agency would ultimately be responsible for enforcing the so-called Internet Safety Law.

A major unresolved question is what censorship body will be in charge of enforcing laws that could expose big tech companies to greater liability for hosted content, a prospect that firms including Google and Facebook have fought at the European level.

Several people who spoke to POLITICO said the government does not appear to have settled on who would be the censor. The communications regulator Ofcom is very much in the mix, although there are concerns that Ofcom is already getting too big.

 

 

Updated: As always, increased red tape benefits the largest (i.e. US) companies...

Daily Mail reports on government discussion about a new internet censor, codenamed Ofweb


6th February 2019
Wrangling in Whitehall has held up plans to set up a social media censor dubbed Ofweb, The Mail on Sunday reveals.

The Government was due to publish a White Paper this winter on censorship of tech giants, but the Mail on Sunday has learnt it is still far from ready. Culture Secretary Jeremy Wright said it would be published within a month, but a Cabinet source said that timeline was wholly unrealistic. Other senior Government sources went further and said the policy document is unlikely to surface before the spring.

Key details on how a new censor would work have yet to be decided while funding from the Treasury has not yet been secured. Another problem is that some Ministers believe the proposed clampdown is too draconian and are preparing to try to block or water down the plan.

There are also concerns that technically difficult requirements would benefit the largest US companies, as smaller European companies and start-ups would not be able to afford the technology and development required.

The Mail on Sunday understands Jeremy Wright has postponed a visit to Facebook HQ in California to discuss the measures, as key details are still up in the air.

Update: The Conservatives don't have a monopoly on internet censorship...Labour agrees

6th February 2019. See article from ft.com

Labour has called for a new entity capable of taking on the likes of Facebook and Google. Tom Watson, the shadow digital secretary, will on Wednesday say a regulator should also have responsibility for competition policy and be able to refer cases to the Competition and Markets Authority.

According to Watson, any duty of care would only be effective with penalties that seriously affect companies' bottom lines. He has referred to regulators' ability to fine companies up to 4% of global turnover, or €20m, whichever is higher, for worst-case breaches of the EU-wide General Data Protection Regulation.
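To make the "whichever is higher" arithmetic concrete, the sketch below (a hypothetical illustration; the function name and example figures are not from the article) computes the worst-case cap as the larger of 4% of global annual turnover and a flat €20m.

    # Illustrative sketch only: worst-case penalty cap under the
    # "4% of global turnover or EUR 20m, whichever is higher" rule cited above.
    def worst_case_gdpr_cap(global_turnover_eur: float) -> float:
        FLAT_CAP_EUR = 20_000_000   # flat EUR 20m element of the cap
        TURNOVER_RATE = 0.04        # 4% of global annual turnover
        return max(TURNOVER_RATE * global_turnover_eur, FLAT_CAP_EUR)

    # Example: a firm with EUR 1bn turnover faces a cap of EUR 40m, since 4% of
    # turnover exceeds the EUR 20m floor; a firm with EUR 100m turnover (4% = EUR 4m)
    # is still subject to the EUR 20m figure.
    print(worst_case_gdpr_cap(1_000_000_000))  # 40000000.0
    print(worst_case_gdpr_cap(100_000_000))    # 20000000.0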

 

 

Offsite Article: A Lord Chamberlain for the internet?...


8th February 2019
Thanks, but no thanks. By Graham Smith

See article from cyberleagle.com

 

 

Duty of care: an empty concept...

The Open Rights Group comments on government moves to create a social media censor


9th February 2019

There is every reason to believe that the government and opposition are moving to a consensus on introducing a duty of care for social media companies to reduce harm and risk to their users. This may be backed by an Internet regulator, who might decide what kind of mitigating actions are appropriate to address the risks to users on different platforms.

This idea originated from a series of papers by Will Perrin and Lorna Woods and has been mentioned most recently in a Science and Technology Committee report and by NGOs including the children's charity 5Rights.

A duty of care has some obvious merits: it could be based on objective, evidence-based risks and ensure that mitigations are proportionate to those risks. It could take some of the politicisation out of the current debate.

However, it also has obvious problems. For a start, it focuses on risk rather than process. It moves attention away from the fact that interventions are regulating social media users just as much as platforms. It does not by itself tell us that free expression impacts will be considered, tracked or mitigated.

Furthermore, because a duty of care model gives so little attention to process, platform decisions that have nothing to do with risky content are not necessarily subject to better decision-making, independent appeals and so on. Rather, as has happened with German regulation, processes can remain unaffected when they fall outside a duty of care.

In practice, a lot of content which is disturbing or offensive is already banned on online platforms. Much of this would not be in scope under a duty of care, but it is precisely this kind of material that users most often complain about, whether because it is not removed when they want it gone or because it is removed incorrectly. Any model of social media regulation needs to improve these issues, but a duty of care is unlikely to touch these problems.

There are very many questions about the kinds of risk, whether to individuals in general, vulnerable groups, or society at large, and about the evidence required to trigger action. The truth is that a duty of care, if cast sensibly and narrowly, will not satisfy many of the people who are demanding action; equally, if the threshold to act is low, then it will quickly be seen to be a mechanism for wide-scale Internet censorship.

It is also a simple fact that many decisions that platforms make about legal content which is not risky are not the business of government to regulate. This includes decisions about what legal content is promoted and why. For this reason, we believe that a better approach might be to require independent self-regulation of major platforms across all of their content decisions. This requirement could be a legislative one, but the regulator would need to be independent of government and platforms.

Independent self-regulation has not been truly tried. Instead, voluntary agreements have filled its place. We should be cautious about moving straight to government regulation of social media and social media users. The government refuses to regulate the press in this way because it doesn't wish to be seen to be controlling print media. It is pretty curious that neither the media nor the government is spelling out the risks of state regulation of the speech of millions of British citizens.

That we are in this place is of course largely the fault of the social media platforms themselves, who have failed to understand the need for, and value of, transparent and accountable systems to ensure they are acting properly. That, however, just demonstrates the problem: politically weak platforms which have created monopoly positions based on data silos are now being sliced and diced at the policy table for their wider errors. It is imperative that as these government proposals progress we keep focus on the simple fact that it is end users whose speech will ultimately be regulated.

 

 

Wider definition of harm can be manipulated to restrict media freedom...

Index on Censorship responds to government plans to create a UK internet censor


22nd February 2019

Index on Censorship welcomes a report by the House of Commons Digital, Culture, Media and Sport select committee into disinformation and fake news that calls for greater transparency on social media companies' decision making processes, on who posts political advertising and on use of personal data. However, we remain concerned about attempts by government to establish systems that would regulate harmful content online given there remains no agreed definition of harm in this context beyond those which are already illegal.

A number of reports, including the government's Internet Safety Strategy green paper, have examined the issue over the past year, but none has yet been able to come up with a definition of harmful content that goes beyond speech and expression that is already illegal. DCMS recognises this in its report when it quotes the Secretary of State Jeremy Wright discussing the difficulties surrounding the definition. Despite acknowledging this, the report's authors nevertheless expect technical experts to be able to set out what constitutes harmful content, to be overseen by an independent regulator.

International experience shows that in practice it is extremely difficult to define harmful content in such a way that would target only bad speech. Last year, for example, activists in Vietnam wrote an open letter to Facebook complaining that Facebook's system of automatically pulling content if enough people complained could silence human rights activists and citizen journalists in Vietnam, while Facebook has shut down the livestreams of people in the United States using the platform as a tool to document their experiences of police violence.

Index on Censorship chief executive Jodie Ginsberg said:

It is vital that any new system created for regulating social media protects freedom of expression, rather than introducing new restrictions on speech by the back door. We already have laws to deal with harassment, incitement to violence, and incitement to hatred. Even well-intentioned laws meant to tackle hateful views online often end up hurting the minority groups they are meant to protect, stifle public debate, and limit the public's ability to hold the powerful to account.

The select committee report provides the example of Germany as a country that has legislated against harmful content on tech platforms. However, it fails to mention that the German Network Enforcement Act (NetzDG) legislated on content that was already considered illegal, or the widespread criticism of the law from, among others, the UN rapporteur on freedom of expression and groups such as Human Rights Watch. It also cites the fact that one in six of Facebook's moderators now works in Germany as practical evidence that legislation can work. Ginsberg said:

The existence of more moderators is not evidence that the laws work. Evidence would be if more harmful content had been removed and if lawful speech flourished. Given that there is no effective mechanism for challenging decisions made by operators, it is impossible to tell how much lawful content is being removed in Germany. But the fact that Russia, Singapore and the Philippines have all cited the German law as a positive example of ways to restrict content online should give us pause.

Index has reported on various examples of the German law being applied incorrectly, including the removal of a tweet by journalist Martin Eimermacher criticising the double standards of the tabloid newspaper Bild Zeitung and the blocking of the Twitter account of the German satirical magazine Titanic. The Association of German Journalists (DJV) has said the Twitter move amounted to censorship, adding that it had warned of this danger when the German law was drawn up.

Index is also concerned about the continued calls for tools to distinguish between quality journalism and unreliable sources, most recently in the Cairncross Review. While we recognise that the ability to do this as individuals and through education is key to democracy, we are worried that a reliance on a labelling system could create false positives, and mean that smaller or newer journalism outfits would find themselves rejected by the system.

 

 

Driving the internet into dark corners...

The IWF warns the government to think about unintended consequences when creating a UK internet censor


22nd February 2019

The Internet Watch Foundation's (IWF) CEO, Susie Hargreaves OBE, puts forward a voice of reason by urging politicians and policy makers to take a balanced approach to internet regulation which avoids a heavy cost to the victims of child sexual abuse.

IWF has set out its views on internet regulation ahead of the publication of the Government's Online Harms White Paper. It suggests that traditional approaches to regulation cannot apply to the internet and that human rights should play a big role in any regulatory approach.

The IWF, as part of the UK Safer Internet Centre, supports the Government's ambition to make the UK the safest place in the world to go online, and the best place to start a digital business.

IWF has a world-leading reputation in identifying and removing child sexual abuse images and videos from the internet. It takes a co-regulatory approach to combating child sexual abuse images and videos by working in partnership with the internet industry, law enforcement and governments around the world. It offers a suite of tools and services to the online industry to keep their networks safer. In the past 22 years, the internet watchdog has assessed -- with human eyes -- more than 1 million reports.

Ms Hargreaves said:

Tackling criminal child sexual abuse material requires a global multi-stakeholder effort. We'll use our 22 years' experience in this area to help the government and policy makers to shape a regulatory framework which is sustainable and puts victims at its heart. In order to do this, any regulation in this area should be developed with industry and other key stakeholders rather than imposed on them.

We recommend an outcomes-based approach where the outcomes are clearly defined and the government should provide clarity over the results it seeks in dealing with any harm. There also needs to be a process to monitor this and for any results to be transparently communicated.

But, warns Ms Hargreaves, any solutions should be tested with users, including understanding impacts on victims:

The UK already leads the world at tackling online child sexual abuse images and videos, but there is definitely more that can be done, particularly in relation to tackling grooming and livestreaming, and of course, regulating harmful content is important.

My worries, however, are about rushing into knee-jerk regulation which creates perverse incentives or unintended consequences for victims and could undo all the successful work accomplished to date. Ultimately, we must avoid a heavy cost to victims of online sexual abuse.

 

 

Putting Zuckerberg behind bars...

The Telegraph reports on the latest government thoughts about setting up a social media censor


23rd February 2019

Social media companies face criminal sanctions for failing to protect children from online harms, according to drafts of the Government's White Paper circulating in Whitehall.

Civil servants are proposing a new corporate offence as an option in the White Paper plans for a tough new censor with the power to force social media firms to take down illegal content and to police legal but harmful material.

They see criminal sanctions as desirable and as an important part of a regulatory regime, said one source, who added that there is a recognition, particularly on the Home Office side, that this needs to be a regulator with teeth. The main issue they need to satisfy ministers on is extra-territoriality: can you apply this to non-UK companies like Facebook and YouTube? The belief is that you can.

The White Paper, which is due to be published in mid-March followed by a summer consultation, is not expected to lay out as definitive a plan as previously thought. A decision on whether to create a brand new censor or use Ofcom is expected to be left open. A Whitehall source said:

Criminal sanctions are going to be put into the White Paper as an option. We are not necessarily saying we are going to do it but these are things that are open to us. They will be allied to a system of fines amounting to 4% of global turnover or €20m, whichever is higher.

Government minister Jeremy Wright told the Telegraph this week he was especially focused on ensuring that technology companies enforce minimum age standards. He also indicated the Government would fulfil a manifesto commitment to a levy on social media firms, which could fund the new censor.

 

 

Six shooters...

Internet giants respond to impending government internet censorship laws with six principles that should be followed


1st March 2019
The world's biggest internet companies, including Facebook, Google and Twitter, are represented by a trade group called The Internet Association. This organisation has written to UK government ministers to outline how they believe harmful online activity should be regulated.

The letter has been sent to the culture, health and home secretaries. It will be seen as a pre-emptive move in the coming negotiation over new rules to govern the internet. The government is due to publish a delayed White Paper on online harms in the coming weeks.

The letter outlines six principles:

  • "Be targeted at specific harms, using a risk-based approach
  • "Provide flexibility to adapt to changing technologies, different services and evolving societal expectations
  • "Maintain the intermediary liability protections that enable the internet to deliver significant benefits for consumers, society and the economy
  • "Be technically possible to implement in practice
  • "Provide clarity and certainty for consumers, citizens and internet companies
  • "Recognise the distinction between public and private communication"

Many leading figures in the UK technology sector fear a lack of expertise in government, and hardening public sentiment against the excesses of the internet, will push the Online Harms paper in a more radical direction.

Three of the key areas of debate are the definition of online harm, the lack of liability for third-party content, and the difference between public and private communication.

The companies insist that government should recognise the distinction between clearly illegal content and content which is harmful but not illegal. If these leading tech companies believe the government's definition of harm is too broad, their insistence on a distinction between illegal and harmful content may be superseded by another set of problems.

The companies also defend the principle that platforms such as YouTube permit users to post and share information without fear that those platforms will be held liable for third-party content. Another area which will be of particular interest to the Home Office is the insistence that care should be taken to avoid regulation encroaching into the surveillance of private communications.

 

 

Offsite Article: Why an internet regulator is a bad idea...


20th March 2019
We should be stripping away curbs on speech -- not adding more. By Andrew Tettenborn

See article from spiked-online.com

 

