A three-judge panel of the 9th U.S. Circuit Court of Appeals has given a thumbs down to censorship of an ad called 'Faces of Global Terrorism'.
The years-long lawsuit is over an ad based on an image created by the State Department that previously was run on county buses in the Seattle area.
The ad was submitted to King County Metro Transit (KCMT) by Pamela Geller, Robert Spencer and their organization, the American Freedom Defense Initiative.
It was first rejected on a long list of grounds, including that some of its statements were inaccurate. The statements were corrected, but KCMT still rejected it on the grounds that it disparaged some people and might disrupt the system.
The court found that KCMT's arguments were lofty ideals, but were unconstitutional in this case. The court said:
We conclude that Metro's disparagement standard discriminates, on its face, on the basis of viewpoint, the panel explained. The ruling said Metro requires the refusal of ads that disparage people, but giving offense is a viewpoint, so Metro's disparagement clause discriminates.
As for the disruption?
The transit system previously had run the largely similar ad presented by the State Department, with no ill effects.
Musician and 2010 Freemuse Award winner Ferhat Tunç has been sentenced in Turkey to one year, 11 months and 12 days in prison for 'making propaganda of a terrorist organization'. The charge relates to messages shared on Tunç's social media in December 2016, with the 'terrorist organization' referring to the Kurdistan Workers' Party and the Kurdistan Peoples Community. Tunç plans to appeal the verdict at the Court of Appeal in the next week.
Alongside this case, Tunç faces two additional trials on charges of 'publicly inciting hatred and hostility' for tweets shared on 16 April 2017, including '#WeAreNotSilent'; and 'insulting the President' through messages shared on
his social media in 2016.
Freemuse calls for a transparent, fair and impartial appeals process and for the Turkish government to drop all charges against Tunç. Freemuse Executive Director Dr Srirak Plipat said:
The sentencing of Ferhat Tunç to prison is a human rights scandal in Turkey. When a musician who sings peacefully is imprisoned for promoting terrorism, the world knows that Turkey is stepping up its efforts to silence artists and art
communities. The imprisonment of Tunç is the imprisonment of artistic freedom in Turkey.
Fox has announced that it will cut Deadpool 2 for a PG-13 re-release on December 21, 2018.
Ryan Reynolds confirmed the news on Instagram with an image that appears to be from newly-shot footage of Deadpool retelling the events of Deadpool 2 as a bedtime story to a grown-up Fred Savage, Princess Bride-style, as a framing device:
One has to think that the editors will be hard at work trying to tone down the rather caustic humour of Deadpool.
The comedian Roy Chubby Brown has been banned from performing at Middlesbrough Town Hall.
The comic believes he has been stopped from appearing at the new-look venue because his act is too saucy. He says he is angry and gutted over the move, insisting his publicity material makes the nature of his content clear, and anyone easily offended can simply stay away.
Middlesbrough Council claimed the decision was simply based on the fact the programme for the recently relaunched theatre was full.
Teesside Live readers have reacted in force to the news, with almost every one backing the local lad. One reader posting on Teesside Live said:
Most people are able to make up their own minds about whether they want to see Chubby Brown or not. No need for venues to make that decision for them. Maybe we should all boycott the town hall!
The High Court of the Indian state of Uttarakhand has been hearing a rape case and has decided that porn was to blame. So the court is looking to resurrect an internet porn ban first mooted in 2015.
On August 3rd, 2015, three years back, the Government of India had passed a notification which ordered all ISPs to ban pornographic content with immediate effect. Around 857 pornographic websites were ordered to be banned, and ISPs duly complied.
However, the Government faced massive backlash over this issue and was criticized for banning porn. Some even described this as the Talibanization of the Internet. After two days, a new notification was issued; this time, the responsibility for the porn ban was passed over to the Internet Service Providers and limited to banning child porn.
Now the Uttarakhand High Court has ordered all Internet Service Providers to immediately ban porn websites, across India. If they fail to do so, then their license can be canceled!
While hearing a recent case of gang-rape in a school at Dehradun, the bench at Uttarakhand High Court, comprising acting chief justice Rajiv Sharma and justice Manoj Tiwari, has asked the central government to strictly implement a blanket ban on
pornography sites. The Bench observed:
Unlimited access to these pornographic sites is required to be blocked/curbed to avoid an adverse influence on the impressionable minds of children.
The Nagpur Bench of Bombay High Court has ordered the Information and Broadcasting Ministry to initiate effective steps against Netflix, Amazon Prime, Hotstar and other private channels on the internet for broadcasting pornographic content, crudity, sexual or discriminatory language, and various levels of violence.
A Public Interest Litigation (PIL) was filed by Adv Divya Gontiya requesting the High Court to issue orders aimed at curbing the deluge of vulgarity, violent scenes and crude language on webseries. The PIL claims that the screening of pornographic content, vulgar gestures and talk is overriding Indian culture and morality.
The High Court has directed the concerned ministries to set up a pre-screening committee to curb crudity, sexual or discriminatory language, vulgar gestures, nudity, sex and immodesty on webseries, and to monitor webseries and advertisements before they go onto online media.
Vera Jourova is the European Commissioner for justice, consumers and gender equality. Once she opened a Facebook account. It did not go well. Jourova said at a news conference:
For a short time, I had a Facebook account. It was a channel of dirt. I didn't expect such an influx of hatred. I decided to cancel the account because I realised there will be less hatred in Europe after I do this.
Jourova's words carry more weight than most. She has a policy beef with Facebook, and also the means to enforce it. Jourova says Facebook's terms of service are misleading, and has called upon the company to clarify them. In a post Thursday on
that other channel of dirt, Twitter.com, she said:
I want #Facebook to be extremely clear to its users about how their service operates and makes money. Not many people know that Facebook has made available their data to third parties or that for instance it holds full copyright about any
picture or content you put on it.
Jourova says European authorities could sanction Facebook next year if it doesn't like what it hears from the company soon. I was quite clear that we cannot negotiate forever, she said at the news conference. We need to see the result.
Vera Jourova, the European Commissioner for justice, consumers and gender equality, has condemned a series of hard-hitting front pages in the British press after a recent Sun headline described Europe's leaders as 'EU Dirty Rats'.
Jourová bad-mouthed the media again in a press release, saying:
Media can build the culture of dialogue or sow divisions, spread disinformation and encourage exclusion.
The Brexit debate is the best example of that. Do you remember the front page of a popular British daily calling the judges the 'enemy of the people'? Or just last week, the EU leaders were called 'Dirty Rats' on another front page.
Fundamental rights must be a part of public discourse in the media. They have to belong to the media. Media are also instrumental in holding politicians to account and in defining the limits of what is 'unacceptable' in a society.
Offsite Comment: Now the EU wants to turn off the Sun
They dream of stopping populism by curbing press freedom.
The European Commission has come up with a new way to prevent people backing Brexit -- not by winning the argument, but by curbing press freedom. They want to stop the British press encouraging hatred of EU leaders and judges, and impose a European approach of smart regulation to control the views expressed by the tabloids and their supposedly non-smart readers.
Add a phone number I never gave Facebook for targeted advertising to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives' own previous statements,
the company has been using contact information that users explicitly provided for security purposes -- or that users never provided at all -- for targeted advertising.
A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, has used real-world tests to demonstrate how Facebook's latest deceptive practice works. They found that Facebook harvests user phone numbers for targeted advertising in two disturbing ways: two-factor authentication (2FA) phone numbers, and shadow contact information.

Two-Factor Authentication Is Not The Problem
First, when a user gives Facebook their number for security purposes -- to set up 2FA, or to receive alerts about new logins to their account -- that phone number can become fair game for advertisers within weeks. (This is not the first time Facebook has misused 2FA phone numbers.)
But the important message for users is: this is not a reason to turn off or avoid 2FA. The problem is not with two-factor authentication. It's not even a problem with the inherent weaknesses of SMS-based 2FA in particular. Instead, this is a
problem with how Facebook has handled users' information and violated their reasonable security and privacy expectations.
There are many types of 2FA. SMS-based 2FA requires a phone number, so you can receive a text with a second-factor code when you log in. Other types of 2FA -- like authenticator apps and hardware tokens -- do not require a phone number to work. However, until just four months ago, Facebook required users to enter a phone number to turn on any type of 2FA, even though it offers its authenticator as a more secure alternative. Other companies -- Google notable among them -- also still follow that outdated practice.
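To illustrate why authenticator apps need no phone number: such apps implement TOTP (RFC 6238), deriving each short-lived code purely from a shared secret and the clock. The sketch below is illustrative of the scheme in general, not of Facebook's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time code (RFC 6238) from a shared secret.

    No phone number is involved: the only inputs are the secret exchanged
    at setup time and the current clock.
    """
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # prints 287082
```

The login server runs the same computation and compares codes, so nothing ever needs to be sent over the phone network.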
Even with the welcome move to no longer require phone numbers for 2FA, Facebook still has work to do here. This finding has not only validated users who are suspicious of Facebook's repeated claims that we have complete control over our own
information, but has also seriously damaged users' trust in a foundational security practice.
Until Facebook and other companies do better, users who need privacy and security most -- especially those for whom using an authenticator app or hardware key is not feasible -- will be forced into a corner.

Shadow Contact Information
Second, Facebook is also grabbing your contact information from your friends. Kash Hill of Gizmodo provides an example :
...if User A, whom we'll call Anna, shares her contacts with Facebook, including a previously unknown phone number for User B, whom we'll call Ben, advertisers will be able to target Ben with an ad using that phone number, which I call shadow
contact information, about a month later.
This means that, even if you never directly handed a particular phone number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends' phone books.
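The flow described above can be sketched as a toy data model; all account names, numbers, and function names here are invented for illustration:

```python
# Toy model of "shadow contact information": a number one user uploads
# gets linked to another user's account. All names/numbers are invented.
accounts = {"anna": {"phone": "+1-555-0001"}, "ben": {}}  # Ben gave no number

shadow_info = {}  # account -> numbers learned from *other people's* uploads

def upload_address_book(contacts):
    """Simulate a user sharing their phone contacts with the platform."""
    for name, number in contacts.items():
        # A number the platform didn't have becomes linked to that account.
        if name in accounts and "phone" not in accounts[name]:
            shadow_info.setdefault(name, set()).add(number)

# Anna shares her contacts, which happen to include a number for Ben.
upload_address_book({"ben": "+1-555-0002"})

# An advertiser targeting that number now reaches Ben, who never provided it.
targetable = {user for user, nums in shadow_info.items() if "+1-555-0002" in nums}
print(targetable)  # prints {'ben'}
```

The point of the sketch is that Ben's own account settings are irrelevant: the linkage is created entirely by someone else's upload.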
Even worse, none of this is accessible or transparent to users. You can't find such shadow contact information in the contact and basic info section of your profile; users in Europe can't even get their hands on it despite explicit requirements
under the GDPR that a company give users a right to know what information it has on them.
As Facebook attempts to salvage its reputation among users in the wake of the Cambridge Analytica scandal, it needs to put its money where its mouth is. Wiping 2FA numbers and shadow contact data from non-essential use would be a good start.
Facebook plans to unveil its Portal video chat device for the home next week.
Facebook originally planned to announce Portal at its annual F8 developer conference in May of this year. But the company's scandals, including the Cambridge Analytica data breach, led executives to shelve the announcement at the last minute.
Portal will feature a wide-angle video camera, which uses artificial intelligence to recognize people in the frame and follow them as they move throughout a room. In response to the breakdown in trust for Facebook, the company has recently added
a privacy shutter which can physically block the camera.
Offsite Comment: Forget the BBC -- only Channel 5 does proper documentaries these days

You don't get quite the same production values but you don't get the PC bollocks of Bodyguard and King Arthur's Britain. By James Delingpole
Sweden's Advert Censor (RO) has criticized a Stockholm company for sexism after it used a popular meme alongside a recruitment advert.
The image, known by online communities as the Distracted Boyfriend Meme, is based on a stock photo of a man turning away from his appalled girlfriend to look at an attractive woman. Swedish ISP Bahnhof used the image alongside a jobs advert; in
their take on the meme, the boyfriend was turning away from your current workplace to stare at Bahnhof.
The censor claimed that the use of the meme was gender-discriminatory, both for presenting women as interchangeable sex objects and for presenting a stereotypical picture of men as seeing women as interchangeable. (It seems a little discriminatory itself to stereotype men as always seeing women as interchangeable.)
The original posts shared to Bahnhof's Facebook and Instagram pages received hundreds of comments. Many of these criticized the alleged sexism of the image, and the advert was reported to the advert censor.
A bylaw which will allow Turkey's state-run TV censor to extend its remit to all internet broadcasting platforms has been approved.
The Turkish state agency for monitoring, regulating, and sanctioning radio and television broadcasts (RTÜK) met on Tuesday to discuss the bylaw regarding radio and TV programs aired online. The bylaw will also require that TV stations obtain a licence from RTÜK to begin broadcasting online.
Under the leadership of the ruling Justice and Development Party (AKP), RTÜK took a strict approach with TV stations, slapping channels with large fines for what it says is 'offending societal values.' Consequently, many Turkish television
producers have opted to share their work online, but now face the same repressive censorship rules that they previously managed to avoid.
Twitter is consulting its users about new censorship rules banning 'dehumanising speech', in which people are compared to animals or objects. It said language that made people seem less than human had repercussions.
The social network already has a hateful-conduct policy but it is implemented selectively, allowing some types of insulting language to remain online. For example, countless tweets describing middle-aged white men as gammon can be found on the platform.
At present it bans insults based on a person's race, ethnicity, nationality, sexual orientation, sex, gender, religious beliefs, age, disability or medical condition, but there is an unwritten secondary rule which means that the prohibition excludes groups not favoured under the conventions of political correctness.
Twitter said it intended to prohibit dehumanising language towards people in an identifiable group because some researchers claim it could lead to real-world violence. Asked whether calling men gammon would count as dehumanising speech, the
company said it would first seek the views of its members. Twitter's announcement reads in part:
For the last three months, we have been developing a new policy to address dehumanizing language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence. Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease), but there are still Tweets many people consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy public conversation.
With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target. Many scholars have examined
the relationship between dehumanization and violence. For example, Susan Benesch has described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable, and Herbert Kelman has posited that
dehumanization can reduce the strength of restraining forces against violence.
Twitter's critics are now using the hashtag #verifiedhate to highlight examples of what they believe to be bias in what the platform judges to be unacceptable. The gammon insult gained popularity after a collage of contributors to the BBC's
Question Time programme - each middle-aged, white and male - was shared along with the phrase Great Wall of Gammon in 2017.
The scope of identifiable groups covered by the new rules will be decided after a public consultation that will run until 9 October.
PS: before filling in the consultation form, note that it was broken for me and didn't accept my submission. For the record, Melon Farmer tried to submit the comment:
This is yet another policy that restricts free speech. As always, the vagueness of the rules will allow Twitter, or its moderators, to arbitrarily apply its own morality anyway. But not to worry, the richness of language will always enable
people to dream up new ways to insult others.
A federal court considering a challenge to the Allow States and Victims to Fight Online Sex Trafficking Act of 2017, or FOSTA , dismissed the case on Monday.
EFF and partner law firms filed a lawsuit in June against the Justice Department on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist to block enforcement of FOSTA.
Unfortunately, a federal court sided with the government and dismissed Woodhull Freedom Foundation et al. v. United States. The court did not reach the merits of any of the constitutional issues, but instead found that none of the plaintiffs had
standing to challenge the law's legality.
We're disappointed and believe the decision is wrong. For example, the court failed to apply the standing principles that are usually applied in First Amendment cases in which the plaintiffs' speech is chilled. The plaintiffs are considering
their options for their next steps.
FOSTA was passed by Congress for the worthy purpose of fighting sex trafficking, but the poorly-written bill contains language that criminalizes the protected speech of those who advocate for and provide resources to adult, consensual sex
workers. Worse yet, the bill actually hinders efforts to prosecute sex traffickers and aid victims.
The lawsuit argues that FOSTA forces community forums and speakers offline for fear of criminal charges and heavy civil liability, in violation of their constitutional rights. We asked the federal court to strike down the law, though the
government argued that the plaintiffs were not likely to be subject to criminal or civil liability under the law.
The BBFC waives its previous cuts for animal cruelty
26th September 2018
The Green Inferno (aka Cannibal Holocaust 2) is a 1988 Italian action crime horror by Antonio Climati.
Starring Marco Merlo, Fabrizio Merlo and May Deseligny.
UK: Passed 15 uncut for strong violence, injury detail with previous BBFC cuts waived for:
2018 88 Films Limited [16:9] video
The BBFC has just waived its cuts to the Italian horror The Green Inferno. It was previously released in the UK, titled Cannibal Holocaust 2, with a 15 rating after BBFC cuts for animal cruelty. The previous cut, from 2002, was to remove the sight of a monkey hit with a blow dart.
A man named Pete gets a phone call from his friend, Jemma, who says she has evidence that a professor missing in the Amazon is still alive. Pete hires two men, Mark and Fred, to steal a plane and fly down to the jungle to meet with her. Once
there, they meet with Jemma and head into the jungle. The group gets the help of a young native girl to take them to the legendary Imas tribe, the tribe the professor was said to be with. However, during their search for the Imas, they run into gold hunters, who are intent on killing the tribe and stealing their treasure. Now racing against the treasure seekers to reach the Imas, they also uncover another scandal in the jungle and try to shut them both down to save the local tribe.
Angela Rayner, the shadow education secretary, has said at the Labour Party Conference that social media companies should stop letting people post abuse from anonymous accounts. Speaking at a Guardian fringe meeting, she said:
One of the first things they should do is stop anonymous accounts. Most people who send me abuse do so from anonymous accounts, and wouldn't dream of doing it in their own name.
On Sunday, Angela Rayner became the latest in a long line of politicians to suggest that anonymous social media accounts should be banned in an attempt to crack down on abusive and threatening behaviour online.
There is no doubt Rayner is sincere, and that the problem she refers to is a serious one, of which she and her colleagues have first-hand experience. The reality for many MPs and public figures is that social media is a constant barrage of abuse and threats that is far worse for women, and especially for women of colour or trans women.
Given that extensive experience of the harm caused by these accounts, it's easy to see why calling for a ban seems a reasonable thing to do. However, in reality it would do harm to a greater number of people than it would help.
Facebook's Partnership with US state-funded think tanks
Last Wednesday Facebook announced it would work with two US government-funded think tanks in order to bolster the social media giant's election integrity efforts around the globe.
The new partnership with the International Republican Institute (IRI) and the National Democratic Institute (NDI) has been described by Reuters as an initiative to slow the global spread of misinformation that could influence elections,
acknowledging that fake news sites were still read by millions.
But both the IRI and NDI are funded by the National Endowment for Democracy (NED), which has since its late Cold War era founding defined itself as a soft power wing of the US government abroad focused on democracy promotion.
Journalist Max Blumenthal recently described the NDI as a taxpayer funded organization that has interfered in elections, mobilized coups, and orchestrated public relations campaigns against nations that resist Washington's agenda.
A censorship row has blown up about a retrospective exhibition of Robert Mapplethorpe's photographic work at Serralves museum, in Oporto, Portugal
The institution's creative director João Ribas had previously stated to Público newspaper that there would be no censorship, partially-covered pieces, special rooms, or any sort of restriction to visitors motivated by age, adding that only a disclaimer would be placed at the exhibition's entry to warn the public that certain content might hurt some visitors' susceptibilities.
But a few days before the inauguration Ribas unexpectedly resigned from his position, arguing that not only were there areas with limited access to minors against his will, but also that he had been asked to remove twenty photos from the exhibition altogether -- declarations to which the museum's administration has since responded, saying this had resulted from Ribas' own decision.
The PC authorities banned the use of background allegiances as a convenient tag or adjective for terrorists. Now the high priestesses of PC have taken umbrage at replacement tags.
Media outlets had for instance tried to downplay the common denominator of islam by suggesting that terrorists were 'lone wolves'. Now the word police are claiming that the adjective 'wolf' has a positive tone, and so the media should find a new
less positive term.
The #WordsMatter campaign also complains that the use of the term 'mastermind' and nicknames such as 'the Beatles' only glorifies terrorists. The campaign also asks the media to avoid publishing images of terrorists in combat gear and using war terminology such as 'soldier', which serves to legitimise them.
The group has produced a series of short films, just released on social media, to air their opinions. The films have been produced by the Tim Parry Jonathan Ball Peace Foundation, set up in memory of the two child victims of the 1993 IRA bomb attack in Warrington. The foundation has also helped compile a Counter-Daesh dictionary.
The dictionary also urges care over using words such as jihad, jihadi, and jihadi bride, which often ignore the complex religious meanings of jihad. If reporting insists on their usage, ensure it is distinguished as 'violent jihad'.
But forcing people to use the 'correct' words doesn't really work as intended. Artificial replacement words often emphasise the obviously missing words more loudly than if the originals had been used. E.g. a news report obviously trying to avoid referencing islam shouts the unspoken connection as loudly as if it had been directly stated. Similarly, the use of 'correct' PC terms emphasises the user's political correctness, and distracts from what they are trying to say.
Ofcom has welcomed two new members to its board, after the Secretary of State for Digital, Culture, Media and Sport (DCMS) confirmed the appointment of Maggie Carver and Dr Angela Dean.
Maggie is currently Chairman of the British Board of Film Classification and the Racecourse Association. Starting her career in investment banking, Maggie has held a number of roles within the media industry - including sitting on the boards of
ITN, Channel 5 and other organisations across the public, private and not-for-profit sectors. She was previously Managing Director of Channel 4 Racing, and worked on establishing the ITV franchise Meridian Broadcasting.
Dr Angela Dean was a financial analyst of European communications and technology companies for almost 20 years. She was a managing director at investment bank Morgan Stanley, heading up its global technology research team, and also its Director for Socially Responsible Investment. She was previously a trustee of the Heritage Lottery Fund and a former member of the Museums, Libraries and Archives Council.
China has complained to Sweden over a satirical news show on Swedish state television that advised Chinese tourists how to avoid culture clashes. China complained that the show insulted the Chinese people.
The satirical programme Svenska Nyheter (Swedish News) was aired a week after police removed three Chinese citizens from a Stockholm hotel. Local media reported they had refused to leave the hotel despite the fact they were not booked to stay that night.
Geng Shuang, a spokesman for the Chinese foreign ministry said in a statement:
The [Svenska Nyheter] anchor's remarks are full of discrimination, prejudice and provocation against China and other ethnic groups, completely deviating from professional media ethics. We strongly condemn this.
The US Republican Party has apologised to Hindus after an advertisement meant to woo them ended up offending instead.
The ad, published to celebrate a Hindu festival for the elephant-headed deity Ganesha, also included the political message: Would you worship a donkey or an elephant? The choice is yours. The donkey is the political symbol of the Democrats, while the elephant is the symbol of the Republicans.
The Hindu American Foundation (HAF) called the ad problematic:
While we appreciate the Fort Bend County GOP's attempt to reach out to Hindus on an important Hindu festival, its ad -- equating Hindus' veneration of the Lord Ganesha with choosing a political party based on its animal symbol -- is problematic
and offensive, said HAF Board Member Rishi Bhutada in an official statement.
In response to the outrage, the party said that the ad was not meant to disparage Hindu customs or traditions. Jacey Jetton, chairman of the Fort Bend County Republican Party said:
We offer our sincerest apologies to anyone that was offended by the ad. Obviously, that was not the intent.
On September 13, after a five-year legal battle, the European Court of Human Rights
said that the UK government's surveillance regime--which includes the country's mass surveillance programs, methods, laws, and judges--violated the human rights to privacy and to freedom of expression. The court's opinion is the culmination
of lawsuits filed by multiple privacy rights organizations, journalists, and activists who argued that the UK's surveillance programs violated the privacy of millions.
The court's decision is a step in the right direction, but it shouldn't be the last. While the court rejected the UK's spying programs, it left open the risk that a mass surveillance regime could comply with human rights law, and it did not say
that mass surveillance itself was unlawful under the European Convention on Human Rights (a treaty that we discuss below).
But the court found that the real-world implementation of the UK's surveillance--with secret hearings, vague legal safeguards, and broadening reach--did not meet international human rights standards. The court described a surveillance regime
"incapable" of limiting its "interference" into individuals' private lives when only "necessary in a democratic society."
In particular, the court's decision attempts to rein in the expanding use of mass surveillance. Originally reserved for allegedly protecting national security or preventing serious threats, use of these programs has trickled into routine criminal
investigations with no national security element--a lowered threshold that the court zeroed in on to justify its rejection of the UK's surveillance programs. The court also said the UK's mass surveillance pipeline--from the moment data is
automatically swept up and filtered to the moment when that data is viewed by government agents--lacked meaningful safeguards.
The UK Surveillance Regime
In the UK, the intelligence agency primarily tasked with online spying is the Government Communications Headquarters (GCHQ). The agency, which is sort of the UK version of the NSA, deploys multiple surveillance programs to sweep up nearly any
type of online interaction you can think of, including emails, instant messenger chats, social media connections, online searches, browser history, and IP addresses. The GCHQ also collects communications metadata, capturing, for instance, what
time an email was sent, where it was sent from, who it was sent to, and how quickly a reply was made.
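As a toy illustration of how much this kind of metadata reveals without ever reading a message body, here is a sketch using Python's standard email module; the message, addresses, and hostnames are all invented:

```python
from email import message_from_string

# An invented message; only the headers matter to a metadata collector.
RAW = """\
Received: from mail.example.org ([203.0.113.7]) by mx.example.net; Mon, 1 Oct 2018 09:15:02 +0000
From: alice@example.org
To: bob@example.net
Date: Mon, 1 Oct 2018 09:15:00 +0000
Subject: lunch?

Body text that the metadata collector never needs to read.
"""

msg = message_from_string(RAW)
meta = {
    "from": msg["From"],                     # who sent it
    "to": msg["To"],                         # who received it
    "date": msg["Date"],                     # when it was sent
    "relay": msg["Received"].split(";")[0],  # where it came from
}
print(meta)
```

Even this header-only view exposes the sender, recipient, timing, and originating server of a conversation, which is exactly the kind of information the court examined.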
The privacy safeguards for this surveillance are dismal.
For more than a decade, the GCHQ was supposed to comply with the Regulation of Investigatory Powers Act 2000 (RIPA). Though no longer fully in effect, the law required Internet service providers to, upon government request, give access to users'
online communications in secret and to install technical equipment to allow surveillance on company infrastructure.
The UK directly collected massive amounts of data from the transatlantic, fiber-optic cables that carry Internet traffic around the world. The UK government targeted "bearers"-- portions of a single cable--to collect the data traveling
within, applied filters and search criteria to weed out data it didn't want, and then stored the remaining data for later search, use, and sharing. According to GCHQ, this surveillance was designed to target "external"
communications--online activity that is entirely outside the UK or that involves communications that leave or enter the UK--like email correspondence between a Londoner and someone overseas. But the surveillance also collected entirely
"internal" communications, like two British neighbors' emails to one another. This surveillance was repeatedly approved under months-long, non-targeted warrants. Parts of this process, the court said, were vulnerable to abuse.
(In 2016, the UK passed another surveillance law--the Investigatory Powers Act, or IPA--but the court's decision applies only to government surveillance under the prior surveillance law, RIPA.)
A Failure to Comply with Human Rights Laws
The suit's outcome can be seen as exposing a disconnect between the domestic laws allowing government surveillance in the UK and the UK's international human rights obligations.
The court took issue with the UK's failure to comply with the European Convention on Human Rights--an international treaty to protect human rights in Europe, specified in the convention's "articles." The European Court of Human Rights
(ECtHR), a regional human rights judicial body based in Strasbourg, France, issued the opinion.
Though the lawsuit's plaintiffs asserted violations of Articles 6, 8, 10, and 14, the court only found violations of Articles 8 and 10, which guarantee the right to privacy and the right to freedom of expression, respectively. The court's reasoning relied on
applicable law, government admissions, and recent court judgments.
The court found two glaring problems in the UK's surveillance regime--the entire selection process for what data the government collects, keeps, and sees, and the government's unrestricted access to metadata.
How the government chooses "bearers" for data collection should "be subject to greater oversight," the court said. By itself, this was not enough to violate Article 8's right to privacy, the court said, but it necessitated
better safeguards in the next steps--how data is filtered after initial collection and how data is later accessed.
Both those steps lacked sufficient oversight, too, the court said. It said the UK government received no independent oversight and needed "more rigorous safeguards" when choosing search criteria and selectors (things like email
addresses and telephone numbers) to look through already-collected data. And because analysts can only look at collected and filtered data, "the only independent oversight of the process of filtering and selecting intercept data for
examination" can happen afterwards through an external audit, the court said.
"The Court is not persuaded that the safeguards governing the selection of bearers for interception and the selection of intercepted material for examination are sufficiently robust to provide adequate guarantees against abuse," the
court said. "Of greatest concern, however, is the absence of robust independent oversight of the selectors and search criteria used to filter intercepted communications."
Along with related problems, including the association of related metadata to collected communications, the court concluded the surveillance program violated Article 8.
The court also looked at how the UK government accesses metadata in so-called targeted requests to communications providers. It focused on one section of RIPA and one particularly important legal phrase: "Serious crime."
The UK's domestic law, the court said, "requires that any regime permitting the authorities to access data retained by [communications services providers] limits access to the purpose of combating 'serious crime,' and that access be subject
to prior review by a court or independent administrative body."
This means that whenever government agents want to access data held by communications services providers, those government agents must be investigating a "serious crime," and government agents must also get court or administrative
approval prior to accessing that data.
Here's the problem: that language is absent from the UK's prior surveillance law for metadata requests. Instead, RIPA allowed government agencies to obtain metadata for investigations into non-serious crimes. Relatedly, metadata access for
non-serious crimes did not require prior court or independent administrative approval, compounding the invasion of privacy.
Due to this discrepancy, the court found a violation of Articles 8 and 10.
For years, intelligence agencies convinced lawmakers that their mass surveillance programs were necessary to protect national security and to prevent terrorist threats--to, in other words, fight "serious crime." But recently, that's
changed. These programs are increasingly being used for investigating seemingly everyday crimes.
In the UK, this process began with RIPA. The 2000 law was introduced in part to bring Britain's intelligence operations into better compliance with human rights law because the country's government realized that the scope of GCHQ's powers--and
any limits to it--were insufficiently defined in law.
But as soon as lawmakers began cataloguing the intelligence services' extraordinary powers to peer into everybody's lives, other parts of the government took interest: If these powers are so useful for capturing terrorists and subverting foreign
governments, why not use them for other pressing needs? With RIPA, the end result was an infamous explosion in the number of agencies able to conduct surveillance under the law. Under its terms, the government set out to grant surveillance powers
to everyone from food standards officers to local authorities investigating the illicit movement of pigs, to a degree that upset even the then-head of MI5.
The court's decision supports the idea that this surveillance expansion, if left unchecked, could be incompatible with human rights.
At more than 200 pages, the court's opinion includes a lot more than just findings of human rights violations.
Metadata collection, the court said, is just as intrusive as content collection.
Take phone call metadata, for example. Metadata reveals a person's seven-days-a-week, middle-of-the-night, 10-minute phone calls to a local suicide prevention hotline. Metadata reveals a person's phone call to an HIV testing center, followed up
with a call to their doctor, followed up with a call to their health insurance company. Metadata reveals a person's half-hour call to a gynecologist, followed by another call to a local Planned Parenthood.
The court made a similar conclusion. It said:
"For example, the content of an electronic communication might be encrypted and, even if it were decrypted, might not reveal anything of note about the sender or recipient. The related communications data, on the other hand, could reveal
the identities and geographic location of the sender and recipient and the equipment through which the communication was transmitted. In bulk, the degree of intrusion is magnified, since the patterns that will emerge could be capable of painting
an intimate picture of a person through the mapping of social networks, location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with."
The court also said that an individual's right to privacy is engaged at the initial moment their communications are collected, not, as the government argued, when their communications are accessed by a human analyst. That government
assertion betrays our very understanding of privacy and echoes a similar, disingenuous claim that our messages aren't really
"collected" until processed for government use.
Turning Towards Privacy
Modern telecommunications surveillance touches on so many parts of human rights that it will take many more international cases, or protective action by lawmakers and judges, before we can truly establish its limits, and there is plenty more
wrong with how we deal with modern surveillance than this decision covers.
This is partly why EFF and hundreds of other technical and human rights experts helped create the
Necessary and Proportionate Principles, a framework for assessing whether a state's communication surveillance practices comply with a country's human rights obligations. And it's why EFF has brought its own lawsuits to challenge mass
surveillance conducted by the NSA in the United States. (The European Court of Human Rights' opinion has no direct effect on this litigation.)
This type of work takes years, if not decades. When it comes to any court remedy, it is often said that the wheels of justice turn slowly. We can at least breathe a little easier knowing that, last week, thanks to the hard work of privacy groups
around the world, the wheels made one more turn in the right direction, towards privacy.
Religious communities in the US have tried several times to introduce technology that sanitises movies, skipping over sex, violence or strong language. Such censorship is totally voluntary and is not inflicted on others, so perhaps at first
thought it should not cause any issues. However, Hollywood has taken a strong stance against this form of movie vandalism. Presumably Hollywood doesn't appreciate the effects on word-of-mouth advertising; it wouldn't really appreciate people
bad-mouthing films that may have been rendered incomprehensible by the cutting of key scenes.
So now the influential religious community has come up with a new law proposal to legalise movie sanitisation.
Moralists of the Parents Television Council have provided a statement outlining the thinking behind the Family Movie Act Clarification Act of 2018 (HR 6816), which was introduced by Representative Mia Love, a Utah Republican, on September
13th. PTC President Tim Winter said:
It is ironic that legislation first passed in the 21st century needs to be brought into the 21st century, but that is exactly what the Family Movie Act Clarification Act will do. This bill is a long-overdue update to the Family Movie Act of 2005
and would give parents the digital ability to plug their kids' ears and cover their kids' eyes to harmful and explicit streaming content, just as the 2005 Act allows them to do via a DVD. We applaud Congresswoman Mia Love for recognizing the
need for the law to catch up with technology in order to better serve parents.
Based on stories I've heard from inside the beltway, Love and the bill's cosponsors deserve combat valor medals for weathering an intense, scorched-earth effort by Hollywood lobbyists working to prevent even the introduction of this bill, let
alone its consideration.
But why would Hollywood studios object to legislation that would allow their films to make more money? They have claimed that digital filtering is akin to piracy, but there is no piracy taking place. Parents are only skipping past the
objectionable content of movies they've purchased and are watching in the comfort of their own homes. The studios raised the same arguments over a dozen years ago when the Family Movie Act of 2005 was being considered. Those arguments were
hollow then, and they are hollow now. The only plausible reason why anyone in Hollywood would be opposed to this measure is that some sort of agenda would be obviated by the consumer.
Make no mistake: this is a win-win for Hollywood and for parents. Families would be able to protect their children from harmful content in movies they stream; and Hollywood immediately increases its revenue capacity by broadening the marketplace
for its products. Any publicly-traded studio that opposes either the spirit or the letter of this legislation is acting against its own fiduciary interests and, therefore, violating its corporate duty to shareholders.
We call on congressional leadership, both in the House and in the Senate, to deliver a Christmas present to parents and families, and pass H.R. 6816 before the end of this year.
Rafiki is a 2018 Kenya / South Africa drama by Wanuri Kahiu.
Starring Patricia Amira, Muthoni Gathecha and Jimmy Gathu.
Banned by the Kenya Film Classification Board in April 2018. The KFCB claimed the film seeks to legitimize lesbian romance.
Rafiki, which means friend in Swahili, is adapted from the 2007 Caine Prize-winning short story, Jambula Tree, by Ugandan writer Monica Arac de Nyeko. It follows two close friends, Kena and Ziki, who eventually fall in love despite their
families being on opposing sides of the political divide.
Wanuri Kahiu, the director of the banned film Rafiki, is suing Kenya's film censors to clear the way for the film to qualify as a contender for the Oscars. The suit demands that the local ban be lifted in time for her to submit the film to
be considered for an Oscar. It's also pushing to change the law that has been used to ban popular films like The Wolf of Wall Street.
For Rafiki to be eligible for a Best Foreign Language award, it needs to be shown in Kenya before September 30, The Hollywood Reporter adds. If the selection committee is given permission to screen the film to submit it to the Academy, Rafiki
could be the first Kenyan film to be nominated in that category.
Wanuri Kahiu's Rafiki has received its due praise on the film festival circuit since her film was selected to make its world premiere at Cannes earlier this year--making it the first Kenyan feature film to do so. However, the Kenya Film
Classification Board banned the film, claiming that it seeks to legitimize lesbian romance.
Update: Make love not war, court organises a 7-day truce
A Kenyan judge has lifted a ban on a film about a lesbian relationship - for a week. Judge Wilfrida Okwany decided to allow the screening of the film for seven days so that it could be submitted for the Oscars.
In order to be submitted to the Academy Awards, the film must have been publicly exhibited for at least seven consecutive days at a commercial motion picture venue.
In her ruling on Friday, Ms Okwany gave permission for the film to be shown to willing adults. She said she was not convinced that Kenya is such a weak society that its moral foundation will be shaken by seeing such a film.
But the head of the Kenya Film Classification Board, Ezekiel Mutua, was unhappy about the decision, claiming homosexuality is not our way of life.
The film's director Wanuri Kahiu, who appealed against the ban, was overjoyed with the latest decision.
The film's Twitter account announced that it will hold screenings in the Kenyan capital, Nairobi
24th September 2018. See article
Rafiki, temporarily reprieved from its ban, showed on Sunday to a cheering full-house audience in Nairobi. The cinema opened an additional screen after more than 450 people arrived.
Nairobi residents will be able to watch Rafiki during daytime-only screenings at the Prestige Cinema in the capital for a week.
The Wall Street Journal released a report that details the state of Apple's yet-to-be-unveiled streaming service. It highlights some of the difficulties Apple has faced in striking the right tone for its content, particularly when it comes to
gratuitous sex, profanity or violence.
The report opens with Apple CEO Tim Cook's reaction to Vital Signs, a show based on the life of Dr. Dre. Apple picked up the show back in 2016, but when Cook viewed it a year ago, he told Apple Music executive Jimmy Iovine that it was too
violent and that the company couldn't show it.
Apple has some big plans for its original content ambitions. It brought in two seasoned Hollywood executives to oversee its video streaming project, and invested $1 billion to develop a slate of new projects.
The WSJ report notes that Apple's preference is for family-friendly projects that appeal to a broad audience, and that it's trying to avoid wading into overly political or controversial territory with the content that it's producing -- only a
handful of those shows veer into 'TV-MA' territory. Apple's approach has caused some delays in its programming. Programme makers have been unhappy with the restrictions on their creativity and have clashed with Apple, leading to arguments.
The new head of the Police Federation John Apter, who represents 120,000 rank and file officers across England and Wales, has said his members were incredibly frustrated because they have been assigned to sort out social media spats rather
than tackling more serious crimes like burglary.
The new head explained that while resourcing remained the main issue facing policing, there was also a lack of common sense when it came to priorities.
Last week it emerged that Yorkshire Police had asked people to report insults on social media, even if they were not considered to be a hate crime. Other forces have been criticised recently for using computer programs rather than experienced
officers to decide whether a burglary is worth investigating. Such initiatives have led to criticism of the police and the observation that the service is out of touch with the public.
But Apter said nobody was more frustrated than police officers when they were prevented from attending burglaries and other serious crimes. Burglary is one of the most intrusive, horrible crimes that a householder can go through. It makes you
feel incredibly vulnerable, but people can sometimes wait days for a police response, Apter said.
Zere Asylbek has been the recipient of several death threats over her attire in the music video for her song Kyz (Girl), which was written to generate public debate on gender inequality and women's rights in Kyrgyzstan. In the video
Asylbek is seen wearing a jacket and bra.
Freemuse calls for Kyrgyz authorities to ensure the safety of Asylbek and launch a criminal investigation into the threats. Freemuse Executive Director Dr Srirak Plipat said:
It is Zere's right to use art to express herself and the issues she sees as critical for women without fear of being persecuted, threatened or harmed in any way. The government of Kyrgyzstan must protect freedom of artistic expression and ensure
that she is safe and can continue to have this important public conversation in her own country.
In a 19 September interview with Asylbek, the singer told Freemuse that there was a recent, famous case in Kyrgyzstan in which a girl, named Burulai, who was bride kidnapped--an ancient tradition where girls are kidnapped and forced into
marriage--died in police custody. The girl was left alone in a police station with the kidnapper, who subsequently killed her. She explained that cases such as this and the general situation for women in the country are what inspired her to
write and perform her song.
Asylbek shared on her Facebook page some of the threats she's received as private messages via social media. One message she received on Instagram reads: If you don't remove the video and don't apologise to the Kyrgyz people, we will kill you
soon. This will be the first and last time. Another private message reads: I will gladly join and cut your head off.
The Canadian government is seeking a company that will scour social media and the dark web for data on Canadians' use of cannabis. The request comes a few weeks before recreational pot use becomes legalized on October 17.
According to a tender posted by Public Safety Canada this week, the government wants a company to algorithmically scan Twitter, Tumblr, Facebook, Instagram, and other relevant microblogging platforms for information on Canadians' attitudes
towards legal pot and their behaviours.
The initiative will look for self-reported usage patterns (how much, what kind, and where) and activities such as buying and selling weed. The government will also be scanning social media for criminal activities associated with cannabis
use--driving under the influence, for example. The initiative will also capture metadata, such as self-reported location and demographics, but according to the tender the data must exclude individual unique identifiers.
Motherboard asked Public Safety Canada spokesperson Karine Martel about the project. She did not comment on whether information on cannabis-related crimes collected from social media will be shared with law enforcement, but noted that the work
will be conducted in compliance with the Tri-Council Policy Statement, which notes that research focusing on topics that include illegal activities depends on promises of strong confidentiality to participants.
According to a second tender the feds are also looking to keep track of Canadians buying and selling weed on so-called dark web markets. Both projects are slated to conclude on April 30, 2019.
Google bosses have forced employees to delete a confidential memo circulating inside the company that revealed details about a plan to launch a censored search engine in China, The Intercept has learned.
The memo, authored by a Google engineer, disclosed that the search system, codenamed Dragonfly, would require users to log in to perform searches, track their location -- and share the resulting history with a Chinese partner, presumably a proxy
for the government, who would have unilateral access to the data. This Chinese 'partner' would be able to edit the data controlling what should be censored.
The memo was shared earlier this month among a group of Google employees who have been organizing internal protests over the censored search system.
The Dragonfly memo reveals that a prototype of the censored search engine was being developed as an app for both Android and iOS devices, and would force users to sign in so they could use the service. The memo confirms, as The Intercept first
reported last week, that users' searches would be associated with their personal phone number. The memo adds that Chinese users' movements would also be stored, along with the IP address of their device and links they clicked on. It accuses
developers working on the project of creating spying tools for the Chinese government to monitor its citizens.
People's search histories, location information, and other private data would be sent out of China to a database in Taiwan, the memo states. But the data would also be provided to employees of a Chinese company who would be granted unilateral
access to the system.
The memo identifies at least 215 employees who appear to have been tasked with working full-time on Dragonfly, a number it says is larger than many Google projects.
Ex Google boss predicts that the internet will split into a Chinese internet and a US internet
The internet will be divided into two different worlds within the next decade -- and China will lead one of them, according to ex-Google CEO Eric Schmidt.
He notes that the control the Chinese government wields over its citizens' online access means it is incompatible with the democratic internet of the west. This means there will be two distinct versions of the world wide web by 2028, one run by
China and the other by the US.
The process is already happening, with the so-called Great Firewall of China blocking Chinese citizens from accessing several of the internet's most popular websites, including Facebook and YouTube.
China wants to expand a ban on foreign TV shows during the evening prime-time hours, according to the latest proposal by the country's media censor.
Since 2004, China has banned foreign TV movies and serials during the peak 7-10pm viewing hours. Now the National Radio and Television Administration is considering banning all foreign programmes during this peak period.
The rules will apply to free-to-air and paid channels, as well as streaming sites.
The censors speak of ideological reasoning, but maybe it's also to do with China's trade war with Donald Trump.
As China's TV gets ever more censored, many people now use streaming sites like iQiyi and Mango TV for their kicks and they are increasingly willing to pay for it. While these sites offer hit western shows such as Game of Thrones, they have also
adopted a similar strategy to Netflix by producing their own content.
But as they gain popularity they may also gain more attention from the censors.
Philosophers out seeking the truth on the Durham campus
A student editor at Durham university has been fired in a transphobia row after he tweeted that women don't have penises.
Angelos Sofocleous, assistant editor at Durham University's philosophy journal Critique, was sacked from his post for writing a tweet deemed transphobic by fellow students.
Sofocleous faced disciplinary action last month after he re-tweeted an article by The Spectator on his Twitter titled Is it a crime to say women don't have penises?, with the comment: RT if women don't have penises.
The postgraduate philosophy and psychology student was dismissed from his position at the university after the tweet sparked 'outrage'. He was also fired from his position as editor of Durham University's online magazine The Bubble , and
ironically forced to resign as president of free speech society Humanist Students.
Sofocleous bravely stood by his comments, writing:
I may be wrong and women might indeed have penises, although I don't believe that to be the case. But the backlash that took place after my comments, particularly within the organisation, convinced me that, unfortunately and surprisingly, there
are certain issues within the humanist movement which are undebatable.
No effort was made, beyond name-calling, derogatory comments, and ad hominem statements, to convince me of the truth of the other side's position.
The radio host and colourful conspiracy theorist Alex Jones has been permanently censored by Twitter.
One month after it distinguished itself from the rest of the tech industry by declining to bar the rightwing shock jock from its platform, Twitter fell in line with the other major social networks in banning Jones.
Twitter justified the censorship saying:
We took this action based on new reports of Tweets and videos posted yesterday that violate our abusive behavior policy, in addition to the accounts' past violations. We will continue to evaluate reports we receive regarding other accounts
potentially associated with @realalexjones or @infowars and will take action if content that violates our rules is reported or if other accounts are utilized in an attempt to circumvent their ban.
PayPal is the latest tech company to ban Infowars. PayPal told PC Mag:
We undertook an extensive review of the Infowars sites, and found instances that promoted hate or discriminatory intolerance against certain communities and religions, which run counter to our core value of inclusion.
InfoWars said PayPal gave it 10 days to find an alternate payment provider before terminating the service. PayPal didn't cite the specific instances of hate speech, but Infowars says the content involved criticism of Islam and opposition to
transgenderism being taught to children in schools.
Nepal's Government will soon ban porn sites in the country. The Ministry of Communication and Information Technology (MOCIT) has instructed the internet censor, the NTA, to ban porn websites and any other sexually offensive/indecent content.
The government cited an increase in the rate of rape incidents in the country as the reason for the censorship. It also claims that easy access to sexual content increases sexual violence in the country.
The ministry also requests all the ISPs, telecom operators, social media operators, and Internet users not to distribute, publish or broadcast such sexual content in the country.
Some popular porn sites have been blocked for some years, whereas others are still operating freely.
Attempting to read a censored website leads to a page simply saying: This website has been blocked as per NTA's Policy.
The UK government is preparing to establish a new internet censor that would make tech firms liable for content published on their platforms and have the power to sanction companies that fail to take down illegal material and hate speech within a set timeframe.
Under legislation being drafted by the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) due to be announced this winter, a new censorship framework for online social harms would be created.
BuzzFeed News has obtained details of the proposals, which would see the establishment of an internet censor similar to Ofcom.
Home secretary Sajid Javid and culture secretary Jeremy Wright are considering the introduction of a mandatory code of practice for social media platforms and strict new rules such as takedown times forcing websites to remove illegal hate speech
within a set timeframe or face penalties. Ministers are also looking at implementing age verification for users of Facebook, Twitter, and Instagram.
The new proposals are still in the development stage and are due to be put out for consultation later this year. The new censor would also develop new regulations controlling non-illegal content and online behaviour. The rules for what
constitutes non-illegal content will be the subject of what is likely to be a hotly debated consultation.
BuzzFeed News has also been told ministers are looking at creating a second new censor for online advertising. Its powers would include a crackdown on online advertisements for food and soft drink products that are high in salt, fat, or sugar.
BuzzFeed News understands concerns have been raised in Whitehall that the regulation of non-illegal content will spark opposition from free speech campaigners and MPs. There are also fears internally that some of the measures being considered,
including blocking websites that do not adhere to the new regulations, are so draconian that they will generate considerable opposition.
A government spokesperson confirmed to BuzzFeed News that the plans would be unveiled later this year.
Sun journalists have taken to Twitter to denounce the decision taken by The World Transformed, a political event, to not grant them press passes for its four-day festival of politics, arts and music taking place alongside the Labour party
conference in Liverpool next week.
The World Transformed released a statement explaining that this censorship was an act of solidarity with the families of the victims of the Hillsborough disaster, and a show of support for the boycott of the newspaper observed by community groups
and businesses across Liverpool.
Campaigners from Total Eclipse of the S*n and the Hillsborough Justice Campaign will appear at the festival.
Fifteen EU-based regulators plus Washington State have made a joint declaration, while an Australian-based study likens loot boxes to gambling, not baseball cards
Fifteen EU gambling regulators from the UK, Ireland, France, Austria, Poland, Latvia, the Czech Republic, Spain, the Isle of Man, Malta, Portugal, Jersey, Norway, and the Netherlands plus US representation from the Washington State Gambling
Regulator published the letter, noting their concerns with the business model.
In addition to the loot box problem, the letter addresses how it will take on websites that let players either gamble or sell in-game items like skins or weapons with real-world money.
One of the signatories, Neil McArthur, CEO of the UK Gambling Commission said:
We have joined forces to call on video games companies to address the clear public concern around the risks gambling and some video games can pose to children. We encourage video games companies to work with their gambling regulators and take
action now to address those concerns to make sure that consumers, and particularly children, are protected.
The letter speaks of the group's concerns but does not detail the direction that the group will take in addressing them.
According to VentureBeat, a study conducted by the Australian Parliament's Environment and Communications References Committee showed that there were links between loot box spending and problematic gambling. The population sample size was 7,500.
The more severe a gamer's problem gambling was, the more likely they were to spend large amounts of money on loot boxes. These results strongly support claims that loot boxes are psychologically akin to gambling, said the report, conducted by Dr.
David Zendle and Dr. Paul Cairns.
In a statement, the pair added loot boxes could potentially act as an introduction to gambling or take advantage of gambling disorders. They note that the industry tends to brush off loot boxes as similar to harmless products like baseball cards,
football/soccer stickers, and products along those lines.
In related news, games maker EA could face legal issues for ignoring a ruling by the Belgian government to remove the Ultimate Team portion from FIFA 18.
The National Secular Society has said Ireland's impending referendum on its blasphemy law should prompt global action in defence of free speech on religion.
On Tuesday evening the Dail, the lower house of the Oireachtas (Ireland's parliament), ratified a proposal to hold a referendum on the issue on Friday 26 October. The decision passed through the house unopposed.
The upper house, the Seanad, is expected to pass the legislation on Thursday.
Currently Ireland's constitution says:
The publication or utterance of blasphemous, seditious, or indecent matter is an offence which shall be punishable in accordance with law. The referendum will propose removing the word blasphemous from that article.
The minister for justice Charlie Flanagan said while the offence remained in the constitution, Ireland would be seen as keeping company with those who do not share the fundamental values we cherish such as belief in freedom of conscience and
NSS chief executive Stephen Evans urged Ireland to take a stand for free speech when the referendum takes place:
Repealing the reference to blasphemy from Ireland's constitution would be a welcome declaration of Ireland's changing attitude to religious privilege and a statement of support with free thinkers globally.
Ireland's referendum should prompt global action in defence of free speech on religion. It should send a message to the rest of the world: offending religious sensibilities is not a crime, and the world will not tolerate those who persecute
people for their thoughts and words.
Manmarziyaan is a 2018 India romance by Anurag Kashyap.
Starring Abhishek Bachchan, Vicky Kaushal and Taapsee Pannu.
The film is a love story set in Punjab, with Abhishek Bachchan, Taapsee Pannu and Vicky Kaushal in prominent roles.
The Delhi Sikh Gurudwara Management Committee (DSGMC) is staging a protest on Sunday against this week's movie release, Manmarziyaan (Husband Material), demanding a nationwide ban on the film.
The committee claims that the film has a few anti-Sikh scenes which have the potential to hurt the sentiments of the community.
DSGMC president Manjeet Singh GK said:
I believe that this movie should not be screened till makers remove the objectionable scenes from the movie.
Since ages we have been demanding that the censor board should recruit a representative of the Sikh community in their team but they haven't.
We will not tolerate this at any cost and will strongly protest against this movie.
The Delhi police have stepped up the security outside the movie theatre to prevent violence.
Meanwhile in Pakistan, Manmarziyaan has not been cleared for release by the Central Board of Film Censors. According to CBFC Chairman Danyal Gilani, all board members found the content inappropriate and agreed that the film violated
its censorship code.
However, the film was given an adults only 'A' Certificate by the Censor Boards of Sindh and Punjab.
Manmarziyaan was released in India, the USA and Australia on September 14, after a world premiere at the Toronto International Film Festival on September 8.
Update: The producers have decided to cut the film for national release.
Ofcom has published a prospectus angling for a role as the UK internet censor. It writes:
Ofcom has published a discussion document examining the area of harmful online content.
In the UK and around the world, a debate is underway about whether regulation is needed to address a range of problems that originate online, affecting people, businesses and markets.
The discussion document is intended as a contribution to that debate, drawing on Ofcom's experience of regulating the UK's communications sector, and broadcasting in particular. It draws out the key lessons from the regulation of content
standards -- for broadcast and on-demand video services -- and the insights that these might provide to policy makers into the principles that could underpin any new models for addressing harmful online content.
The UK Government intends to legislate to improve online safety, and to publish a White Paper this winter. Any new legislation is a matter for Government and Parliament, and Ofcom has no view about the institutional arrangements that might
Alongside the discussion paper, Ofcom has published joint research with the Information Commissioner's Office on people's perception, understanding and experience of online harm. The survey of 1,686 adult internet users finds that 79% have
concerns about aspects of going online, and 45% have experienced some form of online harm. The study shows that protection of children is a primary concern, and reveals mixed levels of understanding around what types of media are regulated.
The sales pitch is more or less that Ofcom's TV censorship has 'benefited' viewers so would be a good basis for internet censorship.
Ofcom particularly makes a point of pushing the results of a survey of internet users and their 'concerns'. The survey is very dubious and ends up suggesting that 79% of users have concerns about going online.
And maybe this claim is actually true. After all, the Melon Farmers are amongst the 79% who have concerns about going online. The Melon Farmers are concerned that:
There are vast amounts of scams and viruses waiting to be filtered out from the Melon Farmers' email inbox every day.
The authorities never seem interested in doing anything whatsoever about protecting people from being scammed out of their life savings. Have you EVER heard of the police investigating a phishing scam?
On the other hand the police devote vast resources to prosecuting internet insults and jokes, whilst never investigating scams that see old folks lose their life savings.
So yes, there is concern about the internet. BUT, it would be a lie to infer that these concerns mean support for Ofcom's proposals to censor websites along the lines of TV.
In fact, looking at the figures, some of the larger categories of 'concern' are more about fears of real crime than about issues like fake news.
Interestingly, Ofcom has published how the 'concerns' were hyped up by prompting the survey respondents a bit. For instance, Ofcom reports that 12% of internet users say they are 'concerned' about fake news without being prompted. With a little prompting by
the interviewer, the number of people reporting being concerned about fake news magically increases to 29%.
It also has to be noted that there are NO reports in the survey of internet users concerned about a lack of balancing opinions in news, a lack of algorithm transparency, a lack of trust ratings for news sources, or indeed most of the other
suggestions that Ofcom addresses.
I've seen more fake inferences in the Ofcom discussion document than I have seen fake news items on the internet in the last ten years.
The play Stitching has opened at the Unifaun Theatre in Malta for a two-week run. But Stitching is not your average piece of theatre; it's taken 10 years, international coverage, and even a literal EU court case to get this show up and running.
Ten years ago, in October 2008, local theatre producer Adrian Buckle sent an email to playwright Anthony Neilson, asking for permission to produce his play Stitching in Malta. Neilson duly granted Unifaun the rights to a performance of his play.
Buckle booked a slot at a local theatre, hired the cast and informed the Board for Film and Stage Classification, expecting to be issued an age-rating certificate for the piece. However, instead of receiving an age certification, Buckle
received a certificate that simply stated the play had been Banned and disallowed, with no explanation or reason provided. Thus began a ten-year-long battle that finally brings us to this year's production.
However, the team at Unifaun would not stand for this lack of explanation; they chased for an answer, and in January 2009 the police commissioner delivered a letter that detailed the reasons:
Blasphemy against the State Religion
Obscene contempt for the victims of Auschwitz
An encyclopaedic review of dangerous sexual perversions leading to sexual servitude
Abby's eulogy to the child murderers Fred and Rosemary West
Reference to the abduction, sexual assault and murder of children
In conclusion, the play is a sinister tapestry of violence and perversion where the sum of the parts is greater than the whole. The Board feels that in this case the envelope has been pushed beyond the limits of public decency.
The censorship became major news in Malta and it was decided by the politicians at the time that the established censorship system was no longer compatible with EU human rights requirements, notably Article 10 of the Convention for the Protection
of Human Rights:
Freedom of expression constitutes one of the essential foundations of such a society, one of the basic conditions for its progress and for the development of every man [...] it is applicable not only to 'information' or 'ideas' that are
favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population.
The country's censorship laws were rewritten without calling on the services of stage censors. Film censorship was also reformed with new rules based on the UK's, which are at least significantly more free than before.
Yes, the play is crude. Yes, they swear a lot. Yes, they talk about child murderers. Yes, they use a dildo on stage. Yes, they describe sexual acts very explicitly. Yes, it probably made people very uncomfortable. That is why performances are
given an age certification. That is not a reason to censor an artist.
Three performances have passed so far and the world has not ended. Nobody has walked out of the theatre mid-performance in a fit of rage.
Eight local councils have now decided to overturn a film's BBFC 15 age rating so younger viewers can watch it.
The documentary A Northern Soul was rated 15 by the BBFC for strong language. The BBFC commented:
It includes around 20 uses of strong language and therefore exceeds by some margin anything we have ever permitted at 12A.
The film follows Steve, who struggles to make ends meet as he tries to teach hip-hop to children in Hull schools with his Beats Bus.
So far, licensing committees in Hull, Lambeth, Leeds, Liverpool, Sheffield, Southampton, Hackney and Calderdale have downgraded A Northern Soul from a 15 to a 12A.
Phil Bates, licensing manager at Southampton City Council, said he viewed the film differently because it's a documentary rather than a drama. He explained:
We can see why BBFC awarded a 15 rating, although equally we can see why other authorities have also granted it a 12A.
The use of profane language is fairly infrequent, some of it was used at a time of stress but there were occasions when it was used as everyday language. As this is a fly-on-the-wall style film, showing life as it is, rather than a scripted film
where the language is used for effect, we felt the film warranted a 12A.
Director Sean McAllister spoke of the councils' decisions: I think they're responding as human beings. He added that Steve's language was credible and real and culturally embedded within how he speaks. He continued:
The irony is that the motivation for making this film and the heart of why this film should be seen has got the thing censored.
When people actually see it, everyone's saying 'where's the swearing?' They [the BBFC] have done a word count, which is an F count, and they've simply censored it based on that. And they've got to get over that.
When in Mission Impossible people are having their heads blown off and 12As are being granted, the whole thing is hypocritical, backward and needs reassessing.
The BBFC repeated its mantra that its classification guidelines are the result of a large scale public consultation designed to reflect broad public opinion across the UK. But in reality the 'large scale' part of its public consultation asks a
few broad brush questions about whether people generally agree with the BBFC about ratings. The questions do not offer any more nuanced insight into what people think about swearing in the context of the everyday parlance of some working-class communities.
The Nun is a 2018 USA horror mystery thriller by Corin Hardy.
Starring Taissa Farmiga, Bonnie Aarons and Charlotte Hope.
When a young nun at a cloistered abbey in Romania takes her own life, a priest with a haunted past and a novitiate on the threshold of her final vows are sent by the Vatican to investigate. Together they uncover the order's unholy secret.
Risking not only their lives but their faith and their very souls, they confront a malevolent force in the form of the same demonic nun that first terrorized audiences in 'The Conjuring 2,' as the abbey becomes a horrific battleground between
the living and the damned.
Lebanon's film censors have banned the new horror movie, The Nun, from a cinema release. The censors claimed that the film was offensive to the Christian faith.
The Warner Bros production was awaiting a screening licence from the General Security's censorship committee ahead of an expected release on 6 September. However last Wednesday, the Catholic committee watched the movie and asked the General
Security to ban it in Lebanon for religious reasons.
It is unclear which scenes caused 'the offence', but some believe the ban may stem from the victimisation of nuns in the film.
According to the constitution, multi-religious Lebanon can impose censorship on local and international productions for a number of reasons. These include banning films for stirring religious and political sensitivities as well as those with
sexually explicit content.
Tony Hall, the BBC's director general, has repeated his call for global streaming companies such as Netflix and Amazon to suffer the same censorship as the UK's traditional broadcasters -- or else risk killing off distinctive British content. He said to
the Royal Television Society's London conference:
It cannot be right that the UK's media industry is competing against global giants with one hand tied behind its back.
In so many ways -- prominence, competition rules, advertising, taxation, content regulation, terms of trade, production quotas -- one set of rules applies to UK companies, and barely any apply to the new giants. That needs rebalancing, too. We
stand ready to help, where we can.
Hall used the speech to warn that young British audiences now spend almost as much time watching Netflix -- which only launched its UK streaming service in 2012 -- as watching BBC television and iPlayer combined.
Citing Ofcom figures, Hall warned that Britain's public service broadcasters have cut spending on content in real terms by around £1bn since 2004. He said that global streaming companies are not spending enough on British productions to make up
the difference, while their UK-based productions tend to focus on material which has a global appeal rather than a distinctly British flavour. Hall added:
This isn't just an issue for us economically, commercially or as institutions. There is an impact on society. The content we produce is not an ordinary consumer good. It helps shape our society. It brings people together, it helps us understand
each other and share a common national story.
Theatre director Maryam Kazemi and theatre manager Saeed Assadi were detained by Iranian authorities over a video trailer for a production of Shakespeare's A Midsummer Night's Dream on 9 September 2018.
The trailer features men and women dancing together, which is illegal in Iran.
Cultural censorship official Shahram Karami said the issue was with the type of music played and the actors' movements used in the trailer.
Both men were later bailed on surety of about $23,000 each.
The decades-long Turkish tradition of watching a classic American cowboy film on Sunday morning came to an end in August 2018, with state-run broadcaster TRT giving them the boot as US-Turkey relations deteriorate.
American Westerns have been shown at 9.55am on Sundays since the 1980s; according to NRT News , the John Wayne film Big Jake that aired on 19 August was the last.
TRT will now show films supported by the Turkish Ministry of Culture in that timeslot.
The change comes after a diplomatic dispute over US pastor Andrew Brunson, who is under house arrest on charges relating to the 2016 attempted coup in Turkey.
Arab News says the decision comes after the Turkish media censor, Radio and Television Supreme Council, warned about the expansion of American imperialism and culture through movies.
Israel's public broadcaster has apologised to listeners after playing part of an opera by German composer Richard Wagner on 31 August 2018.
Classical music radio station Kol HaMusica (the Voice of Music) said its editor erred in choosing to play the final act of Wagner's Goetterdaemmerung (Twilight of the Gods) opera, which goes against the broadcaster's long-standing
directive not to play any music by the controversial 19th-century figure, who was Adolf Hitler's favourite composer.
Wagner's music has been unofficially banned in what is now Israel since 1938. In addition to composing music, Wagner also wrote a pamphlet called Judaism in Music, in which he said that the Jew was incapable of artistic expression.
Tech companies that fail to remove terrorist content quickly could soon face massive fines. The European Commission proposed new rules on Wednesday that would require internet platforms to remove illegal terror content within an hour of it being
flagged by national authorities. Firms could be fined up to 4% of global annual revenue if they repeatedly fail to comply.
Facebook (FB), Twitter (TWTR) and YouTube owner Google (GOOGL) had already agreed to work with the European Union on a voluntary basis to tackle the problem. But the Commission said that progress has not been sufficient.
A penalty of 4% of annual revenue for 2017 would translate to $4.4 billion for Google parent Alphabet and $1.6 billion for Facebook.
The proposal is the latest in a series of European efforts to control the activities of tech companies.
The terror content proposal needs to be approved by the European Parliament and EU member states before becoming law.
The European Court of Human Rights (ECtHR) has found that the UK's mass surveillance programmes, revealed by NSA whistleblower Edward Snowden, did not meet the quality of law requirement and were incapable of keeping the interference
to what is necessary in a democratic society.
The landmark judgment marks the Court's first ruling on UK mass surveillance programmes revealed by Mr Snowden. The case was started in 2013 by campaign groups Big Brother Watch, English PEN, Open Rights Group and computer science expert Dr
Constanze Kurz following Mr Snowden's revelation of GCHQ mass spying.
Documents provided by Mr Snowden revealed that the UK intelligence agency GCHQ were conducting population-scale interception, capturing the communications of millions of innocent people. The mass spying programmes included TEMPORA, a bulk data
store of all internet traffic; KARMA POLICE, a catalogue including a web browsing profile for every visible user on the internet; and BLACK HOLE, a repository of over 1 trillion events including internet histories, email and instant messenger
records, search engine queries and social media activity.
The applicants argued that the mass interception programmes infringed UK citizens' rights to privacy protected by Article 8 of the European Convention on Human Rights as the population-level surveillance was effectively indiscriminate, without
basic safeguards and oversight, and lacked a sufficient legal basis in the Regulation of Investigatory Powers Act (RIPA).
In its judgment, the ECtHR acknowledged that bulk interception is by definition untargeted; that there was a lack of oversight of the entire selection process, and that safeguards were not sufficiently robust to provide adequate
guarantees against abuse.
In particular, the Court noted concern that the intelligence services can search and examine "related communications data" apparently without restriction -- data that identifies senders and recipients of communications, their
location, email headers, web browsing information, IP addresses, and more. The Court expressed concern that such unrestricted snooping could be capable of painting an intimate picture of a person through the mapping of social networks,
location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with.
The Court acknowledged the importance of applying safeguards to a surveillance regime, stating:
In view of the risk that a system of secret surveillance set up to protect national security may undermine or even destroy democracy under the cloak of defending it, the Court must be satisfied that there are adequate and effective guarantees against abuse.
The Government passed the Investigatory Powers Act (IPA) in November 2016, replacing the contested RIPA powers and controversially putting mass surveillance powers on a statutory footing.
However, today's judgment that indiscriminate spying breaches rights protected by the ECHR is likely to provoke serious questions as to the lawfulness of bulk powers in the IPA.
Jim Killock, Executive Director of Open Rights Group said:
Viewers of the BBC drama, the Bodyguard, may be shocked to know that the UK actually has the most extreme surveillance powers in a democracy. Since we brought this case in 2013, the UK has actually increased its powers to indiscriminately
surveil our communications whether or not we are suspected of any criminal activity.
In light of today's judgment, it is even clearer that these powers do not meet the criteria for proportionate surveillance and that the UK Government is continuing to breach our right to privacy.
Silkie Carlo, director of Big Brother Watch said:
This landmark judgment confirming that the UK's mass spying breached fundamental rights vindicates Mr Snowden's courageous whistleblowing and the tireless work of Big Brother Watch and others in our pursuit for justice.
Under the guise of counter-terrorism, the UK has adopted the most authoritarian surveillance regime of any Western state, corroding democracy itself and the rights of the British public. This judgment is a vital step towards protecting millions
of law-abiding citizens from unjustified intrusion. However, since the new Investigatory Powers Act arguably poses an ever greater threat to civil liberties, our work is far from over.
Antonia Byatt, director of English PEN said:
This judgment confirms that the British government's surveillance practices have violated not only our right to privacy, but our right to freedom of expression too. Excessive surveillance discourages whistle-blowing and discourages investigative
journalism. The government must now take action to guarantee our freedom to write and to read freely online.
Dr Constanze Kurz, computer scientist, internet activist and spokeswoman of the German Chaos Computer Club said:
What is at stake is the future of mass surveillance of European citizens, not only by UK secret services. The lack of accountability is not acceptable when the GCHQ penetrates Europe's communication data with their mass surveillance techniques.
We all have to demand now that our human rights and more respect of the privacy of millions of Europeans will be acknowledged by the UK government and also by all European countries.
Dan Carey of Deighton Pierce Glynn, the solicitor representing the applicants, stated as follows:
The Court has put down a marker that the UK government does not have a free hand with the public's communications and that in several key respects the UK's laws and surveillance practices have failed. In particular, there needs to be much
greater control over the search terms that the government is using to sift our communications. The pressure of this litigation has already contributed to some reforms in the UK and this judgment will require the UK government to look again at
its practices in this most critical of areas.
John Carpenter's Halloween just downrated to 15 for cinema release
14th September 2018
Halloween is a 1978 USA horror by John Carpenter.
Starring Donald Pleasence, Jamie Lee Curtis and Tony Moran.
John Carpenter's Halloween has just been passed 15 uncut for strong threat, violence, nudity for 2018 cinema release.
It has always been uncut in the UK and US. It was 18 rated until 2018, when it was passed 15 for a cinema release.
The previous BBFC 18 rating has been widely questioned for some time now. The film seems to compare with 15-rated horrors rather than 18-rated horrors, but perhaps the quality of the filmmaking makes it a bit more scary than if judged by the violence you actually see.
The film has not yet been submitted for reconsideration on video, so is still nominally 18 rated for 2018 home video releases.
Uncut and MPAA R rated in the US. There is also an Extended TV cut.
The National Secular Society has criticised Brent Council in north London for removing an advert featuring a Hindu temple. The council took down a poster, which advertised the area as the London Borough of Culture 2020, after complaints from
the Hindu Council UK.
Dipa Das, a councillor in neighbouring Tower Hamlets who is also a representative member of the Hindu Council, complained about the poster on Twitter. Das tweeted to Brent Council's customer services team:
Absolutely disgraceful way of promoting that you are a borough of culture, image of any place of worship on the toilet is totally unacceptable, I urge the council to take immediate action and remove the temple picture from the toilet.
In response the council's team apologised and removed the posters:
We apologise sincerely for this error as we recognise that the locations of some of the JCDecaux advertising sites were not appropriate given the content of this campaign, no offence was intended.
NSS spokesperson Chris Sloggett said the decision was a pathetic surrender to demands for a blasphemy code and a waste of council resources:
Brent Council has given in to an unreasonable religious demand. It has taken the easy way out but in the process it's placed an unreasonable restriction on the freedom to advertise. And it's weakened its own ability, and the ability of other
councils, to stand up for free expression.
Upholding blasphemy codes doesn't create social harmony. It weakens it by encouraging religious groups of all stripes, and others, to insist that their hurt feelings also be recognised. It also wastes public resources.
Councils, the state and wider society need to get much better at telling religious groups to accept the fact that sometimes in a free society they will be offended.
The National Secular Society has called for Spain to abolish its blasphemy laws following the detainment of an actor accused of offending religious sentiment.
Willy Toledo, a cinema and television actor, was detained under a Madrid court order after he ignored summonses for questioning, arguing he had not committed any offence and so there was no need to appear before a judge.
Toledo was summoned twice over a Facebook post he wrote in July 2017, in which he defended three women who were
prosecuted under blasphemy laws because they simulated a religious procession with a giant plastic vagina as part of a feminist protest. In his post Toledo said:
I shit on God, and I have enough shit left over to shit on the dogma of the sanctity and virginity of the Virgin Mary. This country is unbearably shameful.
In response, the Spanish Association of Christian Lawyers filed a complaint against Toledo for shitting on the dogma, and because his words were an offense against religious sentiment.
Following his release from the court Toledo told reporters:
I am doing what I have to do, which is to draw attention to this, because it is shameful that there are still five articles in the criminal code related to religious sentiments.
Dozens of Spanish citizens applauded and shouted Me cago en Dios (I shit on God) as he left the courthouse. Some Twitter users have started using the hashtag #MeCagoEnDios to express their support. Oscar-winner Javier Bardem has also spoken out
in defence of Toledo.
Stephen Evans, Executive Director at the National Secular Society, joined in the criticism of Spain's blasphemy laws saying:
The existence of a law that outlaws offending or derision of religious feelings, dogmas, beliefs or rituals shames Spain. Blasphemy laws are an affront to free expression and should be consigned to history. Let's hope the arrest of Willy Toledo
precipitates the demise of Spain's arbitrary restrictions on speech.
Note that the expression shit on God (cagarse en Dios) is commonplace in everyday Spanish discourse.
Article 525 of the Spanish Penal Code forbids the defamation of any individual's or group's religious sentiments, beliefs, or practices, setting out monetary fines for those who offend religious people. The law tends to be used to defend Catholic sensibilities.
The European Parliament has voted to approve new copyright powers enabling the big media industry to control how their content is used on the internet.
Article 11 introduces the link tax which lets news companies control how their content is used. The target of the new law is to make Google pay newspapers for its aggregating Google News service. The collateral damage is that millions of
websites can now be harangued for linking to and quoting articles, or even just sharing links to them.
Article 13 introduces the requirements for user content sites to create censorship machines that pre-scan all uploaded content and block anything copyrighted. The original proposal, voted on in June, directly specified that content hosts use
censorship machines (or filters as the EU prefers to call them). After a cosmetic rethink since June, the law no longer specifies automatic filters, but instead specifies that content hosts are responsible for any copyrighted material they publish. And of course
the only feasible way that content hosts can ensure they are not publishing copyrighted material is to use censorship machines anyway. The law was introduced really with just the intention of making YouTube and Facebook pay more for content from
the big media companies. The collateral damage to individuals and small businesses was clearly of no concern to the well-lobbied MEPs.
Both articles will introduce profound new levels of censorship for all users of the internet, and will also mean that there will be reduced opportunities for people to get their contributions published or noticed on the internet. This is simply
because the large internet companies are commercial organisations and will always make decisions with costs and profitability in mind. They are not state censors with a budget to spend on nuanced decision making. So the net outcome will be to
block vast swathes of content being uploaded just in case it may contain copyrighted material.
An example to demonstrate the point is the US censorship law, FOSTA. It requires content hosts to block content facilitating sex trafficking. Internet companies generally decided that it was easier to block all adult content rather than to try
and distinguish sex trafficking from non-trafficking sex related content. So sections of websites for dating and small ads, personal services etc were shut down overnight.
The EU has, however, introduced a few amendments to the original law to slightly lessen the impact on individuals and small-scale content creators.
Article 13 will now only apply to platforms where the main purpose ...is to store and give access to the public or to stream significant amounts of copyright protected content uploaded / made available by its users and
that optimise content and promotes for profit making purposes.
When defining best practices for Article 13, special account must now be taken of fundamental rights, the use of exceptions and limitations. Special focus should also be given to ensuring that the burden on SMEs remain
appropriate and that automated blocking of content is avoided (effectively an exception for micro/small businesses). Article 11 shall not extend to mere hyperlinks, which are accompanied by individual words (so it seems links are safe, but
quoted snippets of text must be very short), and the protection shall also not extend to factual information which is reported in journalistic articles from a press publication, and will therefore not prevent anyone from reporting such factual information.
Article 11 shall not prevent legitimate private and non-commercial use of press publications by individual users.
Article 11 rights shall expire 5 years after the publication of the press publication. This term shall be calculated from the first day of January of the year following the date of publication. The right referred to in
paragraph 1 shall not apply with retroactive effect.
Individual member states will now have to decide how Article 11 is implemented, which could create some confusion across borders.
At the same time, the EU rejected the other modest proposals to help out individuals and small creators:
No freedom of panorama. When we take photos or videos in public spaces, we're apt to incidentally capture copyrighted works: from stock art in ads on the sides of buses to t-shirts worn by protestors, to building facades claimed by architects
as their copyright. The EU rejected a proposal that would make it legal Europe-wide to photograph street scenes without worrying about infringing the copyright of objects in the background.
No user-generated content exemption, which would have made EU states carve out an exception to copyright for using excerpts from works for criticism, review, illustration, caricature, parody or pastiche.
A final round of negotiation with the EU Council and European Commission is now due to take place before member states make a decision early next year. But this is historically more of a rubber stamping process and few, if any, significant
changes are expected.
However, anybody who mistakenly thinks that Brexit will stop this from impacting the UK should be cautious. Regardless of what the EU approves, the UK might still have to implement it, and in any case the current UK Government supports many of
the controversial new measures.
Despite waves of calls and emails from European Internet users, the European Parliament today voted to accept the principle of a universal pre-emptive copyright filter for content-sharing sites, as well as the idea that news publishers should
have the right to sue others for quoting news items online -- or even using their titles as links to articles. Out of all of the potential amendments offered that would fix or ameliorate the damage caused by these proposals, they voted for the worst
on offer.
There are still opportunities, at the EU level, at the national level, and ultimately in Europe's courts, to limit the damage. But make no mistake, this is a serious setback for the Internet and digital rights in Europe.
It also comes at a trepidatious moment for pro-Internet voices in the heart of the EU. On the same day as the vote on these articles, another branch of the European Union's government, the Commission, announced plans to introduce a new regulation
on preventing the dissemination of terrorist content online . Doubling down on speedy unchecked censorship, the proposals will create a new removal order, which will oblige hosting service providers to remove content within one hour of being
ordered to do so. Echoing the language of the copyright directive, the Terrorist Regulation aims at ensuring smooth functioning of the digital single market in an open and democratic society, by preventing the misuse of hosting services for
terrorist purposes; it encourages the use of proactive measures, including the use of automated tools.
Not content with handing copyright law enforcement to algorithms and tech companies, the EU now wants to expand that to defining the limits of political speech too.
And as bad as all this sounds, it could get even worse. Elections are coming up in the European Parliament next May. Many of the key parliamentarians who have worked on digital rights in Brussels will not be standing. Marietje Schaake, author of
some of the better amendments for the directive, announced this week that she would not be running again. Julia Reda, the German Pirate Party representative, is moving on; Jan Philipp Albrecht, the MEP behind the GDPR, has already left Parliament
to take up a position in domestic German politics. The European Parliament's reserves of digital rights expertise, never that full to begin with, are emptying.
The best that can be said about the Copyright in the Digital Single Market Directive, as it stands, is that it is so ridiculously extreme that it looks set to shock a new generation of Internet activists into action -- just as the DMCA, SOPA/PIPA
and ACTA did before it.
If you've ever considered stepping up to play a bigger role in European politics or activism, whether at the national level, or in Brussels, now would be the time.
It's not enough to hope that these laws will lose momentum or fall apart from their own internal incoherence, or that those who don't understand the Internet will refrain from breaking it. Keep reading and supporting EFF, and join Europe's
powerful partnership of digital rights groups, from Brussels-based EDRi to your local national digital rights organization. Speak up for your digital business, open source project, for your hobby or fandom, and as a contributor to the global Internet.
This was a bad day for the Internet and for the European Union: but we can make sure there are better days to come.
Powell's bill purports to "tackle online hate, fake news and radicalisation" by making social media companies liable for what is published in large, closed online forums -- and is the latest in a series of poorly drafted attempts by
parliamentarians to address communications online.
If only Powell's proposal were the worst piece of legislation parliament will consider this autumn. Yesterday, MPs debated the
Counter-Terrorism and Border Security Bill, which would make it a crime to view information online that is "likely to be useful" to a terrorist. No terrorist intent would be required -- but you would risk up to 15 years in prison
if found guilty. This would make the work of journalists and academics very difficult or impossible.
Attempts to tackle online content are coming from all corners with little coordination -- although a factor common to all these proposals is that they utterly fail to safeguard freedom of expression.
Over the summer, the Commons Select Committee on Culture, Media and Sport issued a
preliminary report on tackling fake news and the government launched a
consultation on a possible new law to prevent "intimidation" of those standing for elections.
In addition, the government is expected to publish later this year a
white paper on internet safety aimed "to make sure the UK is the safest place in the world to be online." The Law Commission, already tasked with publishing
a report on offensive online communications , was last week asked to review whether misogyny should be considered a hate crime.
Jodie Ginsberg, CEO of Index, said:
"We're having to play whack-a-mole at the moment to prevent poorly drawn laws inadvertently stifling freedom of expression, especially online. The scattergun approach is no way to deal with concerns about online communications. Instead of
paying lip service to freedom of expression as a British value, it needs to be front and centre when developing policies".
"We already have laws to deal with harassment, incitement to violence, and even incitement to hatred. International experience shows us that even well-intentioned laws meant to tackle hateful views online often end up hurting the minority
groups they are meant to protect, stifle public debate, and limit the public's ability to hold the powerful to account."
The forthcoming ePrivacy Regulation is expected to be an even bigger test for businesses than GDPR. It's a regulation that will create a likely deficit in the customer information businesses collect even post-GDPR.
Current cookie banner notifications, where websites inform users of cookie collection, will make way for cookie request pop-ups that deny cookie collection until a user has opted in or out of different types of cookie collection. Such a pop-up is
expected to cause a drop in web traffic as high as 40 per cent. The good news is that it will only appear should the user not have already set their cookie preferences at browser level.
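The opt-in logic described above can be sketched in a few lines. This is purely illustrative (the regulation prescribes no API, and the function names and category scheme here are invented for the example): a site may only set a given class of cookie after an explicit opt-in, and must show a consent pop-up whenever no browser-level preference exists.

```python
from typing import Optional

def needs_consent_popup(browser_prefs: Optional[dict], category: str) -> bool:
    """True if the user must be asked before this cookie category is used.

    browser_prefs maps cookie categories (e.g. 'analytics', 'advertising')
    to True (opted in) or False (opted out). None means the user has set
    no browser-level preferences at all, so the site must ask.
    """
    return browser_prefs is None or category not in browser_prefs

def may_set_cookie(browser_prefs: Optional[dict], category: str) -> bool:
    """Cookies may only be set after an explicit opt-in -- never by default."""
    return bool(browser_prefs) and browser_prefs.get(category) is True

# A user with browser-level preferences never sees the pop-up:
prefs = {"analytics": True, "advertising": False}
print(needs_consent_popup(prefs, "advertising"))  # False: already answered
print(may_set_cookie(prefs, "advertising"))       # False: opted out
```

The key asymmetry is that silence counts as refusal: absent an explicit opt-in, the data simply cannot be collected, which is what drives the predicted traffic and data shortfall.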
The outcome for businesses whose marketing and advertising lies predominantly online is the inevitable reduction in their ability to track, re-target and optimise experiences for their visitors.
For any business with a website and dependent on cookies, the new regulations put them at severe risk of losing this vital source of consumer data . As a result, businesses must find a practical, effective and legal alternative to alleviate the
burden on the shoulders of all teams involved and to offset any drastic shortfall in this crucial data.
Putting the power in the hands of consumers when it comes to setting browser-level cookie permissions will limit a business's ability to extensively track the actions users take on company websites and progress targeted cookie-based advertising.
Millions of internet users will have the option to withdraw their dataset from the view of businesses, one of the biggest threats ePrivacy poses.
MEPs approve copyright law requiring Google and Facebook to use censorship machines to block user uploads that may contain snippets of copyright material, including headlines, article text, pictures and video
The European Parliament has approved a disgraceful copyright law that threatens to destroy the internet as we know it.
The rule hands more power to news and record companies against Internet giants like Google and Facebook. But it also allows companies to make sweeping blocks of user-generated content, such as internet memes or reaction GIFs that use copyrighted
material. The tough approach could spell the end for internet memes, which typically lay text over copyrighted photos or video from television programmes, films, music videos and more.
MEPs voted 438 in favour of the measures, 226 against, with 39 abstentions. The vote introduced Articles 11 and 13 to the directive, dubbed the link tax and censorship machines.
Article 13 puts the onus of policing for copyright infringement on the websites themselves. This forces web giants like YouTube and Facebook to scan uploaded content to stop the unlicensed sharing of copyrighted material. If the internet
companies find that such scanning does not work well, or makes the service unprofitable, the companies could pull out of allowing users to post at all on topics where the use of copyright material is commonplace.
The second amendment to the directive, Article 11, is intended to give publishers and newspapers a way to make money when companies like Google link to their stories. Search engines and online platforms like Twitter and Facebook will have to pay a
license to link to news publishers when quoting portions of text from these outlets.
Following Wednesday's vote, EU lawmakers will now take the legislation to talks with the European Commission and the 28 EU countries.
Online game distributor Steam has approved its first uncensored adult game, Negligee: Love Stories.
Steam had announced its change of policy in June of this year, ironically after a bit of backlash when Steam proposed to step up the censorship of adult games. The previous policy required explicit content to be censored at sale but allowed
subsequent patches to restore the cuts.
On Friday, Dharker Studios is slated to start selling an uncensored version of its game Negligee: Love Stories, which features nudity and sex scenes.
Other developers have also submitted uncensored games for approval on Steam.
An indie developer called Kagura Games, meanwhile, said some developers have already put their uncensored games up for review, so we'll be following that closely, and consult with Steam to decide what the best course of action is for releasing
our future titles on Steam.
Australia's Herald Sun newspaper has republished its cartoon of tennis star Serena Williams on a defiant front page in which it attacked its critics and foreshadowed a future where satire is outlawed. The front page reads:
WELCOME TO PC WORLD
If the self-appointed censors of Mark Knight get their way on his Serena Williams cartoon, our new politically correct life will be very dull indeed.
The page features a collection of Mark Knight cartoons, including the depiction of Williams spitting a dummy and stamping on her racquet.
The cartoon, first published on Monday, was Knight's take on the tennis star's bad behaviour in insulting the umpire by calling him a thief.
The cartoon caused a reaction in the PC world, somehow suggesting that it is not permissible to mock the bad behaviour of a black woman.
Knight has rejected such suggestions saying:
I saw the world number one tennis player have a huge hissy fit and spit the dummy. That's what the cartoon was about, her poor behaviour on the court.
I drew her as an African-American woman. She's powerfully built. She wears these outrageous costumes when she plays tennis. She's interesting to draw. I drew her as she is, as an African-American woman.
Niche porn producer Pandora Blake, Misha Mayfair, campaigning lawyer Myles Jackman and Backlash are campaigning to back a legal challenge to the upcoming internet porn censorship regime in the UK. They write on a new crowdfunding page:
We are mounting a legal challenge.
Do you lock your door when you watch porn -- or do you publish a notice in the paper? The new UK age verification law means you may soon have to upload a proof of age to visit adult sites. This would connect your legal identity to a database of
all your adult browsing. Join us to prevent the damage to your privacy.
The UK Government is bringing in age verification for adults who want to view adult content online, yet has failed to provide privacy and security obligations to ensure your private information is securely protected.
The law does not currently limit age verification software to holding only the data needed to verify your age. Hence, other identifying data held about you could include anything from your passport information to your credit card
details, up to your full search history. This is highly sensitive data.
What are the Privacy Risks?
Data Misuse - Since age verification providers are legally permitted to collect this information, what is to stop them from increasing revenue through targeting advertising at you, or even selling your personal data?
Data Breaches - No database is perfectly secure, despite good intentions. The leaking or hacking of your sensitive personal information could be truly devastating. The Ashley Madison hack led to suicides. Don't let the Government allow your
private sexual preferences to be leaked into the public domain.
What are we asking money for?
We're asking you to help us crowdfund legal fees so we can challenge the new age verification rules under the Digital Economy Act 2017. We're asking for £10,000 to cover the cost of initial legal advice, since it's a complicated area of law.
Ultimately, we'd like to raise even more money, so we can send a message to Government that your personal privacy is of paramount importance.
Lucy Powell writes in the Guardian, (presumably intended as an open comment):
Closed forums on Facebook allow hateful views to spread unchallenged among terrifyingly large groups. My bill would change that
You may wonder what could bring Nicky Morgan, Anna Soubry, David Lammy, Jacob Rees-Mogg and other senior MPs from across parliament together at the moment. Yet they are all sponsoring a bill I'm proposing that will tackle online hate, fake news
and radicalisation. It's because, day-in day-out, whatever side of an argument we are on, we see the pervasive impact of abuse and hate online -- and increasingly offline, too.
Social media has given extremists a new tool with which to recruit and radicalise. It is something we are frighteningly unequipped to deal with.
Worryingly, it is on Facebook, which most of us in Britain use, where people are being exposed to extremist material. Instead of small meetings in pubs or obscure websites in the darkest corners of the internet, our favourite social media site is
increasingly where hate is cultivated.
Online echo chambers are normalising and allowing extremist views to go viral unchallenged. These views are spread as the cheap thrill of racking up Facebook likes drives behaviour and reinforces a binary worldview. Some people are being groomed
unwittingly as unacceptable language is treated as the norm. Others have a more sinister motive.
While in the real world, alternative views would be challenged by voices of decency in the classroom, staffroom, or around the dining-room table, there are no societal norms in the dark crevices of the online world. The impact of these bubbles of
hate can be seen, in extreme cases, in terror attacks from radicalised individuals. But we can also see it in the rise of the far right, with Tommy Robinson supporters rampaging through the streets this summer, or in increasing Islamophobia and antisemitism.
Through Facebook groups (essentially forums), extremists can build large audiences. There are many examples of groups that feature anti-Muslim or antisemitic content daily, in an environment which, because critics are removed from the groups,
normalises these hateful views. If you see racist images, videos and articles in your feed but not the opposing argument, you might begin to think those views are acceptable and even correct. If you already agree with them, you might be motivated to act.
This is the thinking behind Russia's interference in the 2016 US presidential election. The Russian Internet Research Agency set up Facebook groups, amassed hundreds of thousands of members, and used them to spread hate and fake news, organise
rallies, and attack Hillary Clinton. Most of its output was designed to stoke the country's racial tensions.
It's not only racism that is finding a home on Facebook. Marines United was a secret group of 30,000 current and former servicemen in the British armed forces and US Marines. Members posted nude photos of their fellow servicewomen, taken in
secret. A whistleblower described the group as featuring "revenge porn, creepy stalker-like photos taken of girls in public, talk about rape". It is terrifying that the group grew so large before anyone spoke out, and that Facebook did nothing until someone
informed the media.
Because these closed forums can be given a secret setting, they can be hidden away from everyone but their members. This locks out the police, intelligence services and charities that could otherwise engage with the groups and correct
disinformation. This could be particularly crucial with groups where parents are told not to vaccinate their children against diseases.
Despite having the resources to solve the problem, Facebook lacks the will. In fact, at times it actively obstructs those who wish to tackle hate and disinformation. Of course, it is not just Facebook, and the proliferation of online platforms
and forums means that the law has been much too slow to catch up with our digital world.
We should educate people to be more resilient and better able to spot fake news and recognise hate, but we must also ensure there are much stronger protections to spread decency and police our online communities. The responsibility to regulate
these social media platforms falls on the government. It is past time to act.
That's why I am introducing a bill in parliament which will do just that. By establishing legal accountability for what's published in large online forums, I believe we can force those who run these echo chambers to stamp out the evil that is
currently so prominent. Social media can be a fantastic way of bringing people together -- which is precisely why we need to prevent it being hijacked by those who instead wish to divide.
On Wednesday, the EU will vote on whether to accept two controversial proposals in the new Copyright Directive; one of these clauses, Article 13, has the potential to allow anyone, anywhere in the world, to effect mass, rolling waves of
censorship across the Internet.
The way things stand today, companies that let their users communicate in public (by posting videos, text, images, etc) are required to respond to claims of copyright infringement by removing their users' posts, unless the user steps up to
contest the notice. Sites can choose not to remove work if they think the copyright claims are bogus, but if they do, they can be sued for copyright infringement (in the United States at least), alongside their users, with huge penalties at
stake. Given that risk, the companies usually do not take a stand to defend user speech, and many users are too afraid to stand up for their own speech because they face bankruptcy if a court disagrees with their assessment of the law.
This system, embodied in the United States' Digital Millennium Copyright Act (DMCA) and exported to many countries around the world, is called notice and takedown, and it offers rightsholders the ability to unilaterally censor the Internet on
their say-so, without any evidence or judicial oversight. This is an extraordinary privilege without precedent in the world of physical copyright infringement (you can't walk into a cinema, point at the screen, declare I own that, and get the
movie shut down!).
But rightsholders have never been happy with notice and takedown. Because works that are taken down can be reposted, sometimes by bots that automate the process, rightsholders have called notice and takedown a game of whac-a-mole, where they
have to keep circling back to remove the same infringing files over and over.
Rightsholders have long demanded a notice and staydown regime. In this system, rightsholders send online platforms digital copies of their whole catalogs; the platforms then build copyright filters that compare everything a user wants to post to
this database of known copyrights, and block anything that seems to be a match.
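As a toy illustration of the notice and staydown mechanism just described (real systems such as Content ID use perceptual fingerprinting of audio and video rather than the exact hashing shown here, and every name in this sketch is invented):

```python
import hashlib

class UploadFilter:
    """Toy 'notice and staydown' filter: a blacklist of content fingerprints."""

    def __init__(self):
        self.blacklist = set()  # fingerprints of claimed copyrighted works

    def register_claims(self, works):
        """Bulk-load a rightsholder's catalogue -- no human review involved."""
        for work in works:
            self.blacklist.add(hashlib.sha256(work).hexdigest())

    def allow_upload(self, content: bytes) -> bool:
        """Block anything whose fingerprint matches a registered claim."""
        return hashlib.sha256(content).hexdigest() not in self.blacklist

f = UploadFilter()
f.register_claims([b"complete works of Shakespeare"])
print(f.allow_upload(b"my holiday video"))               # True: no match
print(f.allow_upload(b"complete works of Shakespeare"))  # False: blocked
```

Even this toy version makes the structural problem visible: whoever is allowed to call `register_claims` decides, with no evidence and no oversight, what everyone else may post.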
Tech companies have voluntarily built versions of this system. The most well-known of the bunch is YouTube's Content ID system, which cost $60,000,000 to build, and which works by filtering the audio tracks of videos to categorise them.
Rightsholders are adamant that Content ID doesn't work nearly well enough, missing all kinds of copyrighted works, while YouTube users report rampant overmatching, in which legitimate works are censored by spurious copyright claims: NASA gets
blocked from posting its own Mars rover footage; classical pianists are blocked from posting their own performances; birdsong results in videos being censored; entire academic conferences lose their presenters' audio because the hall they
rented played music at the lunch-break -- you can't even post silence without triggering copyright enforcement. Besides that, there is no bot that can judge whether something that does use copyrighted material is fair dealing. Fair dealing is
protected under the law, but not under Content ID.
If Content ID is a prototype, it needs to go back to the drawing board. It overblocks (catching all kinds of legitimate media) and underblocks (missing stuff that infuriates the big entertainment companies). It is expensive, balky and error-prone.
It's coming soon to an Internet near you.
On Wednesday, the EU will vote on whether the next Copyright Directive will include Article 13, which makes Content-ID-style filters mandatory for the whole Internet, and not just for the soundtracks of videos--also for the video portions, for
audio, for still images, for code, even for text. Under Article 13, the services we use to communicate with one another will have to accept copyright claims from all comers, and block anything that they believe to be a match.
This measure will censor the Internet and it won't even help artists to get paid.
Let's consider how a filter like this would have to work. First of all, it would have to accept bulk submissions. Disney and Universal (not to mention scientific publishers, stock art companies, real-estate brokers, etc) will not pay an army of
data-entry clerks to manually enter their vast catalogues of copyrighted works, one at a time, into dozens or hundreds of platforms' filters. For these filters to have a hope of achieving their stated purpose, they will have to accept thousands
of entries at once--far more than any human moderator could review.
But even if the platforms could hire, say, 20 percent of the European workforce to do nothing but review copyright database entries, this would not be acceptable to rightsholders. Not because those workers could not be trained to accurately
determine what was, and was not, a legitimate claim--but because the time it would take for them to review these claims would be absolutely unacceptable to rightsholders.
It's an article of faith among rightsholders that the majority of sales take place immediately after a work is released, and that therefore infringing copies are most damaging when they're available at the same time as a new work is released
(they're even more worried about pre-release leaks).
If Disney has a new blockbuster that's leaked onto the Internet the day it hits cinemas, they want to pull those copies down in seconds, not after precious days have trickled past while a human moderator plods through a queue of copyright claims
from all over the Internet.
Combine these three facts:
Anyone can add anything to the blacklist of copyrighted works that can't be published by Internet users;
The blacklists have to accept thousands of works at once; and
New entries to the blacklist have to go into effect instantaneously.
It doesn't take a technical expert to see how ripe for abuse this system is. Bad actors could use armies of bots to block millions of works at a go (for example, jerks could use bots to bombard the databases with claims of ownership over the
collected works of Shakespeare, adding them to the blacklists faster than they could possibly be removed by human moderators, making it impossible to quote Shakespeare online).
But more disturbing is targeted censorship: politicians have long abused takedown to censor embarrassing political revelations or take critics offline , as have violent cops and homophobic trolls .
These entities couldn't use Content ID to censor the whole Internet: instead, they had to manually file takedowns and chase their critics around the Internet. Content ID only works for YouTube -- plus it only allows trusted rightsholders to add
works wholesale to the notice and staydown database, so petty censors are stuck committing retail copyfraud.
But under Article 13, everyone gets to play wholesale censor, and every service has to obey their demands: just sign up for a rightsholder account on a platform and start telling it what may and may not be posted. Article 13 has no teeth for
stopping this from happening: and in any event, if you get kicked off the service, you can just pop up under a new identity and start again.
While some rightsholder lobbyists have admitted that there is potential for abuse here, they insist that it will all be worth it, because it will get artists paid. Unfortunately, this is also not true.
For all that these filters are prone to overblocking and ripe for abuse, they are actually not very effective against someone who actually wants to defeat them.
Let's look at the most difficult-to-crack content filters in the world: the censoring filters used by the Chinese government to suppress politically sensitive materials. These filters have a much easier job than the ones European companies will
have to implement: they only filter a comparatively small number of items, and they are built with effectively unlimited budgets, subsidized by the government of one of the world's largest economies, which is also home to tens of millions of
skilled technical people, and anyone seeking to subvert these censorship systems is subject to relentless surveillance and risks long imprisonment and even torture for their trouble.
Those Chinese censorship systems are really, really easy to break, as researchers from the University of Toronto's Citizen Lab demonstrated in a detailed research report released a few weeks ago.
People who want to break the filters and infringe copyright will face little difficulty. The many people who want to stay on the right side of copyright law -- but find themselves inadvertently on the wrong side of the filters -- will find
themselves in insurmountable trouble, begging for appeal from a tech giant whose help systems all dead-end in brick walls. And any attempt to tighten the filters to catch these infringers will, of course, make it more likely that they will block even more legitimate content.
A system that allows both censors and infringers to run rampant while stopping legitimate discourse is bad enough, but it gets worse for artists.
Content ID cost $60,000,000 and does a tiny fraction of what the Article 13 filters must do. When operating an online platform in the EU requires a few hundred million in copyright filtering technology, the competitive landscape gets a lot more
bare. Certainly, none of the smaller EU competitors to the US tech giants can afford this.
On the other hand, US tech giants can afford this (indeed, they have pioneered copyright filters as a solution, even as groups like EFF protested it), and while their first preference is definitely to escape regulation altogether, paying a few
hundred million to freeze out all possible competition is a pretty good deal for them.
The big entertainment companies may be happy with a deal that sells a perpetual Internet Domination License to US tech giants for a bit of money thrown their way, but that will not translate into gains for artists. The fewer competitors there are
for the publication, promotion, distribution and sale of creative works, the smaller the share will be that goes to creators.
We can do better: if the problem is monopolistic platforms (and indeed, monopolistic distributors ), tackling that directly as a matter of EU competition law would stop those companies from abusing their market power to squeeze creators.
Copyright filters are the opposite of antitrust, though: they will make the biggest companies much bigger, to the great detriment of all the little guys in the entertainment industry and in the market for online platforms for speech.
Many thanks to my local MEP Anthea McIntyre, who responded to my email about the rise of the censorship machines:
I appreciate your concerns regarding the new Copyright reform proposals. However, the objective of Article 13 is to make sure authors, such as musicians, are appropriately paid for their work, and to ensure that platforms fairly share revenues
which they derive from creative works on their sites with creators. I will be voting for new text which seeks to exclude small and microenterprise platforms from the scope and to introduce greater proportionality for SMEs.
In the text under discussion, if one of the main purposes of a platform is to share copyright works, if they optimise these works and also derive profit from them, the platform would need to conclude a fair license with the rightholders, if
rightholders request this. If not, platforms will have to check for and remove specific copyright content once this is supplied by rightholders. This could include pirated films which are on platforms at the same time as they are shown at the
cinema. However, if a platform's main purpose is not to share protected works, and it does not optimise copyright works or make a profit from them, it would not be required to conclude a license. There are exemptions for online encyclopaedias
(Wikipedia), sites where rightholders have approved the uploading of their works and software platforms, while online marketplaces (including eBay) are also out of the scope.
Closing this value gap is an essential part of the Copyright Directive, which Secretary of
State Matthew Hancock supports addressing. My Conservative colleagues and I support the general policy justification behind it, which is to make sure that platforms are responsible for their sites and that authors are fairly rewarded and
incentivised to create work. Content recognition will help to make sure creators, such as song writers, can be better identified and paid fairly for their work. Nevertheless, this should not be done at the expense of users' rights. We are
dedicated to striking the right balance between adequately rewarding rightholders and safeguarding users' rights. There are therefore important safeguards to protect users' rights, respect data protection, and to make sure that only proportionate
measures are taken.
I will therefore be supporting the mandate to enter into trilogue negotiations tomorrow so that the Directive can become law.
[Surely one understands that musicians are getting a bit of a rough deal from the internet giants, and one can see where McIntyre is coming from. However, it is clear that little thought has been given to how the rules will
pan out in the real profit-driven world, where the key stakeholders are doing their best for their shareholders, not the European peoples. It is surely driving the west into poverty when laws are so freely passed just to do a few nice things,
whilst totally ignoring the cost of destroying people's businesses and incomes].
Offsite Comment: ...And from the point of view of the internet giants
South Yorkshire Police first tweeted a straightforward poster about reporting hate crime:
Hate can be any incident or crime, motivated by prejudice or hostility (or perceived to be so) against a person's race, religion, sexual orientation, transgender identity or disability. Hate hurts and nobody should have to tolerate it. Report it
and put a stop to it #HateHurts
A couple of hours later the police outrageously tweeted again suggesting that people should also report non crimes like online insults:
In addition to reporting hate crime, please report non-crime hate incidents, which can include things like offensive or insulting comments, online, in person or in writing. Hate will not be tolerated in South Yorkshire. Report it and put a stop
to it #HateHurtsSY
I wonder if they then explain to burglary victims that they are too busy to investigate such crimes because they are busy investigating non-crime internet insults.
ARTICLE 19 is leading a coalition of international human rights organisations, who will tell the European Court of Justice (CJEU) that the de-listing of websites under the right to be forgotten should be limited in order to protect global
freedom of expression. The hearing will take place on September 11 with a judgment expected in early 2019.
The CJEU hearing in Google vs CNIL is taking place after France's highest administrative court asked for clarification in relation to the 2014 ruling in Google Spain. This judgment allows European citizens to ask search engines like Google to
remove links to inadequate, irrelevant or ... excessive content -- commonly known as the right to be forgotten (RTBF). While the content itself remains online, it cannot be found through online searches of the individual's name.
The CJEU has been asked to clarify whether a court or data regulator should require a search engine to de-list websites only in the country where it has jurisdiction or across the entire world.
France's data regulator, the Commission Nationale de l'Informatique et des Libertés (CNIL), has argued that if it upholds a complaint by a French citizen, search engines such as Google should be compelled to remove links not only from google.fr
but from all Google domains.
ARTICLE 19 and the coalition of intervening organisations have warned that forcing search engines to de-list information on a global basis would be disproportionate. Executive Director of ARTICLE 19, Thomas Hughes said:
This case could see the right to be forgotten threatening global free speech. European data regulators should not be allowed to decide what Internet users around the world find when they use a search engine. The CJEU must limit the scope of the
right to be forgotten in order to protect the right of Internet users around the world to access information online.
ARTICLE 19 argues that rights to privacy and rights to freedom of expression must be balanced when deciding whether websites should be de-listed. Hughes added:
If European regulators can tell Google to remove all references to a website, then it will be only a matter of time before countries like China, Russia and Saudi Arabia start to do the same. The CJEU should protect freedom of expression not set
a global precedent for censorship.
The bill threatens investigative journalism and academic research by making it a crime to view material online that could be helpful to a terrorist. This would deter investigative journalists from doing their work and would make academic
research into terrorism difficult or impossible.
New border powers in the bill could put journalists' confidential sources at risk. The bill's border security measures would mean that journalists could be forced to answer questions or hand over material that would reveal the identity of a
confidential source. These new powers could be exercised without any grounds for suspicion.
The bill also endangers freedom of expression in other ways. It would make it an offence to express an opinion in support of a proscribed (terrorist) organisation in a way that is reckless as to whether this could encourage another person to
support the organisation. This would apply even if the reckless person was making the statement to one other person in a private home.
The bill would criminalise the publication of a picture or video clip of an item of clothing or for example a flag in a way that aroused suspicion that the person is a member or supporter of a terrorist organisation. This would cover, for
example, someone taking a picture of themselves at home and posting it online.
Joy Hyvarinen, head of advocacy, said: The fundamentally flawed Counter-Terrorism and Border Security Bill should be sent back to the drawing board. It is not fit for purpose and it would limit freedom of expression, journalism and academic
research in a way that should be completely unacceptable in a democratic country.
Perennial Hindu whinger Rajan Zed has recommended another beer referencing the religious character of Kali.
Chilean craft brewery Cerveza Bundor's Kali IPA has caught the attention of Zed for its reimagined image of the Hindu goddess Kali.
Zed said in a statement that inappropriate usage of Hindu deities or concepts or symbols for commercial or other agenda was not okay as it hurt the devotees. Zed said that goddess Kali was highly revered in Hinduism and was meant to be
worshipped in temples or home shrines and not to be used in selling beer for mercantile greed.
Kali is an American IPA (India Pale Ale) and is described as having a tropical fruit character.
Jungle Love is a 2012 Philippines drama by Sherad Anthony Sanchez.
Starring Gloria Morales, Mei Bastes and Martin Riffer.
A jungle in an undisclosed Philippine location hosts a middle-aged woman who runs off with a baby, two juveniles lost in sexual games, military cadets leisurely wandering about and a guide with an obscure presence. All consumed with the game of
their own lives until the jungle comes to play.
The film won an Honorable Mention (Feature Film), at the Porn Film Festival Berlin 2013.
Shadows of Fiendish Ancestress and Occasionally Parajanov on Durian Cialis is a 2017 Singapore / Japan / Philippines romance by Tze Chuan Chew.
Starring Raissa Ai, Karla MC Bautista and Paolo Dumlao.
With reference to native historical texts and the mythological and religious depiction of the Holy Whore, Chew constructs a mythology of a hermaphrodite who comes to town to impart a wisdom that proves to be too carnal and untimely. Years in the
making and strung together with documentary-like footage of orgiastic happenings, punctuated with moments of refrain into randomness, the film soon escalates into a schizophrenic psychedelia of multicultural and polymorphous sexuality.
Two films that were set to be showcased at a film festival in Grays have been banned by Thurrock Council.
The Thurrock International Celebration of Film runs from September 6th to 9th at the Thameside Theatre in Grays
The organisers were stunned to hear that the council has refused them permission to screen two of the planned films. One of the festival organisers, Hi Ching, explained what has happened. He said:
Thurrock Council has banned the films Jungle Love and Shadows Of Fiendish Ancestress And Occasionally Parajanov On Durian Cialis (pictured right) from being screened at the TIC Film Festival at Thameside Theatre because an initial assessment
suggests both films would be rated R18 and therefore can only be shown in licensed sex premises.
In both films, sexuality does indeed play a central role, but the BBFC explanatory notes state that the R18 rating is normally intended for works whose primary purpose is sexual arousal or stimulation of the viewer. These two films do anything
but -- and moreover, a rating of suitable only for 18 years and over was already in place in order to make sure that only adults would be able to see these films.
Both films have been screened at other film festivals around the world. One reviewer summed up that Jungle Love accomplishes the nearly impossible task of turning what could be a lewd and perverted showcase into a mirror of our innate desire to
venture into the unknown, to abandon the clutches of good taste, and to get lost in the limitless jungle where men are but beasts among other beasts.
By banning these two films, Thurrock Council have the dubious honour of joining these two countries and doing exactly what they have done: performing censorship and stifling discussion. Both films require serious discussion about opportunities
and limits of filmic representation of sexuality -- but with its decision, Thurrock Council tried to make such a discussion impossible.
Councillor Deborah Huelin, Portfolio Holder for Communities, said:
Thurrock Council is supporting the film festival taking place at the Thameside by Thurrock International Celebration of Culture (TICC) by providing the Thameside Theatre as a venue.
The film programme includes a number of short and full length films that aim to celebrate diversity. Most of the films have not been given a rating by the BBFC (British Board of Film Classification) and in such cases responsibility for issuing
ratings for films to be shown in a local area lies with the local authority.
Based on an initial review by the council, it was identified that two of the films could likely be rated R18 if a full assessment were carried out under the guidelines issued by the BBFC. These types of films can only be shown in specially
licensed sex cinemas or supplied by licensed sex shops. The entertainment license for the Thameside Theatre does not allow them to show R18 films which means that these two films cannot form part of the festival.
Shadows Of Fiendish Ancestress And Occasionally Parajanov On Durian Cialis had previously been banned in Singapore in 2017. The picture had been scheduled to have its world premiere at the Singapore International Film Festival, part of
the Singapore Media Festival, but was denied a public release certificate by the Films Appeal Committee on the grounds that it could hurt Muslim religious feelings.
Pornhub's age verification system AgeID has announced an exclusive partnership with OCL and its Portes solution, an anonymous face-to-face age verification scheme where retailers OK the age of customers who buy a card enabling porn
access. The similar AVSecure scheme allows over-25s to buy the access card without showing any ID, but may require unrecorded ID from those appearing to be under 25.
According to the company, the PortesCard is available to purchase from selected high street retailers and any of the U.K.'s 29,000 PayPoint outlets as a voucher. Each PortesCard will cost £4.99 for use on a single device, or £8.99 for use across
multiple devices. This compares with £10 for the AVSecure card.
Once a card or voucher is purchased, its unique validation code must be activated via the Portes app within 24 hours, or it expires. Once the user has been verified, they will automatically be granted access to all adult sites using AgeID. Maybe
this 24 hour limit is something to do with an attempt to restrict secondary sales of porn access codes by ensuring that they get tied to devices almost immediately. It all sounds a little hasslesome.
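The flow described above (buy a code, activate it within 24 hours, get bound to one device) can be sketched in a few lines. This is purely illustrative; the class and field names are hypothetical and not part of any published Portes or AgeID API.

```python
from datetime import datetime, timedelta

# Hypothetical model of the 24-hour activation window described above:
# a purchased code must be activated within 24 hours or it expires,
# and activation ties the code to a single device almost immediately.
ACTIVATION_WINDOW = timedelta(hours=24)

class PortesCode:
    def __init__(self, purchased_at: datetime):
        self.purchased_at = purchased_at
        self.device = None  # unset until activation

    def activate(self, device_id: str, now: datetime) -> bool:
        """Bind the code to a device; fail if the window has lapsed."""
        if now - self.purchased_at > ACTIVATION_WINDOW:
            return False  # code expired unused
        self.device = device_id
        return True

# Activated 8 hours after purchase: succeeds and binds to the device.
code = PortesCode(purchased_at=datetime(2018, 9, 1, 12, 0))
assert code.activate("phone-1", datetime(2018, 9, 1, 20, 0)) is True

# Activated two days later: the code has expired.
late = PortesCode(purchased_at=datetime(2018, 9, 1, 12, 0))
assert late.activate("phone-2", datetime(2018, 9, 3, 12, 0)) is False
```

The immediate device binding is what would make resale of unused codes impractical, which may be the point of the short window.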
As an additional layer of protection, parents can quickly and simply block access on their children's devices to sites using Portes, so PortesCards cannot be associated with AgeID.
But note that an anonymously bought card is not quite a 100% safe solution. One has to consider whether, if the authorities get hold of a device, they can then see a complete history of all websites accessed using the app or access code. One
also has to consider whether someone can remotely correlate an 'anonymous' access code with all the tracking cookies holding one's id.
A review is to take place into whether misogynistic conduct should be treated as a hate crime, following Labour MP Stella Creasy's call to change the law.
The move was announced during a debate on proposed legislation to criminalise upskirting in England and Wales. On Wednesday, MPs approved the Voyeurism Bill, which would ban the taking of unsolicited pictures under someone's clothing, known as
upskirting, in England and Wales.
'Justice' Minister Lucy Frazer said the Voyeurism Bill was not the right vehicle for seeking such a change in the law but said she sympathised with Creasy's views. She said ministers would fund a review into the coverage and approach of hate crime legislation.
The Law Commission will now review how sex and gender characteristics are treated within existing hate crime laws and whether new offences are needed. This review will include how protected characteristics, including sex and gender
characteristics, should be considered by new or existing hate crime law.
Update: Governments should not be policing thought
The Law Commission will review how sex and gender characteristics are treated within existing hate crime laws and whether new offences are needed.
Index does not believe the UK needs new laws to protect women from abuse and violence.
The UK already has dozens of laws on its books that make criminal the kind of abusive actions that are disproportionately targeted at women: rape, harassment, stalking. Despite this, the most egregious crimes against women frequently go
unpunished. In the case of rape, conviction rates are woeful. A report published in 2017 found that only one in 14 rapes reported in England and Wales ended in a conviction.
Creating new laws that make misogyny a hate crime will do little to change this, as lawyers argued earlier this week . Nor are they likely to help change attitudes. In fact they can do the opposite.
Laws that criminalise speech are deeply problematic. In a free society, thoughts should not be criminal no matter how hateful they are. Yet laws that make hate criminal -- in a well-meaning but misplaced effort to protect minorities and
persecuted groups -- are on the rise.
We should all be worried about this. As the US delegation noted in a United Nations Human Rights Council meeting in 2015, hate speech laws are increasingly being abused by those in power to target political opponents or to persecute the very
minority groups such laws are meant to protect.
In addition, they do little to improve tolerance or treatment of such groups: Such laws, including blasphemy laws, tend to reinforce divisions rather than promote societal harmony, the US delegation said. The presence of these laws has little
discernible effect on reducing actual incidences of hate speech. In some cases such laws actually serve to foment violence against members of minority groups accused of expressing unpopular viewpoints.
As if to prove their point, Russia used the same meeting to praise hate speech laws and the need to police hate speech in Ukraine so as not to ignite nationalistic fires.
Tackling hate requires changes in society's attitude. Some of those changes need laws -- such as those we rightly already have to outlaw discrimination in the workplace. Some require major changes in our institutions to the structures and
practices that reinforce inequality. But prohibiting speech, or policing thought, is not the way to do this.
Offsite Comment: Stella Creasy's war on thoughtcrime
I am sure, Mr Speaker, that you will have seen the 2010 film The King's Speech , portraying George VI. It contained 11 uses of the F-word and was granted a classification of 12A. I recently saw the highly rated documentary A Northern
Soul by Hull film-maker Sean McAllister. Its main character uses the F-word 14 times and it is heard 19 times in total in the film. None of it was aggressive or gratuitous, and the film simply portrays the life of a working-class Hull man
and his work helping local children, but it has been given a 15 certificate nationally. May we therefore have a debate about whether there is a class bias in the way censors seek to protect younger teenagers from the reality and language that
many experience in their lives every day?
Andrea Leadsom Lord President of the Council and Leader of the House of Commons
The hon. Lady raises a genuinely interesting point, and I urge her to seek an Adjournment debate so she can discuss it with Ministers and then take it forward.
The government is amending its Counter-Terrorism and Border Security Bill with regards to criminalising accessing terrorism related content on the internet.
MPs, peers and the United Nations have already raised human rights concerns over pre-existing measures in the Counter-Terrorism and Border Security Bill, which proposed to make accessing propaganda online on three or more different occasions a criminal offence.
The Joint Human Rights Committee found the wording of the law vague and told the government it violated Article 10 of the European Convention on Human Rights (ECHR). The committee concluded in July:
This clause may capture academic and journalistic research as well as those with inquisitive or even foolish minds.
The viewing of material without any associated intentional or reckless harm is, in our view, an unjustified interference with the right to receive information...unless amended, this implementation of this clause would clearly risk breaching
Article 10 of the ECHR and unjustly criminalising the conduct of those with no links to terrorism.
The committee called for officials to narrow the new criminal offence so it requires terrorist intent and defines how people can legally view terrorist material.
The United Nations Special Rapporteur on the right to privacy also chipped in, accusing the British government of straying towards thought crime with the law.
In response, the government scrapped the three clicks rule entirely and broadened the concept of viewing to make the draft law read:
A person commits an offence if...the person views or otherwise accesses by means of the internet a document or record containing information of that kind.
It also added a clause saying a reasonable excuse includes:
Having no reason to believe that the document or record in question contained, or was likely to contain, information of a kind likely to be useful to a person committing or preparing an act of terrorism.
Big Brother Watch has collaborated with leading campaigners, investigative journalists, and lawyers to share stories from the frontline on surveillance and data collection in the UK.
If you believe that you have nothing to hide and nothing to fear, this report will make you think again.
From unionists to journalists, and even welfare recipients and school children, we found that surveillance is increasingly affecting the lives of innocent people in the UK, chilling citizens' rights to freedom of expression and privacy.
In exactly one week, the European Parliament will hold a crucial debate and vote on a proposal so terrible, it can only be called an extinction-level event for the Internet as we know it.
At issue is the text of the new EU Copyright Directive, which updates the 17-year-old copyright regulations for the 28 member-states of the EU. It makes a vast array of technical changes to EU copyright law, each of which has stakeholders rooting
for it, guaranteeing that whatever the final text says will become the law of the land across the EU.
The Directive was pretty uncontroversial, right up to the day last May when the EU started enforcing the General Data Protection Regulation (GDPR), a seismic event that eclipsed all other Internet news for weeks afterward. On that very day, a
German MEP called Axel Voss quietly changed the text of the Directive to reintroduce two long-discarded proposals -- "Article 11" and "Article 13" -- proposals that had been evaluated by the EU's own experts and dismissed as dangerous and unworkable.
Under Article 11 -- the "link tax" -- online services are banned from allowing links to news services on their platforms unless they get a license to make links to the news; the rule does not define "news service" or "link," leaving 28 member states to make up their own definitions and leaving it to everyone else to comply with 28 different rules.
Under Article 13 -- the "censorship machines" -- anyone who allows users to communicate in public by posting audio, video, stills, code, or anything that might be copyrighted must send those posts to a copyright enforcement
algorithm. The algorithm will compare each post to all the known copyrighted works (anyone can add anything to the algorithm's database) and censor it if it seems to be a match.
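The mechanism just described can be caricatured in a few lines. This is a deliberately naive sketch, not Content ID or any real filter (which use fuzzy audio and video fingerprinting rather than exact hashes), and every name in it is hypothetical. It illustrates the structural problem: anyone can add anything to the claims database, and a match means censorship.

```python
import hashlib

# Hypothetical claims database: fingerprints of "claimed" works.
# Crucially, anyone can add to it; there is no ownership check.
claims_db = set()

def register_claim(work: bytes) -> None:
    """Any claimant can register any content as 'theirs'."""
    claims_db.add(hashlib.sha256(work).hexdigest())

def filter_upload(upload: bytes) -> bool:
    """Return True if the upload is allowed, False if censored."""
    return hashlib.sha256(upload).hexdigest() not in claims_db

# A troll claims someone else's lawful recording...
register_claim(b"embarrassing but lawful recording")

# ...and the platform now silently blocks it for every user,
# while unclaimed content passes through.
assert filter_upload(b"embarrassing but lawful recording") is False
assert filter_upload(b"unrelated content") is True
```

Even this toy version shows why the scheme invites abuse: the filter cannot distinguish a legitimate rightsholder from a bad-faith claimant, because the claim itself is the only input it has.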
These extreme, unworkable proposals represent a grave danger to the Internet. The link tax means that only the largest, best-funded companies will be able to offer a public space where the news can be discussed and debated. The censorship
machines are a gift to every petty censor and troll (just claim copyright in an embarrassing recording and watch as it disappears from the Internet!), and will add hundreds of millions to the cost of operating an online platform, guaranteeing
that Big Tech's biggest winners will never face serious competition and will rule the Internet forever.
That's terrible news for Europeans, but it's also alarming for all the Internet's users, especially Americans.
The Internet's current winners -- Google, Facebook, Twitter, Apple, Amazon -- are overwhelmingly American, and they embody the American regulatory indifference to surveillance and privacy breaches.
But the Internet is global, and that means that different regions have the power to export their values to the rest of the world. The EU has been a steady source of pro-privacy, pro-competition, public-spirited Internet rules and regulations, and
European companies have a deserved reputation for being less prone to practicing "surveillance capitalism" and for being more thoughtful about the human impact of their services.
In the same way that California is a global net exporter of lifesaving emissions controls for vehicles, the EU has been a global net exporter of privacy rules, anti-monopoly penalties, and other desperately needed corrections for an Internet that
grows more monopolistic, surveillant, and abusive by the day.
Many of the cheerleaders for Articles 11 and 13 talk like these are a black eye for Google and Facebook and other U.S. giants, and it's true that these would result in hundreds of millions in compliance expenditures by Big Tech, but it's money
that Big Tech (and only Big Tech) can afford to part with. Europe's much smaller Internet companies need not apply.
It's not just Europeans who lose when the EU sells America's tech giants the right to permanently rule the Internet: it's everyone, because Europe's tech companies, co-operatives, charities, and individual technologists have the potential to make
everyone's Internet experience better. The U.S. may have a monopoly on today's Internet, but it doesn't have a monopoly on good ideas about how to improve tomorrow's net.
The global Internet means that we have friends and colleagues and family all over the world. No matter where you are in the world today, please take ten minutes to get in touch with two friends in the EU, send them this article, and then ask them to get in touch with their MEPs by visiting Save Your Internet.
There's only one Internet and we all live on it. Europeans rose up to kill ACTA, the last brutal assault on Internet freedom, helping Americans fight our own government's short-sighted foolishness; now the rest of the world can return the favor to our friends in the EU.
Following an investigation, Ofcom has revoked the broadcast licence held by Ausaf UK Limited for Ausaf TV, a channel which was intended to serve the Pakistani community in the UK, but had not started broadcasting at the time of Ofcom's decision.
In line with our ongoing duty under the Broadcasting Act 1990, Ofcom opened an investigation into the licensee about whether those in control were 'fit and proper' to hold the licence.
After carefully considering all available evidence, including oral representations made by the licensee, our investigation concluded that:
the individual in control of Ausaf UK Limited had close links to the Pakistan and UK editions of the Daily Ausaf newspaper, in which articles were published which we considered amounted to hate speech and incitement to crime/terrorist actions;
the licensee provided misleading or false information about the links between the Daily Ausaf and Ausaf UK Limited during the course of our investigation; and
there is a material risk that the licensee could breach our broadcasting rules; for example, by airing similar content to that published in the Daily Ausaf on Ausaf TV, which would be harmful to viewers if the licensee were permitted to broadcast; and
this brings into question public confidence in the regulatory activity if Ofcom were to remain satisfied that the licensee was fit and proper to broadcast.
In light of these serious findings, we are no longer satisfied that those in control of Ausaf UK Limited are fit and proper to hold a broadcast licence. We have therefore revoked the licence.
The channel had not started broadcasting, and it will now be prevented from doing so.
A number of TV broadcasters, mobile network and internet service providers have urged the UK government to introduce a new internet censor of social media companies. In a letter to The Sunday Telegraph, executives from the BBC, ITV and Channel 4,
as well as Sky, BT and TalkTalk, called for a new censor to help tackle fake news, child exploitation, harassment and other growing issues online. The letter said:
We do not think it is realistic or appropriate to expect internet and social media companies to make all the judgment calls about what content is and is not acceptable, without any independent oversight.
There is an urgent need for independent scrutiny of the decisions taken, and greater transparency.
This is not about censoring the internet: [ ...BUT... ] it is about making the most popular internet platforms safer, by ensuring there is accountability and transparency over the decisions these private companies are already taking.
The UK government is aware of the problems on Facebook, Twitter, and other social media platforms. Last October, it introduced an Internet Safety Green Paper as part of its digital charter manifesto pledge. Following a consultation period, then digital secretary Matt Hancock (he's now the health secretary) said a white paper would be introduced later in 2018.
And in a comment suggesting that maybe the call is more about righting market imbalances than concern over societal problems, the letter noted that its signatories all pay high and fair levels of tax. The letter also notes that
broadcasters and telcos are held to account by Ofcom, while social media firms are not, which again gives the internet companies an edge in the market.
Back in 2001, the European Parliament came together to pass regulations and set up copyright laws for the internet, a technology that was just finding its footing after the dot com boom and bust. Wikipedia had just been born, and there were 29
million websites. No one could imagine the future of this rapidly growing ecosystem -- and today, the internet is even more complex. Over a billion websites, countless mobile apps, and billions of additional users. We are more interconnected than
ever. We are more global than ever. But 17 years later, the laws that protect this content and its creators have not kept up with the exponential growth and evolution of the web.
Next week, the European Parliament will decide how information online is shared in a vote that will significantly affect how we interact in our increasingly connected, digital world. We are in the last few moments of what could be our last
opportunity to define what the internet looks like in the future. The next wave of proposed rules under consideration by the European Parliament will either permit more innovation and growth, or stifle the vibrant free web that has allowed
creativity, innovation, and collaboration to thrive. This is significant because copyright does not only affect books and music, it profoundly shapes how people communicate and create on the internet for years to come.
This is why we must remember the original objective for this update to the law: to make copyright rules that work for better access to a quickly-evolving, diverse, and open internet.
The very context in which copyright operates has changed completely. Consider Wikipedia, a platform which like much of the internet today, is made possible by people who act as consumers and creators. People read Wikipedia, but they also write
and edit articles, take photos for Wikimedia Commons, or contribute to other Wikimedia free knowledge projects. Content on Wikipedia is available under a free license for anyone to use, copy, or remix.
Every month, hundreds of thousands of volunteers make decisions about what content to include on Wikipedia, what constitutes a copyright violation, and when those decisions need to be revised . We like it this way -- it allows people, not
algorithms, to make decisions about what knowledge should be presented back to the rest of the world.
Changes to the EU Directive on Copyright in the Digital Single Market could have serious implications for Wikipedia and other independent and nonprofit websites like it.
The internet today is collaborative and open by nature. And that is why our representatives to the EU must institute policies that promote the free exchange of information online for everyone.
We urge EU representatives to support reform that adds critical protections for public domain works of art, history, and culture, and to limit new exclusive rights to existing works that are already free of copyright.
The world should be concerned about new proposals to introduce a system that would automatically filter information before it appears online. Through pre-filtering obligations or increased liability for user uploads, platforms would be forced to
create costly, often biased systems to automatically review and filter out potential copyright violations on their sites. We already know that these systems are historically faulty and often lead to false positives. For example, consider the
experience of a German professor who
repeatedly received copyright violation notices when using public domain music from Beethoven, Bartók, and Schubert in videos on YouTube.
The internet has already created alternative ways to manage these issues. For instance, Wikipedia contributors already work hard to catch and remove infringing content if it does appear. This system, which is largely driven by human efforts, is
very effective at preventing copyright infringement.
Much of the conversation surrounding EU copyright reform has been dominated by the market relationships between large rights holders and for-profit internet platforms. But this small minority does not reflect the breadth of websites and users on
the internet today. Wikipedians are motivated by a passion for information and a sense of community. We are entirely nonprofit, independent, and volunteer-driven. We urge MEPs to consider the needs of this silent majority online when designing
copyright policies that work for the entire internet.
As amendments to the draft for a new Copyright Directive are considered, we urge the European Parliament to create a copyright framework that reflects the evolution of how people use the internet today. We must remember the original problem
policymakers set out to solve: to bring copyright rules in line with a dramatically larger, more complex digital world and to remove cross-border barriers. We should remain true to the original vision for the internet -- to remain an open,
accessible space for all.
Met Police Commissioner Cressida Dick believes detectives should have access to material from social media companies within minutes. She said UK police forces had faced a very protracted procedure in such cases.
The call comes after a suspect in the murder of Lucy McHugh, 13, was jailed for withholding his Facebook password from police. Last week, Stephen Nicholson was jailed for 14 months having admitted failing to comply with an order under the
Regulation of Investigatory Powers Act requiring him to disclose a Facebook password.
Detectives investigating her murder say it is taking an inordinate amount of time to access evidence from Facebook.
Angus Crawford, BBC News Correspondent, explained:
Facebook is a US company and so has to abide by US laws on data protection and due process. This means they have no duty to hand any information over to a foreign police force.
Only a request via the US Department of Justice using something called the Mutual Legal Assistance Treaty will oblige disclosure, but this is cumbersome, expensive and can take months.
A spokeswoman for the social media company said Facebook is working closely with law enforcement and following well-established legal mechanisms. Facebook says it already has a team which works with law enforcement and they have been cooperating
with Hampshire Police on the Lucy McHugh case.
[Of course the police should get instant access to social media when pursuing people guilty of a serious crime. But of course they need to be denied the facility when pursuing innocent people being harassed for a trivial speech offence].
In July MEPs voted down plans to fast-track the Copyright Directive, derailing Article 13's plan to turn Internet platforms into copyright enforcers.
Yet the fight to stop Article 13's vision of the Internet - one where all speech is approved or rejected by an automated upload filter - is not over.
On 12 September MEPs will vote once again, but this time as-yet-unknown amendments will be added to the mix. Bad ideas like Article 13 - and perhaps worse - will be voted on individually, so it's not a simple up or down vote. To identify and
oppose bad amendments, MEPs must understand exactly why Article 13 threatens free speech.
Many MEPs are undecided. Please write to them now. You can use the points below to construct your own unique message. IF YOU'RE OUTSIDE THE UK use this tool instead:
Oppose changes to Internet platform liability. If platforms become liable for user content, they will have no choice but to scan all uploads with automated filters.
Say no to upload filters. Filters struggle to identify the vital legal exceptions to copyright that enable research, commentary, creative works, parody and more. Poor judgement means innocent speech gets blocked along with copyright violations.
Internet companies do not make good copyright enforcers. To avoid liability penalties, platforms will err on the side of caution and over-block.
Free speech takes precedence over copyright. Threatening free expression is way too high a price to pay for the sake of copyright enforcement.
General monitoring of all content is technically infeasible. No filter can possibly review every form of content covered in Article 13's extraordinarily wide mandate, which includes text, audio, video, images and software.
If you are part of a tech business, or a creator, like a musician, photographer, video editor or a writer, let your MEP know!
We need copyright reform that does not threaten free expression.
The controversial Copyright Directive is fast approaching another pivotal vote on 12 September. For the third time in half a year MEPs will decide whether Article 13 -
or something even worse - will usher in a new era, where all content is approved or rejected by automated gatekeepers.
Seen through an economic lens, the Directive's journey looks like a battle between rights holders and tech giants. Yet a growing chorus of ordinary citizens, Internet luminaries, human rights organisations and creatives have rightly expanded the
debate to encompass the missing human dimension.
Open Rights Group opposes Article 13 - or any new amendments proposing similar ideas - because it poses a real threat to the fundamental right to free speech online.
Free speech defenders claimed a victory over industry lobbyists this summer when MEPs rejected plans to fast-track the Directive and a lasting triumph is now in reach. UK residents are in an especially strong position to make a difference because
many of their MEPs remain undecided. Unlike some other EU states, voting patterns aren't falling strictly on party lines in the UK.
This time new amendments will be added, and the underlying principles of Article 13 will again face a vote. They include:
Changes to Internet platform liability
If Internet platforms become directly liable for user content, they will become de facto copyright enforcers. This will leave them little choice but to introduce general monitoring of all user content with automated filters. Companies are not fit
to police free speech. To avoid penalties they will err on the side of caution and over-block user content.
The implicit or explicit introduction of upload filters
Everything we know about automated filters shows
they struggle to comprehend context. Yet identifying the vital legal exceptions to copyright that enable research, commentary, creative works, parody and more requires a firm grasp of context. An algorithm's poor judgement will cause
innocent speech to be routinely blocked along with copyright violations.
The introduction of general monitoring
General monitoring of all user content is a step backwards for a free and open Internet. It is also technically infeasible to monitor every form of content covered in Article 13's extraordinarily wide mandate which includes text, audio, video,
images and software.
Outspoken Article 13 cheerleader Axel Voss MEP said "Censorship machines is not what we intend to implement and no one in the European Parliament wants that." Yet upload filters are exactly that. This is what happens when copyright reform is pursued with little consideration for human rights.
The proposals within Article 13 would change the Internet from a place of free and creative sharing to one where anything can be removed without warning, by computers. This is far too high a price to pay for copyright enforcement. We need a copyright reform which does not sacrifice fundamental human and digital rights.
The Five Eyes governments of the UK, US, Canada, Australia and New Zealand have demanded that the tech industry voluntarily create backdoor access to its systems, or be compelled to by law if it doesn't.
The move is a final warning to platform holders such as WhatsApp, Apple and Google who deploy encryption to guarantee user privacy on their services. A statement by the Five Eyes governments says:
Encryption is vital to the digital economy and a secure cyberspace, and to the protection of personal, commercial and government information ... However, the increasing use and sophistication of certain encryption designs present challenges for nations in combating serious crimes and threats to national and global security.
Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate
investigations and avoid detection and prosecution.
If the industry does not voluntarily establish lawful access solutions to their products, the statement continued, we may pursue technological, enforcement, legislative or other measures to guarantee entry.
[The trouble with discriminatory laws such as this is that they encourage hatred of others rather than defusing the issue. Identity politics is very aggressive. Lynch mobs gather to push for the most severe punishments for the most trivial of transgressions. Police and the prosecuting authorities always seem to side with the complainant, and the resulting injustice is noted by more or less everyone in society. It succeeds only in winding everybody up and chipping away at any remaining respect for the way that the authorities run our lives. In an equal society everybody should have exactly the same rights to be protected from the ill intent of others].
The Labour MP Stella Creasy has put forward an amendment to the upskirting bill, due to be debated in the Commons this Wednesday, that would add misogyny as an aggravating factor in England and Wales. This would enable courts to consider it when
sentencing an offender and require police forces to record it.
Creasy hopes this will be the first step towards recognising misogyny as a hate crime. Creasy said:
Upskirting is a classic example of a crime in which misogyny is motivating the offence. We protect women in the workplace from discrimination on grounds of their sex, but not in the courtroom -- with upskirting, street harassment, sexually based
violence and abuse a part of life for so many it's time to learn from where misogyny has been treated as a form of hate crime and end this gap.
The Guardian understands that the Law Commission, which has called for a fundamental review of all hate crime legislation, supports the spirit of Creasy's amendment.
In Scotland, the Holyrood government will shortly launch a consultation on the reform of all aspects of hate crime legislation, after an independent report recommended including gender, as well as age, as a hate crime in law. Although the National Police Chiefs' Council rejected a proposal to extend the policy nationwide in July, it has set up a working group to examine the issue.
An era of adult television has come to an end, according to a story in the Los Angeles Times, which reported that the Time Warner-owned pay cable network HBO has spent the summer, quietly and without fanfare, removing its once-prodigious library of erotic documentaries and entertainment programs from the network and the HBO streaming platforms, HBO Go and HBO Now.
Since the 1990s, HBO has produced and broadcast such series as the influential Real Sex, the Las Vegas brothel reality series Cathouse, and recurring instructional sex specials hosted by adult performer Katie Morgan.
But HBO has not produced new adult late night programs for several years, and now the network will no longer offer repeats or archived shows from its adult category either.
While HBO's new owner, the telecom giant AT&T, informed HBO employees earlier this year that it planned big changes for the network, the elimination of HBO's erotic fare, network execs told the Times, was not mandated by AT&T and in fact began well before the telecom conglomerate took over. The reason that HBO is ditching its late night lineup, according to what one spokesperson told the Times, is simply that HBO viewers have lost interest, most likely due to the proliferation of adult content online.
US moralists always want more. The Parents Television Council writes:
The Parents Television Council applauds HBO and its corporate parent, AT&T, for removing the pornographic content from its platform -- but urges AT&T to make the same move by removing X-rated pornographic content from DirecTV. PTC
President Tim Winter whinged:
AT&T's HBO made a wise decision to remove pornographic content, even citing that 'there wasn't strong demand for this kind of adult programming.' While that is a huge positive step forward, the same logic should also extend to AT&T-owned
DirecTV, which still offers hardcore pornographic content to subscribers.
How can a company that says it is built on responsibility continue to deliver and profit from pornography? How much does DirecTV porn really increase the earnings per share? Is this a reasonable tradeoff for a so-called responsible company?
Given that AT&T's CEO Randall Stephenson was the 36th National President of the Boy Scouts, it's hard to reconcile that role with the DirecTV pornographic lineup. Are the explicit pornographic titles on DirecTV about grandmothers, mothers,
or stepsisters what he wants his scouts to be thinking of?
French lawmakers have voted to outlaw catcalls as part of repressive legislation on sexual misconduct. As of next month, catcalling on streets and public transportation can result in on-the-spot fines of up to €750, with more for increasingly aggressive and physical behavior. French junior minister for gender equality Marlène Schiappa said when the law was passed by France's highest legal authority, the Conseil d'État, that harassment in the street has previously not been punished. From now on, it will be.
Included in the bill are new laws concerning consent for victims of sexual violence under 15, and an extension for underage victims to file complaints to 30 years after they turn 18.
Dutch right-wing leader Geert Wilders has cancelled his Prophet Mohammed cartoon contest after receiving credible death threats. He said in a statement:
I have decided to cancel the competition to avoid the risk of making people victims of Islamist violence. I don't want Muslims to use the cartoon competition as an excuse for Islamist violence.
He added that those against the cartoon event see not only me, but the entire Netherlands as a target.
It is surely a retreat from the western ideal of free speech, but generally most are happy that a trigger point for violence has been removed. Tom Rogan commented in the Washington Examiner:
Unfortunately, this ideal is increasingly under threat in the West. And while it's true that Islamic extremists continue to deter and intimidate contravening voices in the name of tolerance, the major threat to Western free speech takes root in
an alignment of leftist and centrist voices. These voices and their lobbying efforts in government and in the private sector put forward the notion that emotion is more important than ideas. And the ideal that some ears are more equal than
others. Here, I speak of the politicians who censor speech on the great commons of social media, of the leftist celebrities who assert that it is good to punch Nazis, and of the academics who believe historic wrongs are resolved by histrionics.
It is these Western authoritarians veiled in liberal cloaks that Wilders has yielded to. But it is not just these. By now presenting himself as a martyr for free expression, Wilders will be able to attract more moderate Dutch voters to his
political cause. And considering that Wilders controls the second-largest political party in the Dutch parliament, his political potential here is obvious. We should not wish for that outcome. Wilders, after all, is a prejudiced man who seeks
his own aggrandisement before all else.
California's net neutrality bill, SB 822, has received a majority of votes in the Senate and is heading to the governor's desk. In this fight, ISPs with millions of dollars to spend lost to the voice of the majority of Americans who support net
neutrality. This is a victory that can be replicated.
ISPs like Verizon, AT&T, and Comcast hated this bill. SB 822 bans blocking, throttling, and paid prioritization, classic ways that companies have violated net neutrality principles. It also incorporates much of what the FCC learned and
incorporated into the 2015 Open Internet Order, preventing new assaults on the free and open Internet. This includes making sure companies can't circumvent net neutrality at the point of interconnection within the state of California. It also
prevents companies from using zero rating--the practice of not counting certain apps or services against a data limit--in a discriminatory way. That is to say, there could be a plan where all media streaming services were zero-rated, but not one where only a single service was: one that had either paid for the privilege, or one owned by the service provider. In that respect, it's a practice much like discriminatory paid prioritization, where ISPs create fast lanes for those who can pay or for other companies they own.
ISPs and their surrogates waged a war of misinformation on this bill. They argued that net neutrality made it impossible to invest in expanding and upgrading their service, even though they make plenty of money. Lobbying groups sent out robocalls that didn't mention net neutrality--which remains overwhelmingly popular--but merely mentioned the bill's number and claimed, with no evidence, that it would force ISPs to raise their prices by $30. And they argued against the zero-rating provision when we know those practices disproportionately affect lower-income consumers.
There was a brief moment in this fight when it looked like the ISPs had won. Amendments offered in the Assembly Committee on Communication and Conveyance after the bill had passed the California Senate mostly intact gutted the bill. But you made
your voices heard again and again until the bill's strength was restored and we turned opponents into supporters in the legislature.
In the middle of all of this, the story broke that Verizon had throttled the service of a fire department in California during a wildfire. During the largest wildfire in California history, the Santa Clara fire department found that its unlimited
data plan was being throttled by Verizon and, when contacted, the ISP told the fire department they needed to pay more for a better plan. Under the 2015 Open Internet Order, the FCC would have been able to investigate Verizon's actions. But
since that order's been repealed, Verizon might escape meaningful punishment for its actions.
California's fight is a microcosm of the nation's. Net neutrality is popular across the country. The same large ISPs that led the fight against it in California are the ones that serve the rest of the country, a majority of which don't have a choice of provider. The arguments that they made in California are the same ones they made to the FCC to get the Open Internet Order repealed. The only thing preventing what happened to California's firefighters from happening elsewhere is
Verizon saying it won't.
We need net neutrality protections at as many levels as we can get them. And Congress can still vote to restore the FCC's 2015 Open Internet Order. In fact, the Senate already did. So contact your member of the House of Representatives and
tell them to vote for the Congressional Review Act and save national net neutrality protections.
More than 11,000 complaints have been made to TV censor Ofcom about the Celebrity Big Brother punching episode. In Thursday's episode, Ryan Thomas was given a warning for punching fellow housemate Roxanne Pallett.
The former Corrie star said there was no anger or malice in what happened after Roxanne complained to the show's producers about his behaviour. Big Brother bosses issued him with a formal warning for physical contact.
Ofcom said it had received 11,215 complaints about the episode, saying:
We are assessing these complaints against our broadcasting rules, before deciding whether or not to investigate.