Melon Farmers Original Version

Censor Watch


2018: November


 

Censorship machines...

Yes, the EU's New #CopyrightDirective is All About Filters. By Cory Doctorow


Link Here 30th November 2018
Full story: Copyright in the EU...Copyright law for Europe

When the EU started planning its new Copyright Directive (the "Copyright in the Digital Single Market Directive"), a group of powerful entertainment industry lobbyists pushed a terrible idea: a mandate that all online platforms would have to create crowdsourced databases of "copyrighted materials" and then block users from posting anything that matched the contents of those databases.

At the time, we, along with academics and technologists, explained why this would undermine the Internet, even as it would prove unworkable. The filters would be incredibly expensive to create, would erroneously block whole libraries' worth of legitimate materials, would allow libraries' worth more of infringing materials to slip through, and would not be capable of sorting out "fair dealing" uses of copyrighted works from infringing ones.

The Commission nonetheless included it in their original draft. Two years later, the European Parliament went back and forth on whether to keep the loosely described filters, with German MEP Axel Voss finally squeezing a narrow victory in his own committee and then in an emergency vote of the whole Parliament. Now, after a lot of politicking and lobbying, Article 13 is potentially only a few weeks away from becoming officially an EU directive, controlling the internet access of more than 500,000,000 Europeans.

The proponents of Article 13 have a problem, though: filters don't work, they cost a lot, they underblock, they overblock, they are ripe for abuse (basically, all the objections the Commission's experts raised the first time around). So to keep Article 13 alive, they've spun, distorted and obfuscated its intention, and now they can be found in the halls of power, proclaiming to the politicians who'll get the final vote that "Article 13 does not mean copyright filters."

But it does.

Here's a list of Frequently Obfuscated Questions and our answers. We think that after you've read them, you'll agree: Article 13 is about filters, can only be about filters, and will result in filters.

  • Article 13 is about filtering, not "just" liability

    Today, most of the world (including the EU) handles copyright infringement with some sort of takedown process. If you provide the public with a place to publish their thoughts, photos, videos, songs, code, and other copyrightable works, you don't have to review everything they post (for example, no lawyer has to watch 300 hours of video every minute at YouTube before it goes live). Instead, you allow rightsholders to notify you when they believe their copyrights have been violated and then you are expected to speedily remove the infringement. If you don't, you might still not be liable for your users' infringement, but you lose access to the quick and easy 'safe harbor' provided by law in the event that you are named as part of any copyright lawsuit (and since the average internet company has a lot more money than the average internet user, chances are you will be named in that suit).

    What you're not expected to be is the copyright police. And in fact, the EU has a specific Europe-wide law that stops member states from forcing Internet services to play this role: the same rule that defines the limits of their liability, the E-Commerce Directive, in the very next article, prohibits a "general obligation to monitor." That's to stop countries from saying "you should know that your users are going to break some law, some time, so you should actively be checking on them all the time -- and if you don't, you're an accomplice to their crimes." The original version of Article 13 tried to break this deal by re-writing that second part. Instead of a prohibition on monitoring, it required it, in the form of a mandatory filter.

    When the European Parliament rebelled against that language, it was because millions of Europeans had warned them of the dangers of copyright filters. To bypass this outrage, Axel Voss proposed an amendment to the Article that removed the explicit mention of filters, but rewrote the other part of the E-Commerce Directive. By claiming this "removed the filters", he got his amendment passed -- including by gaining votes from MEPs who thought they were striking down Article 13. Voss's rewrite says that sharing sites are liable unless they take steps to stop that content before it goes online.

    So yes, this is about liability, but it's also about filtering. What happens if you strip liability protections from the Internet? It means that services are now legally responsible for everything on their site. Consider a photo-sharing site where millions of photos are posted every hour. There are not enough lawyers -- let alone copyright lawyers -- let alone copyright lawyers who specialise in photography -- alive today to review all those photos before they are permitted to appear online.

    Add to that all the specialists who'd have to review every tweet, every video, every Facebook post, every blog post, every game mod and livestream. It takes a fraction of a second to take a photograph, but it might take hours or even days to ensure that everything the photo captures is either in the public domain, properly licensed, or fair dealing. Every photo represents as little as an instant's work, but making it comply with Article 13 represents as much as several weeks' work. There is no way that Article 13's purpose can be satisfied with human labour.

    It's strictly true that Axel Voss's version of Article 13 doesn't mandate filters -- but it does create a liability system that can only be satisfied with filters.

    But there's more: Voss's stripping of liability protections has Big Tech like YouTube scared, because if the filters aren't perfect, they will be potentially liable for any infringement that gets past them -- and given their billions, that means anyone and everyone might want to get a piece of them. So now, YouTube has started lobbying for the original text, copyright filters and all. That text is still on the table, because the trilogue uses both Voss' text (liability to get filters) and member states' proposal (all filters, all the time) as the basis for the negotiation.

  • Most online platforms cannot have lawyers review all the content they make available

    The only online services that can have lawyers review their content are services for delivering relatively small libraries of entertainment content, not the general-purpose speech platforms that make the Internet unique. The Internet isn't primarily used for entertainment (though if you're in the entertainment industry, it might seem that way): it is a digital nervous system that stitches together the whole world of 21st Century human endeavor. As the UK Champion for Digital Inclusion discovered when she commissioned a study of the impact of Internet access on personal life, people use the Internet to do everything, and people with Internet access experience positive changes across their lives: in education, political and civic engagement, health, connections with family, employment, etc.

    The job we ask, say, iTunes and Netflix to do is a much smaller job than we ask the online companies to do. Users of online platforms do sometimes post and seek out entertainment experiences on them, but as a subset of doing everything else: falling in love, getting and keeping a job, attaining an education, treating chronic illnesses, staying in touch with their families, and more. iTunes and Netflix can pay lawyers to check all the entertainment products they make available because that's a fraction of a slice of a crumb of all the material that passes through the online platforms. That system would collapse the instant you tried to scale it up to manage all the things that the world's Internet users say to each other in public.

  • It's impractical for users to indemnify the platforms

    Some Article 13 proponents say that online companies could substitute click-through agreements for filters, getting users to pay them back for any damages the platform has to pay out in lawsuits. They're wrong. Here's why.

    Imagine that every time you sent a tweet, you had to click a box that said, "I promise that this doesn't infringe copyright and I will pay Twitter back if they get sued for this." First of all, this assumes a legal regime that lets ordinary Internet users take on serious liability in a click-through agreement, which would be very dangerous given that people do not have enough hours in the day to read all of the supposed 'agreements' we are subjected to by our technology.

    Some of us might take these agreements seriously and double-triple check everything we posted to Twitter but millions more wouldn't, and they would generate billions of tweets, and every one of those tweets would represent a potential lawsuit.

    For Twitter to survive those lawsuits, it would have to ensure that it knew the true identity of every Twitter user (and how to reach that person) so that it could sue them to recover the copyright damages they'd agreed to pay. Twitter would then have to sue those users to get its money back. Assuming that the user had enough money to pay for Twitter's legal fees and the fines it had already paid, Twitter might be made whole... eventually. But for this to work, Twitter would have to hire every contract lawyer alive today to chase its users and collect from them. This is no more sustainable than hiring every copyright lawyer alive today to check every tweet before it is published.

  • Small tech companies would be harmed even more than large ones

    It's true that the Directive exempts "Microenterprises and small-sized enterprises" from Article 13, but that doesn't mean that they're safe. The instant a company crosses the threshold from "small" to "not-small" (which is still a lot smaller than Google or Facebook), it has to implement Article 13's filters. That's a multi-hundred-million-dollar tax on growth, all but ensuring that the small Made-in-the-EU competitors to American Big Tech firms will never grow to challenge them. Plus, those exceptions are controversial in the Trilogue, and may disappear after yet more rightsholder lobbying.

  • Existing filter technologies are a disaster for speech and innovation

    ContentID is YouTube's proprietary copyright filter. It works by allowing a small, trusted cadre of rightsholders to claim works as their own copyright, and limits users' ability to post those works according to the rightsholders' wishes, which are more restrictive than what the law's user protections would allow. ContentID then compares the soundtrack (but not the video component) of any user uploads to the database to see whether it is a match.

    Everyone hates ContentID. Universal and the other big rightsholders complain loudly and frequently that ContentID is too easy for infringers to bypass. YouTube users point out that ContentID blocks all kinds of legit material, including silence, birdsong, and music uploaded by the actual artist for distribution on YouTube. In many cases, this isn't a 'mistake,' in the sense that Google has agreed to let the big rightsholders block or monetize videos that do not infringe any copyright, but instead make a fair use of copyrighted material.

    ContentID does a small job, poorly: filtering the soundtracks of videos to check for matches with a database populated by a small, trusted group. No one (who understands technology) seriously believes that it will scale up to blocking everything that anyone claims as a copyrighted work (without having to show any proof of that claim or even identify themselves!), including videos, stills, text, and more. A simplified sketch of this style of fingerprint matching appears after this list.

  • Online platforms aren't in the entertainment business

    The online companies most impacted by Article 13 are platforms for general-purpose communications in every realm of human endeavor, and if we try to regulate them like a cable operator or a music store, that's what they will become.

  • The Directive does not adequately protect fair dealing and due process

    Some drafts of the Directive do say that EU nations should have "effective and expeditious complaints and redress mechanisms that are available to users" for "unjustified removals of their content. Any complaint filed under such mechanisms shall be processed without undue delay and be subject to human review. Right holders shall reasonably justify their decisions to avoid arbitrary dismissal of complaints."

    What's more, "Member States shall also ensure that users have access to an independent body for the resolution of disputes as well as to a court or another relevant judicial authority to assert the use of an exception or limitation to copyright rules."

    On their face, these look like very good news! But again, it's hard (impossible) to see how these could work at Internet scale. One of EFF's clients had to spend ten years in court when a major record label insisted -- after human review, albeit a cursory one -- that the few seconds' worth of tinny background music in a video of her toddler dancing in her kitchen infringed copyright. But with Article 13's filters, there are no humans in the loop: the filters will result in millions of takedowns, and each one of these will have to receive an "expeditious" review. Once again, we're back to hiring all the lawyers now alive -- or possibly, all the lawyers that have ever lived and ever will live -- to check the judgments of an unaccountable black box descended from a system that thinks that birdsong and silence are copyright infringements.

    It's pretty clear the Directive's authors are not thinking this stuff through. For example, some proposals include privacy rules: "the cooperation shall not lead to any identification of individual users nor the processing of their personal data." Which is great: but how are you supposed to prove that you created the copyrighted work you just posted without disclosing your identity? This could not be more nonsensical if it said, "All tables should weigh at least five tonnes and also be easy to lift with one hand."

  • The speech of ordinary Internet users matters

    Eventually, arguments about Article 13 end up here: "Article 13 means filters, sure. Yeah, I guess the checks and balances won't scale. OK, I guess filters will catch a lot of legit material. But so what? Why should I have to tolerate copyright infringement just because you can't do the impossible? Why are the world's cat videos more important than my creative labour?"

    One thing about this argument: at least it's honest. Article 13 pits the free speech rights of every Internet user against a speculative theory of income maximisation for creators and the entertainment companies they ally themselves with: that filters will create revenue for them.

    It's a pretty speculative bet. If we really want Google and the rest to send more money to creators, we should create a Directive that fixes a higher price through collective licensing.

    But let's take a moment here and reflect on what "cat videos" really stand in for here. The personal conversations of 500 million Europeans and 2 billion global Internet users matter: they are the social, familial, political and educational discourse of a planet and a species. They have worth, and thankfully it's not a matter of choosing between the entertainment industry and all of that -- both can peacefully co-exist, but it's not a good look for arts groups to advocate that everyone else shut up and passively consume entertainment product as a way of maximising their profits.
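To make the matching approach concrete, here is a minimal, hypothetical sketch in Python of the kind of claims-database filter described above. It is not ContentID or any real filter's code: production systems use perceptual audio fingerprints rather than exact hashes, and the chunk size, class names and sample data below are illustrative assumptions only.

```python
# A toy "claims database" upload filter -- an illustrative sketch only.
# It is not ContentID: real systems use perceptual audio fingerprints,
# but the register-then-block logic is the part this example shows.
import hashlib

CHUNK = 64  # bytes per fingerprinted chunk; an arbitrary, assumed value


def chunk_hashes(data: bytes, step: int) -> set[str]:
    """Hash every CHUNK-byte window of the data, starting each `step` bytes."""
    return {
        hashlib.sha256(data[i:i + CHUNK]).hexdigest()
        for i in range(0, len(data) - CHUNK + 1, step)
    }


class ClaimsDatabase:
    """Fingerprints registered by anyone who claims a work; no proof is required."""

    def __init__(self) -> None:
        self._claims: dict[str, str] = {}  # fingerprint -> claimant

    def register(self, claimant: str, work: bytes) -> None:
        for fp in chunk_hashes(work, step=CHUNK):
            self._claims[fp] = claimant

    def matches(self, upload: bytes) -> set[str]:
        """Return claimants whose registered chunks appear anywhere in the upload."""
        return {
            self._claims[fp]
            for fp in chunk_hashes(upload, step=1)  # slide over every offset
            if fp in self._claims
        }


if __name__ == "__main__":
    db = ClaimsDatabase()
    db.register("Big Label", b"la-la-la" * 100)  # a claimed recording
    upload = b"toddler dancing in the kitchen " + b"la-la-la" * 100
    hits = db.matches(upload)
    # The filter cannot tell fair dealing from infringement: any match blocks.
    if hits:
        print("blocked; claimed by:", ", ".join(sorted(hits)))
    else:
        print("allowed")
```

Even this toy version exposes the structural problem the article describes: anything that quotes a claimed work is flagged, the database trusts whoever registers a claim, and nothing in the matching step can recognise fair dealing, a licence, or a false claim -- all of that has to be argued about after the block.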

 

 

Extract: Repression in anybody's book...

10 years in Chinese prison for writing a homoerotic novel


Link Here 30th November 2018

In an assault on freedom of expression, a court in China sentenced a successful novelist, Ms. Liu, to 10 years in prison on October 31 for including explicit homoerotic content in her work. The charge against her was making and selling obscene material for profit. Information about the case has just recently been circulated online, generating a widespread outcry on social media against censorship as well as the disproportionate and excessive severity of her sentence.

The writer, who uses the pen name Tianyi, was arrested in 2017, after the publication of her novel Occupy. Pornography is illegal in China. The 1997 penal code forbids depicting sexual acts except for medical or artistic purposes. According to police in Anhui Province, in eastern China, the book described obscene behavior between males, including violence, abuse, and humiliation.

...See the full article from hrw.org

 

 

Website blocking 2.0...

India ups the ante with 12500 websites blocked to protect a local blockbuster movie


Link Here 30th November 2018
The Madras High Court has handed down one of the most aggressive site-blocking orders granted anywhere in the world. Following an application by Lyca Productions, more than 12,500 sites will be preemptively blocked by 37 Indian ISPs to prevent 2.0 - India's most expensive film ever - being leaked following its premiere.

What we're looking at here is a preemptive blocking order of a truly huge scale against sites that have not yet made the movie available and may never do so.

In the meantime, however, a valuable lesson about site-blocking is already upon us. Within hours of the blocks being handed down, a copy of 2.0 appeared online and is now available via various torrent and streaming sites labeled as a 1080p PreDVDRip. Forums reviewed by TF suggest users aren't having a problem obtaining it.

With a reported budget of US$76 million, 2.0 is the most expensive Indian film. The sci-fi flick is attracting huge interest and at one stage it was reported that Arnold Schwarzenegger had been approached to play a leading role in the flagship production.

 

 

The House that Jack Built...

The US film censor is not impressed by a special one night screening of the uncut version


Link Here 29th November 2018
The House That Jack Built is a 2018 Denmark / France / Germany / Sweden horror thriller by Lars von Trier.
Starring Matt Dillon, Bruno Ganz and Uma Thurman. BBFC link IMDb
Lars von Trier's upcoming drama follows the highly intelligent Jack (Matt Dillon) over a span of 12 years and introduces the murders that define Jack's development as a serial killer. We experience the story from Jack's point of view, while he postulates each murder is an artwork in itself. As the inevitable police intervention is drawing nearer, he is taking greater and greater risks in his attempt to create the ultimate artwork.

The MPAA is going after distributor IFC over a one-day special screening of the Director's Cut of The House That Jack Built.

Ahead of the general release of the cut R rated version of Lars von Trier's new film on December 14, the Director's Cut of the film played select theaters, for one night only, and it looks like those screenings have landed IFC Films in trouble with the US film censors of the MPAA.

The MPAA has rules allowing only one version of a film to be shown in cinemas at a time. Ratings can in fact be changed but only after a certain time has elapsed, and with the previous rating being revoked.

As reported by Deadline, IFC now faces potential sanctions over the screenings. The MPAA said in a statement that they have:

Communicated to the distributor, IFC Films, that the screening of an unrated version of the film in such close proximity to the release of the rated version -- without obtaining a waiver -- is in violation of the rating system's rules. The effectiveness of the MPAA ratings depends on our ability to maintain the trust and confidence of American parents. That's why the rules clearly outline the proper use of the ratings. Failure to comply with the rules can create confusion among parents and undermine the rating system -- and may result in the imposition of sanctions against the film's submitter.

A hearing in the very near future will allow IFC to plead their case, and it's possible that the MPAA could revoke the rating they had issued to the film.

 

 

California Is Still Trying to Gag IMDb...

The EFF is opposing the censorship of film stars ages


Link Here 29th November 2018

California is still trying to gag websites from sharing true, publicly available, newsworthy information about actors. While this effort is aimed at the admirable goal of fighting age discrimination in Hollywood, the law unconstitutionally punishes publishers of truthful, newsworthy information and denies the public important information it needs to fully understand the very problem the state is trying to address. So we have once again filed a friend of the court brief opposing that effort.

The case, IMDb v. Becerra, challenges the constitutionality of California Civil Code section 1798.83.5, which requires "commercial online entertainment employment services providers" to remove an actor's date of birth or other age information from their websites upon request. The purported purpose of the law is to prevent age discrimination by the entertainment industry. The law covers any "provider" that "owns, licenses, or otherwise possesses computerized information, including, but not limited to, age and date of birth information, about individuals employed in the entertainment industry, including television, films, and video games, and that makes the information available to the public or potential employers." Under the law, IMDb.com, which meets this definition because of its IMDb Pro service, would be required to delete age information from all of its websites, not just its subscription service.

We filed a brief in the trial court in January 2017, and that court granted IMDb's motion for summary judgment, finding that the law was indeed unconstitutional. The state and the Screen Actors Guild, which intervened in the case to defend the law, appealed the district court's ruling to the U.S. Court of Appeals for the Ninth Circuit. We have now filed an amicus brief with that court. We were once again joined by First Amendment Coalition, Media Law Resource Center, Wikimedia Foundation, and Center for Democracy and Technology.

As we wrote in our brief, and as we and others urged the California legislature when it was considering the law, the law is clearly unconstitutional. The First Amendment provides very strong protection to publish truthful information about a matter of public interest. And the rule has extra force when the truthful information is contained in official governmental records, such as a local government's vital records, which contain dates of birth.

This rule, sometimes called the Daily Mail rule after the Supreme Court opinion from which it originates, is an extremely important free speech protection. It gives publishers the confidence to publish important information even when they know that others want it suppressed. The rule also supports the First Amendment rights of the public to receive newsworthy information.

Our brief emphasizes that although IMDb may have a financial interest in challenging the law, the public too has a strong interest in this information remaining available. Indeed, if age discrimination in Hollywood is really such a compelling issue, and EFF does not doubt that it is, hiding age information from the public makes it difficult for people to participate in the debate about alleged age discrimination in Hollywood, form their own opinions, and scrutinize their government's response to it.

 

 

Winnie whines...

Chinese Disneyland looks set to be purged of its Winnie the Pooh rides


Link Here 29th November 2018

One of the most beloved Disney characters seems to be on the way out at Shanghai Disneyland, and it all has to do with the Chinese president getting all wound up by a mild joke.

A report has come out that Winnie the Pooh and virtually all references to the bear may be taken out of the park in Shanghai. That would include removing all merchandise, having no character meet-and-greet, and reworking two rides to different themes. One of the rides is The Many Adventures of Winnie the Pooh dark ride and the other is a spinning teacup ride called Pooh's Hunny Pot Spin.

According to a report from Theme Park University, this would all be at the command of Chinese President Xi Jinping who took umbrage at the jokey cartoon comparison of Xi Jinping and Winnie the Pooh.

 

 

Offsite Article: Fake-news patrolling may be next big Internet boom...


Link Here 29th November 2018
Full story: Fake News...Declining respect for the authorities is blamed on 'fake' news
Fake news prevention presents huge business opportunity. By Ryan Holmes

See article from business.financialpost.com

 

 

Nice Baps...

Miserable gits on the local council want to close down cafe over its name


Link Here 28th November 2018
The owners of a cafe in Cornwall say they face closure over their business's slightly suggestive name.

Kevin and Laura Baker said they could lose everything after miserable councillors lodged objections to their roadside sandwich shop, Nice Baps. They opened the cafe five years ago in a shipping container revamped to look like a log cabin on a layby of the A39 outside Wadebridge.

But the Bakers said their livelihood was now in jeopardy after two parish councillors objected to them renewing their street trading licence. They said they were told by a licensing officer that the councillors took issue with the name and size of the cafe.

Egloshayle Parish Council denied that the cafe's name was a problem and claim that the concerns were about the business breaching its licensing agreement.

The Bakers will now face a Cornwall Council licensing hearing next month. They responded to the parish council claim:

If we had breached one of the conditions of our license, why have we not had a letter to say can you do something to change it?

 

 

Nice Baps...

But ASA's advert censors kindly averted their eyes and only noticed Kelly Brook's shoes


Link Here 28th November 2018
A TV and VOD ad for Skechers seen in August 2018:
  • a. The TV ad for Skechers seen on 30 August 2018 featured TV presenter Kelly Brook walking along a pavement wearing a jumper and jeans. She says, "I like my clothes form fitting, but not my shoes. That is why I wear Skechers knitted footwear. So I look and feel my best. People tend to notice things like that." In the same shot a man carrying a box of oranges was distracted by Kelly Brook and crashed into his colleague, causing them both to drop the contents of the boxes they were carrying. A male cyclist passed the TV presenter and looked back at her.
  • b. The VOD ad was the same as ad (a).

Three complainants questioned whether the ad was offensive because it objectified women.

ASA Assessment: Complaints not upheld

The ad featured the TV presenter, wearing a jumper with jeans and trainers, walking down a high street. The ASA considered that the outfit was not revealing and nothing about Kelly Brook's behaviour was sexualised or objectifying. We noted that the men in the ad did notice the presenter and that their reactions in doing so were exaggerated. However, we did not consider that the ad contained anything which pointed to an exploitative scenario or tone. We concluded that the ad did not objectify or degrade women, and therefore was not socially irresponsible and was unlikely to cause serious or widespread offence.

 

 

Just ragging...

Spanish comedian in court after he blows his nose on the country's flag


Link Here 28th November 2018
A Spanish comedian has been hauled in front of a judge for blowing his nose on the national flag on TV.

Dani Mateo could be prosecuted for the offence of public affront to the symbols of Spain, which comes with a fine, or for carrying out a hate crime, which carries a maximum sentence of four years in jail.

The complaint was brought by a trade union representing police officers. They protested over a sketch on satirical news show El Intermedio, broadcast on the La Sexta channel last month, in which Mateo joked that he was going to read the only text that genuinely creates consensus in Spain: the patient guidelines in a packet of Frenadol. But as he read the instructions on the cold remedy, he pretended to sneeze, and blew his nose on the Spanish flag. He joked:

Christ, sorry! I didn't want to offend anyone. I didn't want to offend Spaniards, nor the king, nor the Chinese who sell these rags. Not rags, I didn't mean rags.

 

 

Google's conscience...

Google employees write open letter opposing Google supporting the Chinese internet censorship regime


Link Here 28th November 2018

We are Google employees. Google must drop Dragonfly.

We are Google employees and we join Amnesty International in calling on Google to cancel project Dragonfly, Google's effort to create a censored search engine for the Chinese market that enables state surveillance.

We are among thousands of employees who have raised our voices for months. International human rights organizations and investigative reporters have also sounded the alarm, emphasizing serious human rights concerns and repeatedly calling on Google to cancel the project. So far, our leadership's response has been unsatisfactory.

Our opposition to Dragonfly is not about China: we object to technologies that aid the powerful in oppressing the vulnerable, wherever they may be. The Chinese government certainly isn't alone in its readiness to stifle freedom of expression, and to use surveillance to repress dissent. Dragonfly in China would establish a dangerous precedent at a volatile political moment, one that would make it harder for Google to deny other countries similar concessions.

Our company's decision comes as the Chinese government is openly expanding its surveillance powers and tools of population control. Many of these rely on advanced technologies, and combine online activity, personal records, and mass monitoring to track and profile citizens. Reports are already showing who bears the cost, including Uyghurs, women's rights advocates, and students. Providing the Chinese government with ready access to user data, as required by Chinese law, would make Google complicit in oppression and human rights abuses.

Dragonfly would also enable censorship and government-directed disinformation, and destabilize the ground truth on which popular deliberation and dissent rely. Given the Chinese government's reported suppression of dissident voices, such controls would likely be used to silence marginalized people, and favor information that promotes government interests.

Many of us accepted employment at Google with the company's values in mind, including its previous position on Chinese censorship and surveillance, and an understanding that Google was a company willing to place its values above its profits. After a year of disappointments including Project Maven, Dragonfly, and Google's support for abusers, we no longer believe this is the case. This is why we're taking a stand.

We join with Amnesty International in demanding that Google cancel Dragonfly. We also demand that leadership commit to transparency, clear communication, and real accountability. Google is too powerful not to be held accountable. We deserve to know what we're building and we deserve a say in these significant decisions.

Signed by 478 Google employees

 

 

Terms of censorship...

Parliamentary group defines islamophobia as a type of racism that targets muslim identity


Link Here 28th November 2018

The All Party Parliamentary Group (APPG) on British Muslims has made history by putting forward the first working definition of Islamophobia in the UK. Its report, Islamophobia Defined, states:

Islamophobia is rooted in racism and is a type of racism that targets expressions of Muslimness or perceived Muslimness.

The culmination of almost two years of consultation and evidence gathering, the definition takes into account the views of different organisations, politicians, faith leaders, academics and communities from across the country.

No doubt the term will still be used as an accusation intended to silence people from mentioning negative traits associated with islam.

 

 

Pirates and proxies...

Australian parliament passes new law with wide ranging blocking of copyright infringing websites


Link Here 28th November 2018
The Australian Parliament has passed controversial amendments to copyright law. There will now be a tightened site-blocking regime that will tackle mirrors and proxies more effectively, restrict the appearance of blocked sites in Google search, and introduce the possibility of blocking dual-use cyberlocker type sites.

Section 115a of Australia's Copyright Act allows copyright holders to apply for injunctions to force ISPs to prevent subscribers from accessing pirate sites. While rightsholders say that it's been effective to a point, they have lobbied hard for improvements.

The resulting Copyright Amendment (Online Infringement) Bill 2018 contained proposals to close the loopholes. After receiving endorsement from the Senate earlier this week, the legislation was today approved by Parliament.

Once the legislation comes into force, proxy and mirror sites that appear after an injunction against a pirate site has been granted can be blocked by ISPs without the parties having to return to court. Assurances have been given, however, that the court will retain some oversight.

Search engines, such as Google and Bing, will also be affected. Accused of providing backdoor access to sites that have already been blocked, search providers will now have to remove or demote links to overseas-based infringing sites, along with their proxies and mirrors.

The Australian Government will review the effectiveness of the new amendments in two years' time.

 

 

Scarred by scars...

BFI to refuse funds to films with facially scarred villains


Link Here 27th November 2018

Films that have facially-scarred villains will no longer receive funding from the British Film Institute, the organisation has announced, as part of a campaign to remove the stigma around disfigurement.

From Darth Vader to Scar in The Lion King, film-makers have long made a link between physical disfigurement and evil. The BFI is backing the #IAmNotYourVillain campaign launched by the group Changing Faces.

Ben Roberts, the BFI's deputy CEO, said:

Film is a catalyst for change and that is why we are committing to not having negative representations depicted through scars or facial difference in the films we fund.

This campaign speaks directly to the criteria in the BFI diversity standards, which call for meaningful representations on screen. We fully support Changing Faces's #IAmNotYourVillain campaign, and urge the rest of the film industry to do the same.

 

 

A Suicide Bomber Sits in the Library...

And people around him say shh! You can't write books about muslim terrorists!


Link Here 26th November 2018
A Suicide Bomber Sits in the Library, a new comic book, has been pulled from publication at the behest of the PC lynch mob.

The graphic novel, written by the Newbery medal-winning author Jack Gantos and illustrated by Sandman artist Dave McKean, is part of a series linked by 'sitting' and was due to be released in May 2019. The book is pretty much a morality tale. It follows a suicide bomber who changes his mind once he discovers the joys of reading.

A group called the Asian Author Alliance responded with a call for the book to be censored. In an open letter, 1000 signatories said the book was steeped in Islamophobia and profound ignorance. The letter continued:

The simple fact is that today, the biggest terrorist threat in the US is white supremacy. In publishing A Suicide Bomber Sits in the Library, [publisher] Abrams is wilfully fear-mongering and spreading harmful stereotypes in a failed attempt to show the power of story.

As criticism of the comic spread online, McKean, one of the UK's most acclaimed comics illustrators, responded, saying that the book was firmly on the side of literacy, empathy and non-violence. He tweeted:

The premise of the book is that a boy uses his mind and faith to decide for himself that violence is not the right course

I had just this anxiety when the script came to me. I just hoped we'd moved beyond each of us only being able to talk to and from our own little cultural bubble.

Abrams announced the cancellation of the comic saying in a statement that it had decided to withdraw it, with the support of McKean and Gantos:

While the intention of the book was to help broaden a discussion about the power of literature to change lives for the better, we recognise the harm and offence felt by many at a time when stereotypes breed division, rather than discourse.

 

 

Obituary: Bernardo Bertolucci...

Notable film director leaves a fine legacy of films involving a tussle with censors


Link Here 26th November 2018

Bernardo Bertolucci was an Italian film director and screenwriter, whose films include The Conformist, Last Tango in Paris, 1900, The Last Emperor (for which he won the Academy Award for Best Director and the Academy Award for Best Adapted Screenplay), The Sheltering Sky, Little Buddha, Stealing Beauty and The Dreamers. In recognition of his work, he was presented with the inaugural Honorary Palme d'Or Award at the opening ceremony of the 2011 Cannes Film Festival.

He died on the 26th November 2018. He leaves a fine legacy of clashes with film censors:

Last Tango in Paris is a 1972 France / Italy romance by Bernardo Bertolucci.
Starring Marlon Brando, Maria Schneider and Maria Michi. BBFC link IMDb
Cut by the BBFC for an X rated 1973 cinema release, although a few local authorities banned the film anyway. Later passed uncut for all releases since 1978. Also cut for an R rating in the US although the NC-17 rated version is uncut.

See further details at Melon Farmers Film Cuts: Last Tango in Paris

1900 is a 1976 Italy/France/West Germany drama by Bernardo Bertolucci.
With Robert De Niro, Gérard Depardieu and Dominique Sanda. BBFC link IMDb
Exists in 3 versions, a cut International Theatrical Version, the full length Italian Version and the Italian Version without a real sex masturbation scene.

See further details at Melon Farmers Film Cuts: 1900

The Dreamers is a 2003 UK / France / Italy romance by Bernardo Bertolucci.
Starring Michael Pitt, Louis Garrel and Eva Green. BBFC link IMDb
Uncut in the UK for 18 rated cinema and video release. Cut in the US for an MPAA R rating, but the Unrated version is uncut.

See further details at Melon Farmers Film Cuts: The Dreamers

 

 

Obituary: Nicholas Roeg...

Notable film director leaves a fine legacy of films involving a tussle with censors


Link Here 26th November 2018

Nicolas Roeg CBE BSC was an English film director and cinematographer, best known for directing Performance (1970), Walkabout (1971), Don't Look Now (1973), The Man Who Fell to Earth (1976), Bad Timing (1980), and The Witches (1990).

Making his directorial debut 23 years after his entry into the film business, Roeg quickly became known for an idiosyncratic visual and narrative style, characterized by the use of disjointed and disorienting editing. For this reason, he was considered a highly influential filmmaker, with such directors as Steven Soderbergh, Christopher Nolan, and Danny Boyle citing him as such.

In 1999, the British Film Institute acknowledged Roeg's importance in the British film industry by respectively naming Don't Look Now and Performance the 8th and 48th greatest British films of all time in its Top 100 British films poll.

He died on the 23rd November 2018. He leaves a fine legacy of clashes with film censors:

Performance is a 1970 UK crime drama by Donald Cammell & Nicolas Roeg.
With James Fox, Mick Jagger and Anita Pallenberg. BBFC link IMDb

History of censorship at the hands of the BBFC and its own distributor, Warner. BBFC cuts were later waived but the extensive footage removed by Warners is still missing.

See further details at Melon Farmers Film Cuts: Performance

Walkabout is a 1971 UK adventure drama by Nicolas Roeg.
With Jenny Agutter, David Gulpilil and Luc Roeg. BBFC link IMDb

Uncut by the BBFC albeit after an appeal against proposed cuts for the 1971 cinema release. There exists a shortened version in the US, although the Unrated version is uncut.

See further details at Melon Farmers Film Cuts: Walkabout

Bad Timing is a 1980 UK mystery thriller by Nicolas Roeg.
With Art Garfunkel, Theresa Russell and Harvey Keitel. BBFC link IMDb

Re-edited to separate a juxtaposed image of a child from an image of lovemaking. This cut has persisted into all subsequent releases.

The Witches is a 1990 UK / USA family horror fantasy by Nicolas Roeg.
Starring Anjelica Huston, Mai Zetterling and Jasen Fisher. BBFC link IMDb

Cut by the BBFC for a PG rating. There is an uncut European version. Uncut in the US but with an alternative happy ending.

See further details at Melon Farmers Film Cuts: The Witches

 

 

Fighting censorship...

Russia considers increasing fines as Google refuses to comply with Russia's list of banned websites


Link Here 26th November 2018
Full story: Internet Censorship in Russia...Russia and its repressive state control of media
Russia's state censors have formally accused Google of breaking the law by not removing links to websites that are banned in the country.

Roskomnadzor, the state communications censor, said in a statement that the company had not connected to a database of banned sources in the country, leaving it out of compliance.

The potential penalty that Google could face is currently 700,000 roubles, or about $10,000. But Reuters reports that the Russian government has been considering more drastic actions, including fining companies up to 1 percent of annual revenue for failing to comply with similar laws.

 

 

Offsite Article: Online porn filters will never work...


Link Here 26th November 2018
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Beyond the massive technical challenge, filters are a lazy alternative to effective sex education. By Lux Alptraum

See article from theverge.com

 

 

Protected categories...

Twitter outlaws misgendering or deadnaming of trans people


Link Here 25th November 2018
Full story: Twitter Censorship...Twitter offers country by country take downs
Deadnaming and misgendering could now get you a suspension from Twitter as it looks to shore up its safeguarding policy for people in the protected transgender category.

Twitter's recently updated censorship policy now reads:

Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone

We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.

According to the Oxford English Dictionary, misgendering means:

Refer to (someone, especially a transgender person) using a word, especially a pronoun or form of address, that does not correctly reflect the gender with which they identify.

According to thegayuk.com:

Deadnaming is when a person refers to someone by a previous name, it could be done with malice or by accident. It mostly affects transgender people who have changed their name during their transition.

 

 

Serious issues...

Ofcom investigates a complaint about a Chinese propaganda channel broadcasting a confession said to be extracted under duress.


Link Here 25th November 2018
A British corruption investigator has asked the UK's TV censor Ofcom to revoke Chinese state media's broadcast license for helping to stage his allegedly forced confession and subsequent jailing in China.

Peter Humphrey was arrested for his work in pursuing corruption in the pharmaceutical sector. He was sentenced to over two years in prison by a Shanghai court in 2014. He served his time and was then deported.

He has now submitted a complaint to Ofcom about China Central Television (CCTV) for its alleged role in the episode. He said that CCTV journalists cooperated with police to extract, record, make post-production and then broadcast his confession worldwide through its international propaganda channels.

Humphrey accuses Chinese authorities of drugging him and locking him in a chair inside a small metal cage to conduct the confession. His complaint added:

China Central Television (CCTV) journalists then aimed their cameras at me and recorded me reading out the answers already prepared for me by the police.

A spokesman for Ofcom confirmed it had received a complaint which we are assessing as a priority. Ofcom has previously taken action against the broadcast of 'confessions' extracted under duress.

 

 

Offsite Article: Oh shit!...


Link Here 25th November 2018
Now smart toilets will scan your deposits for drug use and illness

See article from telegraph.co.uk

 

 

South Wind...

The latest cinema release cut for category


Link Here 24th November 2018

South Wind (Juzni Vetar) is a 2018 Serbia crime film by Milos Avramovic.
Starring Milos Bikovic, Miodrag Radonjic and Dragan Bjelogrlic. BBFC link IMDb

UK: Passed 15 for strong violence, sex, sex references, drug misuse, very strong language after 43s of BBFC category cuts (129:43s):
  • 2018 cinema release
The BBFC commented:
  • Company chose to reduce a scene of sadistic bloody violence, featuring accompanying very strong language, in order to obtain a 15 classification.  An uncut 18 classification was available.

Juzni Vetar was earlier passed 18 uncut for strong bloody violence (130:26s) but the distributors preferred a cut 15 release.

Summary Notes

A story about Petar Maras, a Belgrade criminal in his late twenties, whose one reckless move causes an avalanche of events that will greatly affect the lives of those around him.

 

 

Another vice at the BBFC...

A new vice president and a schedule for a guidelines update


Link Here 24th November 2018
The BBFC reported that its vice president Alison will be standing down after 10 years at the BBFC. Murphy Cobbings is set to take over the role.

The BBFC also reported that its latest updated guidelines are expected to be published in early January, coming into effect six weeks later in February.

 

 

Offsite Article: The age of censorship...


Link Here 24th November 2018
Sony's Video Game Censorship Sets a Worrying Precedent

See article from gamerevolution.com

 

 

What about the fake promises?...

French Parliament passes law allowing the immediate censorship of anything claimed to be 'fake news' during elections


Link Here 23rd November 2018
Full story: Internet Censorship in France...Web blocking in the name of child protection
France's parliament has passed a new law empowering judges to order the immediate censorship of 'fake news' during election campaigns.

The law, conceived by President Emmanuel Macron, was rejected twice by the senate before being passed by the parliament on Tuesday. It is considered western Europe's first attempt to officially ban material claimed to be fake.

Candidates and political parties will now be able to appeal to a judge to censor information claimed to be false during the three months before an election.

The law also allows the CSA, the French national TV censor, to suspend television channels controlled by a foreign state or under the influence of that state if they deliberately disseminate information claimed to be false and likely to affect the ballot.

The law also states that users must be provided with information that is fair, clear and transparent on how their personal data is being used.

 

 

Updated: Rainbow 6 Siege...

Games developers announce they will remove sex and gambling references worldwide so as to comply with Chinese censorship requirements


Link Here 23rd November 2018
In order to prepare Rainbow 6 Siege for expansion into China, Ubisoft announced that it will be making some global censor cuts to the game's visuals to remove gore and references to sex and gambling.

In a blog post, Ubisoft explained:

A Single, Global Version

We want to explain why these changes are coming to the global version of the game, as opposed to branching and maintaining two parallel builds. We want to streamline our production time to increase efficiency

By maintaining a single build, we are able to reduce the duplication of work on the development side. This will allow us to be more agile as a development team, and address issues more quickly.

Ubisoft provided examples of their censorship:

  • Icons featuring knives become fists
  • Icons featuring skulls are replaced
  • Skulls in artwork are fleshed out into faces
  • Images of slot machines are removed
  • Blood spatters are removed from a Chinese landscape painting
  • Strip club neon nudity is removed

Update: Community pressure

23rd November 2018. See  article from polygon.com

Earlier this month, Ubisoft Montreal informed the Rainbow Six Siege community that all versions of the competitive first-person shooter would be censored to comply with Chinese regulations.

The community, in turn, informed Ubisoft Montreal that it was extremely upset by that decision.

Now, developers say they're changing course by reverting all aesthetic changes made to the game. Ubisoft Montreal said:

We have been following the conversation with our community closely over the past couple of weeks, alongside regular discussions with our internal Ubisoft team, and we want to ensure that the experience for all our players, especially those that have been with us from the beginning, remains as true to the original artistic intent as possible.

The next update, referred to as Year Three Season Four, will see the majority of these changes returned to their original look. The Moroccan-themed expansion will also feature three new characters and a new map. It's expected to launch in early December on PlayStation 4, Windows PC, and Xbox One.

 

 

Trans transgression...

Nude scene featuring a 15 year old in the movie Girl causes a little soul searching at Netflix


Link Here 23rd November 2018
Girl is a 2018 Belgium / Netherlands drama by Lukas Dhont.
Starring Victor Polster, Arieh Worthalter and Oliver Bodart. IMDb
 

Summary Notes

Lara is a 15-year-old who feels she is a girl, born in the body of a boy. She dreams of becoming a ballerina.

Lukas Dhont's feature directorial debut, Girl, wowed Cannes when it premiered in May, picking up a distribution deal with Netflix and four awards including the Fipresci Prize in the Un Certain Regard section.

Now the film, Belgium's official entry in the foreign-language Oscar race, is the subject of controversy following Dhont's comments in a European newspaper that Netflix had plans to edit out a scene with full frontal nudity of its star, then 15.

The director commented on the outcome of the debate in a statement to the Hollywood Reporter:

Regarding reports made this week in the Belgian media, we as filmmakers had some internal conversations with Netflix in which we discussed how some of the material in Girl could possibly be received outside of Europe. We were given the option to be able to edit the film, and it always was a dialogue in which the filmmakers had the strongest say.

The version of Girl that will be shown on Netflix will be the same version that premiered in Cannes, and in theaters in Belgium and other parts of the world.

 

 

Bankrupt censorship...

South Africa's advert censor is replaced by a new advert censor


Link Here 23rd November 2018
South Africa's advert censor, the Advertising Standards Authority, has gone bankrupt and is being replaced by another. The ASA went into liquidation at the end of September after years of mismanagement and alleged financial impropriety.

The Advertising Regulatory Board will take over as the advert censor with a new membership and staff structure. It will be headed by former ASA legal counsel Gail Schimmel and the organisation will be funded by the advertising industry.

The brand and marketing industry is being asked to rally around the new organisation. Two other industry bodies, The Association for Communication & Advertising (ACA) and the Interactive Advertising Bureau (IAB), have also lent their support to the new ARB.

 

 

Adult gaming website hacked...

Further demonstrating how dangerous it is for the government to demand that identity information is handed over before viewers can access the adult web


Link Here 21st November 2018
The website of an adult video game featuring sexualised animals has been hacked, with the information of nearly half a million subscribers stolen.

High Tail Hall is a customisable role-playing game, which features what the website describes as sexy furry characters, including buxom zebras and scantily clad lionesses.

The compromised information, including email addresses, names and order histories, resurfaced on a popular hacking forum a few months later. HTH Studio has acknowledged the breach and says that it has been fixed. The company added:

Both our internal security and web team security assures us that no financial data was compromised. The security of our users is the highest priority.

It further recommended that all users change their passwords. So although credit card data is safe, users are still at risk from identity fraud, outing and blackmail.

It is the latest in a long series of hacks aimed at adult sites and demonstrates the dangers for UK porn viewers when they are forced to supply identity information to be able to browse the adult web.

 

 

Thai film comes to grief...

Thi Baan is pulled from cinema release as film censors demand cuts to pivotal scene


Link Here 21st November 2018
Thi Baan Series 2 Episode 2 is a 2018 Thai comedy drama by Surasak Pongson.

A scene of a monk grieving at his ex-girlfriend's funeral is allegedly what caused a Thai film to be censored just days before its Thursday release.

A group of Thai directors revealed that what the film censors called a sensitive scene in Thi Baan The Series 2.2 depicted a monk character bursting into tears in front of his ex-girlfriend's coffin.

Thailand's film censor board has demanded that the pivotal scene be cut and the film resubmitted to the board before it can be screened at cinemas.

In the meantime the release of Thi Baan The Series 2.2 has been indefinitely postponed.

 

 

Updated: Overlord...

Cut in Australia for an MA15+ rating


Link Here 20th November 2018
Overlord is a 2018 USA action war horror by Julius Avery.
Starring Wyatt Russell, Bokeem Woodbine and Iain De Caestecker. BBFC link IMDb

On the eve of D-Day, American paratroopers are dropped behind enemy lines to carry out a mission crucial to the invasion's success. But as they approach their target, they begin to realize there is more going on in this Nazi-occupied village than a simple military operation. They find themselves fighting against supernatural forces, part of a Nazi experiment.

Overlord has been cut in Australia for MA15+ cinema and home video release. MA15+ is something like a 15A in UK terms.

The film was originally rated R18+ uncut for: High impact violence; Strong impact themes; Moderate impact language. However, distributors Paramount preferred a lower rating and a month later submitted a cut version. This was duly rated MA15+ for: Strong impact themes, violence; Moderate impact language.

For comparison:

  • UK: Passed 18 uncut for strong bloody violence, gory images
  • US: Rated R uncut for strong bloody violence, disturbing images, language, and brief sexual content.

Update: Overrule

20th November 2018. From MediaCensorshipInAustralia Facebook Page

Paramount Australia has confirmed that after a change of heart, OVERLORD will be screened in the original R 18+ version in cinemas after all. Village Cinemas have changed their earlier MA 15+ rating for the coming soon listing to R 18+.

A comment on Paramount's Facebook page suggests there's a chance that the MA 15+ version was just an option being considered.

 

 

Sony breast firming...

Sony censors Dead or Alive Xtreme 3: Scarlet for PS4


Link Here20th November 2018
Koei Tecmo is the latest publisher to fall foul of Sony's new-found censorship. A product page for the newly announced Dead or Alive Xtreme 3: Scarlet reveals that two items will be removed from the PlayStation 4 version of the game, despite them being present in the Nintendo Switch edition.

Both the Gold Fan and Softening Gel have been cut from the PS4 release. The Gold Fan allows players to blow up characters' skirts, with the objective of turning them inside out to get a glimpse of their underwear. Meanwhile, the gel softens the recipient's breasts, making them more springy.

However, both of these items were included in the previous PS4 and PS Vita variants of the game, so it's clear that this is a new directive from the PlayStation maker.

The game is only available in Asian markets.

 

 

Low lifes on the moral highground...

Punjab Film Certification Board suspended for a corruption investigation


Link Here20th November 2018
Pakistan's Punjab government has suspended the Punjab Film Certification Board due to the government no longer being able to afford its members.

The board's chairman, vice chairman along with all its other members were removed in an attempt to reevaluate the budget of the censor board. After receiving several reports of financial corruption, the ministry has decided to conduct an audit of the board.

According to sources, former film actor and censor board chairperson Zeba's whopping monthly salary of Rs1.2 million was at the centre of the issue. Other board members were also receiving quite a hefty monthly sum, including the newly appointed chairperson Shoaib bin Aziz. An official questioned:

Only a few films are produced every year, so I would like to know what work this committee has been doing, other than receiving big salaries?

 

 

Mary Christmas...

Miserable preacher is wound up by poster featuring a mash up of Mary Poppins and Christmas


Link Here20th November 2018
An evangelical preacher has branded a billboard advertising Grimsby's Christmas lights switch on as religious blasphemy.

Paul Vivian ludicrously claimed the sign, featuring a cartoon drawing of popular character Mary Poppins, makes a very cruel mockery of Christmas and contravenes international human rights.

It wishes the public a Mary Christmas and invites them to the town's Supercalifragilistic Light Switch On on November 22.

Vivian, co-lead pastor at Vineyard Evangelical Church, is calling for an immediate apology and removal of the sign. He wrote a letter to Grimsby Live:

The Urgent Need to Respect Religious Festivals in Great Grimsby.

I feel I have to write to you, to express my absolute disgust and outrage at what I saw today. I looked with astonishment at what appears to be a very cruel and unfeeling mockery of Christianity's most popular celebration, namely Christmas, when Christians all over the world celebrate the birth of Jesus Christ.

I think I need to remind the people responsible for this outrageous 'religious blasphemy' of Article 20 of the International Covenant on Civil and Political Rights which obliges countries to adopt legislative measures against 'any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence'.

 

 

Offsite Article: How US Republicans Gave Up on Porn...


Link Here20th November 2018
Once, the fight against pornography was the beating heart of the American culture war. Now porn is a ballooning industry with no real opponents. What happened? By Tim Alberta

See article from politico.com

 

 

Disinformation...

The British establishment is still clutching at straws claiming it is fake news that has set the people against it, and nothing to do with it treating the people like shit


Link Here19th November 2018
Full story: Fake news in the UK...Government sets up fake news unit
The likes of Facebook and Twitter should fund the creation of a new UK internet censor to police fake news, censorship campaigners have claimed.

Sounding like a religious morality campaign, the LSE Commission on Truth, Trust and Technology, a group made up of MPs, academics and industry, also proposed that the Government should scrap plans to hand fresh powers to existing censors such as Ofcom and the Information Commissioner.

The campaigners argue for the creation of a new body to monitor the effectiveness of technology companies' self regulation. The body, which would be called the Independent Platform Agency, would provide a permanent forum for monitoring and censoring the behaviour of online sites and produce an annual review of the state of disinformation, the group said.

Damian Tambini, adviser to the LSE commission and associate professor in LSE's department of media and communications, claimed:

Parliament, led by the Government, must take action to ensure that we have the information and institutions we need to respond to the information crisis. If we fail to build transparency and trust through independent institutions we could see the creeping securitisation of our media system.

 

 

Thailand criticises US free speech...

Thailand objects to the TV drama Madam Secretary featuring its harsh lese majeste laws in a story line


Link Here19th November 2018
Madam Secretary: Ghosts is a TV drama by Rob Greenlea.
Starring Téa Leoni, Tim Daly and Keith Carradine. IMDb

While Elizabeth was preparing to announce her candidacy, Henry attended a conference organized by his ex-girlfriend in Thailand. She made a patriotic move by questioning the existence of the monarchy and was apprehended immediately. Henry was later put in jail because of his attempt to vouch for her. Dalton signed off on a covert operation to save both American citizens after they were sentenced to death for insulting the monarchy.

Thailand has hit out at the CBS show Madam Secretary on Sunday in response to an episode that referenced the country's monarchy, claiming it to be misleading.

Thailand has some of the world's harshest royal defamation laws and monitors royal criticism both in Thailand and abroad, with critics regularly receiving massive prison sentences of up to 15 years per count.

Madam Secretary came under fire after a character travels to Thailand and presumably criticises the monarchy at a conference before being hauled away by police.

Thailand's Ministry of Foreign Affairs said it had asked its embassy in Washington to convey concern and disappointment to CBS over the November 4 episode. The Thai authorities hold that the harsh sentences are justified by the high esteem in which many Thais hold the royalty. Ministry spokesperson Busadee Santipitaks claimed that:

The episode titled Ghosts presented the Kingdom of Thailand and the Thai monarchy in a misleading manner, leading to grave concern and dismay from many Thais who have seen it.

The episode in question did not take into account the sensitivity of the Thai people in this regard.

 

 

Miserable Bangladesh...

High Court orders the censorship of all internet porn websites for 6 months


Link Here19th November 2018
The Bangladesh High Court has ordered the country's government to block all pornography websites and publication of all obscene materials from the internet for the next six months.

The court also ordered the authorities concerned to explain in four weeks why pornography websites and publication of obscene materials should not be declared illegal.

The judges issued the orders in response to a writ petition filed by the Law and Life Foundation, which campaigns for internet censorship.

 

 

They have ways of making you not talk...

So how does China manage to delete Twitter posts it does not like?


Link Here19th November 2018
Full story: China International Censors...China pressures other countries into censorship

Despite being blocked in China, Twitter and other overseas social media sites have long been used freely by Chinese activists and government critics to speak about otherwise censored topics...until now.

China is now extending its reach to foreign sites outside of its borders. Chinese authorities have launched a stealth crackdown over the past year.

Chinese activists and other Twitter users say they have been pressured by police to delete sensitive tweets. In some cases, Chinese authorities are getting access to delete accounts themselves.

Last Friday, Cao reported that the Twitter account of Wu Gan, a Chinese activist sentenced last December to eight years in prison for subversion, had been suddenly deleted -- erasing more than 30,000 posts representing years of political critique and commentary. In another case, a user was taken in by police over tweets critical of the Communist Party; after being held at a police station overnight, he was made to hand over his login information and watch police delete the tweets.

 

 

Once Upon a Deadpool...

A festive version of Deadpool 2 was cut in the US for a PG-13 rating, but it doesn't make the cut for an equivalent BBFC 12A rating


Link Here17th November 2018
Once Upon a Deadpool (Deadpool 2 Re-edited Version) is a 2018 USA action comedy adventure by David Leitch.
Starring Ryan Reynolds, Josh Brolin and Morena Baccarin. BBFC link IMDb

Deadpool 2 was initially released as an MPAA R rated Theatrical Version and an unrated extended 'Super Duper $@%!#& Cut'. Both are rated 15 by the BBFC.

Later the film was cut for an MPAA PG-13 rated festive version titled Once Upon a Deadpool. This version was rated PG-13 for intense sequences of violence and action, crude sexual content, language, thematic elements and brief drug material.

Once Upon a Deadpool retained its 15 rating in the UK but the BBFC consumer advice demonstrates that it has been toned down. The original Theatrical Version was passed 15 uncut for strong bloody violence, sex references, very strong language, whilst Once Upon a Deadpool was passed 15 for strong violence, crude humour. So it seems that blood has been deleted from the violence, sex references have been removed, and strong language has been cut.

 

 

A well oiled publicity machine...

Political Iceland advert is not allowed on TV and inevitably cleans up on social media


Link Here17th November 2018
This year's Christmas advert from supermarket Iceland, made with partner Greenpeace, is a political campaigning advert about the ecological downsides of palm oil production.

The advert features a cartoon orangutan who has fled the destruction of the rainforest to hide in a little girl's bedroom. The little girl takes up the cause of protecting the habitat of orangutans, whilst Iceland says that it is removing palm oil from its own brand products.

Clearcast is a body funded by TV broadcasters that presents itself as expert on advert censorship, the advert censor ASA, and ASA's rules. It pre-vets all broadcast adverts and advises on compliance with ASA rules.

Clearcast originally advised that the Iceland advert was too political, as there are rules governing political adverts on TV. In particular:

An advertisement contravenes the prohibition on political advertising if it is: An advertisement which is inserted by or on behalf of a body whose objects are wholly or mainly of a political nature.

There was a bit of a to-do on social media, with many presumably thinking that the ban on political advertising should not apply to environmental campaigners. The advert ended up notching nearly 5 million views on YouTube and 15 million on Facebook, so Iceland will be well pleased.

 

 

Extracts Friends and Censors...

A Facebook Blueprint for Content Governance and Enforcement. By Mark Zuckerberg


Link Here16th November 2018
Full story: Facebook Censorship...Facebook quick to censor

Mark Zuckerberg has been publishing a series of articles addressing the most important issues facing Facebook. This is the second in the series. Here are a few selected extracts:

Community Standards

The team responsible for setting these policies is global -- based in more than 10 offices across six countries to reflect the different cultural norms of our community. Many of them have devoted their careers to issues like child safety, hate speech, and terrorism, including as human rights lawyers or criminal prosecutors.

Our policy process involves regularly getting input from outside experts and organizations to ensure we understand the different perspectives that exist on free expression and safety, as well as the impacts of our policies on different communities globally. Every few weeks, the team runs a meeting to discuss potential changes to our policies based on new research or data. For each change the team gets outside input -- and we've also invited academics and journalists to join this meeting to understand this process. Starting today, we will also publish minutes of these meetings to increase transparency and accountability.

The team responsible for enforcing these policies is made up of around 30,000 people, including content reviewers who speak almost every language widely used in the world. We have offices in many time zones to ensure we can respond to reports quickly. We invest heavily in training and support for every person and team. In total, they review more than two million pieces of content every day. We issue a transparency report with a more detailed breakdown of the content we take down.

For most of our history, the content review process has been very reactive and manual -- with people reporting content they have found problematic, and then our team reviewing that content. This approach has enabled us to remove a lot of harmful content, but it has major limits in that we can't remove harmful content before people see it, or that people do not report.

Accuracy is also an important issue. Our reviewers work hard to enforce our policies, but many of the judgements require nuance and exceptions. For example, our Community Standards prohibit most nudity, but we make an exception for imagery that is historically significant. We don't allow the sale of regulated goods like firearms, but it can be hard to distinguish those from images of paintball or toy guns. As you get into hate speech and bullying, linguistic nuances get even harder -- like understanding when someone is condemning a racial slur as opposed to using it to attack others. On top of these issues, while computers are consistent at highly repetitive tasks, people are not always as consistent in their judgements.

The vast majority of mistakes we make are due to errors enforcing the nuances of our policies rather than disagreements about what those policies should actually be. Today, depending on the type of content, our review teams make the wrong call in more than 1 out of every 10 cases.

Proactively Identifying Harmful Content

The single most important improvement in enforcing our policies is using artificial intelligence to proactively report potentially problematic content to our team of reviewers, and in some cases to take action on the content automatically as well.

This approach helps us identify and remove a much larger percent of the harmful content -- and we can often remove it faster, before anyone even sees it rather than waiting until it has been reported.

Moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence -- and because of the multi-billion dollar annual investments we can now fund. To be clear, the state of the art in AI is still not sufficient to handle these challenges on its own. So we use computers for what they're good at -- making basic judgements on large amounts of content quickly -- and we rely on people for making more complex and nuanced judgements that require deeper expertise.

In training our AI systems, we've generally prioritized proactively detecting content related to the most real world harm. For example, we prioritized removing terrorist content -- and now 99% of the terrorist content we remove is flagged by our systems before anyone on our services reports it to us. We currently have a team of more than 200 people working on counter-terrorism specifically.

Some categories of harmful content are easier for AI to identify, and in others it takes more time to train our systems. For example, visual problems, like identifying nudity, are often easier than nuanced linguistic challenges, like hate speech. Our systems already proactively identify 96% of the nudity we take down, up from just close to zero a few years ago. We are also making progress on hate speech, now with 52% identified proactively. This work will require further advances in technology as well as hiring more language experts to get to the levels we need.

In the past year, we have prioritized identifying people and content related to spreading hate in countries with crises like Myanmar. We were too slow to get started here, but in the third quarter of 2018, we proactively identified about 63% of the hate speech we removed in Myanmar, up from just 13% in the last quarter of 2017. This is the result of investments we've made in both technology and people. By the end of this year, we will have at least 100 Burmese language experts reviewing content.
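
To illustrate the split Zuckerberg describes -- machines making the quick, high-volume calls and humans handling the nuanced ones -- here is a minimal, purely hypothetical triage sketch in Python. The classifier stub, thresholds and routing labels are assumptions made for the example, not Facebook's actual systems.

    # Hypothetical sketch of machine-first, human-second content triage.
    # The classifier stub, thresholds and routing labels are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        text: str

    def classifier_score(post: Post) -> float:
        """Stand-in for a trained model returning an estimated probability of a policy violation."""
        return 0.0  # a real system would run an ML model here

    def triage(post: Post, auto_action_at: float = 0.98, review_at: float = 0.7) -> str:
        """Route a post by classifier confidence: high-confidence violations are
        actioned automatically, borderline scores go to human reviewers,
        and everything else is left alone."""
        score = classifier_score(post)
        if score >= auto_action_at:
            return "remove_automatically"
        if score >= review_at:
            return "queue_for_human_review"
        return "no_action"

    print(triage(Post("1", "example upload")))  # -> "no_action" with the stub score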

Discouraging Borderline Content

One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.

Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average -- even when they tell us afterwards they don't like the content.

This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.

The category we're most focused on is click-bait and misinformation. People consistently tell us these types of content make our services worse -- even though they engage with them. As I mentioned above, the most effective way to stop the spread of misinformation is to remove the fake accounts that generate it. The next most effective strategy is reducing its distribution and virality.

Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don't come within our definition of hate speech but are still offensive.

This pattern may apply to the groups people join and pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.

One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it's important to remember that won't address the underlying incentive problem, which is often the bigger issue. This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content.
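
Zuckerberg's "distribution curve" amounts to a monotone penalty on reach as content nears the policy line. A minimal sketch follows, assuming a hypothetical borderline score between 0 and 1 and an arbitrary exponent; it illustrates the idea only and is not the formula Facebook actually uses.

    # Illustrative ranking penalty for borderline content: distribution falls
    # as a post's estimated "closeness to the policy line" rises.
    def penalised_distribution(base_engagement: float, borderline_score: float, strength: float = 4.0) -> float:
        """Scale down predicted distribution as content nears the policy line.
        borderline_score is assumed to be a model estimate between 0 and 1;
        strength controls how sharply reach drops near the line."""
        if not 0.0 <= borderline_score <= 1.0:
            raise ValueError("borderline_score must be between 0 and 1")
        return base_engagement * (1.0 - borderline_score) ** strength

    # Two posts with equal raw engagement end up with very different reach.
    print(penalised_distribution(1000.0, 0.1))  # ~656 -- clearly fine content keeps most of its reach
    print(penalised_distribution(1000.0, 0.9))  # 0.1 -- content just inside the line gets almost none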

Building an Appeals Process

Any system that operates at scale will make errors, so how we handle those errors is important. This matters both for ensuring we're not mistakenly stifling people's voices or failing to keep people safe, and also for building a sense of legitimacy in the way we handle enforcement and community governance.

We began rolling out our content appeals process this year. We started by allowing you to appeal decisions that resulted in your content being taken down. Next we're working to expand this so you can appeal any decision on a report you filed as well. We're also working to provide more transparency into how policies were either violated or not.

...Read the full article from facebook.com

 

 

ID'ed as censors...

DCMS minister Margot James informs parliamentary committee of government thoughts on online digital ID


Link Here16th November 2018
Digital ID was discussed by the Commons Science and Technology Committee on 13th November 2018.

Carol Monaghan Committee Member:  At the moment, platforms such as Facebook require age verification, but that simply means entering a date of birth, and children can change that. If you are planning to extend that, or look at how it might apply to other social media, how confident are you that the age verification processes would be robust enough to cope?

Margot James MP, Minister for Digital and the Creative Industries: At the moment, I do not think that we would be, but age verification tools and techniques are developing at pace, and we keep abreast of developments. At the moment, we think we have a robust means by which to verify people's age at 18; the challenge is to develop tools that can verify people's age at a younger age, such as 13. Those techniques are not robust enough yet, but a lot of technological research is going on, and I am reasonably confident that, over the next few years, there will be robust means by which to identify age at younger than 18.

Stephen Metcalfe Committee Member: My question is on the same point about how we can create a verification system that you cannot just get around by putting in a fake date of birth. I assume that the verification for 18-plus is based around some sort of credit card, or some sort of bank card. The issue there is that, potentially, someone could borrow another person's card, because it does not require secret information--it requires just the entering of the 16-digit number, or something. But on the younger ages, given that we are talking about digital life and digital literacy, do you think that the time has come to talk about having a digital verified ID that young people get and which you cannot fiddle with--a bit like an online ID card, or digital passport? I know that that idea has been around a little while.

Margot James: It has. I do think that the time has come when that is required, but there are considerable hoops to go through before we can arrive at a system of digital identity, including someone's age, that is acknowledged, respected and entered into by the vast majority of people. As you probably know, the Government have committed in prior years to the Verify system, which we think has got as far as it can go, which is not far enough. We have a team of excellent policy officials in the DCMS looking afresh at other techniques of digital identity. It is a live issue and there have been many attempts at it; there is frustration, and not everybody would agree with what I have said. But you asked my view, and that is it--and the Department is focusing a lot of energy on that area of research.

Chair: Can you imagine that your legislation, when it comes, could include the concept, to which Stephen referred, of a digital identity for children?

Margot James: That is a long way off--or it is not next year, and probably not the year after, given how much consultation it would require. The new work has only just started, so it is not a short-term solution, and I do not expect to see it as part of our White Paper that we publish this winter. That does not mean to say that we do not think that it is important; we are working towards getting a system that we think could have public support.

To go slightly beyond the terms of your inquiry, with regard to the potential for delivering a proper digital relationship between citizen and Government through delivery of public services, a digital identity system will be important. We feel that public service delivery has a huge amount to gain from the digital solution.

Bill Grant Committee Member: I am pleased to note that the Government are addressing issues that have been with us for nearly a decade--the dark side of social media and the risk to children, not least the risk that we all experience as parliamentarians. Can you offer any reason why it has taken so long for Government to begin that process? Would you be minded to accelerate the process to address the belated start?

Margot James: One reason is that progress has been made by working with technology companies. The Home Office has had considerable success in working with technology companies to eradicate terrorist content online. To a lesser but still significant extent, progress has also been made on a voluntary basis with the reduction in child abuse images and child sexual exploitation. I said "significant", but this is a Home Office area--I am working closely with the Home Office, because the White Paper is being developed in concert with it--and it is clear that it does not feel that anything like enough is being done through voluntary measures.

Chair: Do you feel that?

Margot James: Yes, I do. A lot of the highly dangerous material has gone under the radar in the dark web, but too much material is still available, apparently, on various platforms, and it takes them too long to remove it.

Chair: Ultimately, the voluntary approach is not working adequately.

Margot James: Exactly--that is our view now. I was trying to address the hon. Member's question about why it had taken a long time. Partly it is that technology changes very fast, but, partly, it is because voluntary engagement was delivering, but it has impressed itself on us in the last 12 months that it is not delivering fast enough or adequately. We have not even talked about the vast range of other harms, some of which are illegal and some legal but harmful, and some in the grey area in between, where decidedly inadequate progress has been made as a result of the many instances of voluntary engagement, not just between the Government and the technology sector but between charitable organisations and non-governmental organisations, including the police.

Bill Grant: It was envisaged earlier that there would be some sort of regulator or ombudsman, but, over and above that, Martha Lane Fox's think-tank proposed the establishment of an office for responsible technology, which would be overarching, in whatever form the regulation comes. Would you be minded to take that on board?

Margot James: That is one proposal that we will certainly look at, yes. Martha Lane Fox does a lot of very good work in this area, has many years' experience of it, and runs a very good organisation in the "tech for good" environment, so her proposals are well worth consideration. That is one reason why I was unable to give a specific answer earlier, because there are good ideas, and they all need proper evaluation. When the White Paper is published, we will engage with you and any other interested party, and invite other organisations to contribute to our thinking, prior to the final legislation being put before Parliament and firming up the non-legislative measures, which are crucial. We all know that legislation does not solve every ill, and it is crucial that we continue the very good work being done by many internet companies to improve the overall environment.

 

 

This censor is not yet rated...

MPAA appoints a new chair of its ratings board, CARA


Link Here16th November 2018
The MPAA has named Kelly McMahon to succeed Joan Graves as chair of the Classification and Rating Administration, CARA.

Graves, a 77-year-old grandmother of two, retires next year after 30 years with the organization.

McMahon, joined the MPAA 11 years ago and currently serves as VP and corporate counsel. She is the legal counsel to CARA, providing guidance about compliance with the CARA rules and the advertising review process. She also oversees the CARA Appeals Board process.

CARA was created by former MPAA president and CEO Jack Valenti 50 years ago this month. This voluntary program provided an alternative to government censorship of movies and was designed first and foremost to be a resource for parents, while simultaneously protecting the First Amendment, the rights of filmmakers, and the creative process.

 

 

Comment: BBC pro-religion bias...

National Secular Society responds to a consultation on the BBC's guidelines establishing new restrictions on the criticism of religion


Link Here16th November 2018

The National Secular Society has urged the BBC to treat free expression as a positive value as it raised concerns that new guidelines defer excessively to religious sensitivities.

In response to a consultation on the draft guidelines the NSS warned that the corporation risked curtailing free speech by placing an excessive focus on avoiding offence when handling religion.

The NSS said the BBC should defend and uphold the principle of free expression. The society warned that the BBC's current position risked exacerbating a climate of self-censorship and acquiescing to de facto blasphemy codes.

The NSS said in places the guidelines gave religions protections which were otherwise only afforded to people. The society also questioned a section which appeared to place a particular premium on depictions of the Islamic prophet Muhammad.

Much of the NSS's criticism focused on the excessive deference given to religious sensitivities. In a statement of the BBC's values, the guidance says: In exercising freedom of expression, we must offer appropriate protection to vulnerable groups and avoid causing unnecessary offence.

The guidance also says the BBC should take care to avoid unjustified offence because religious beliefs are central to many people's lives and arouse strong views and emotions. It says this despite suggesting there is no longer an offence of blasphemy or blasphemous libel in any part of the UK.

The NSS said these lines risked acquiescing to de facto blasphemy codes and placed an unjustified focus on the feelings of the religious.

The society suggested a replacement section which would say the BBC should take care not to create a de facto blasphemy law. It also pointed out that the BBC's statement on blasphemy is factually incorrect, as Scotland and Northern Ireland both have blasphemy laws.

Elsewhere the NSS said the guidelines risked creating a double standard concerning treatment of religion, with critics of religion facing additional and unjustified burdens and restrictions.

The BBC's guidance says content dealing with religion which is likely to cause offence to those with religious views and beliefs must be referred to a senior editorial figure.

It also says producers of religious programmes and related content must ensure religious views and beliefs are not subject to abusive treatment, adding contributors should not be allowed to denigrate the beliefs of others.

The NSS said robust debate and exchanges of views should not be beyond the bounds of what is reasonable, provided such exchanges are measured and not abusive or insulting.

The NSS welcomed the fact that the guidance no longer contains a specific prohibition on depictions of the Islamic prophet Muhammad but questioned the inclusion of a section dedicated specifically to that subject.

The guidance says the BBC must have strong editorial justification for publishing any depiction of the Prophet Muhammad. It adds that any proposal to do so must be referred to a senior editorial figure, who should normally consult Editorial Policy. It says many Muslims regard any depiction of Muhammad as highly offensive.

The NSS described this as an improvement on previous guidance which forbade any depiction of Muhammad. But it added that the section suggested a particular taboo which added to a climate of censorship brought on by the unreasonable and reactionary views of some religious extremists.

 

 

Offsite Article: Tragic Event...


Link Here16th November 2018
Video Describes the Gruesome Scenes Deleted from Event Horizon

See article from bloody-disgusting.com

 

 

Offsite Article: PayPal's corporate censorship...


Link Here16th November 2018
Full story: Paypal Censors...Paypal unilaterally decide to act as media censors
Why are left-wingers demanding that Silicon Valley police political opinions? By Fraser Myers

See article from spiked-online.com

 

 

Offsite Article: Partly free...


Link Here16th November 2018
Detailed report on Internet censorship laws in South Korea

See article from lawless.tech

 

 

UK adult businesses to be crucified from Easter 2019...

DCMS minister Margot James informs parliamentary committee of the schedule for the age verification internet porn censorship regime


Link Here 15th November 2018
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Age Verification and adult internet censorship was discussed by the Commons Science and Technology Committee on 13th November 2018.

Carol Monaghan Committee Member: The Digital Economy Act made it compulsory for commercial pornography sites to undertake age verification, but implementation has been subject to ongoing delays. When do we expect it to go live?

Margot James MP, Minister for Digital and the Creative Industries: We can expect it to be in force by Easter next year. I make that timetable in the knowledge that we have laid the necessary secondary legislation before Parliament. I am hopeful of getting a slot to debate it before Christmas, before the end of the year. We have always said that we will permit the industry three months to get up to speed with the practicalities of delivering the age verification that it will be required to deliver by law. We have also had to set up the regulator--well, not to set it up, but to establish with the British Board of Film Classification, which has been the regulator, exactly how it will work. It has had to consult on the methods of age verification, so it has taken longer than I would have liked, but I would balance that with a confidence that we have got it right.

Carol Monaghan: Are you confident that the commercial pornography companies are going to engage fully and will implement the law as you hope?

Margot James: I am certainly confident that the majority of large commercial pornography websites and platforms will be compliant with the law. They have engaged well with the BBFC and the Department, and want to be on the right side of the law. I have confidence, but I am wary of being 100% confident, because there are always smaller and more underground platforms and sites that will seek ways around the law. At least, that is usually the case. We will be on the lookout for that, and so will the BBFC. But the vast majority of organisations have indicated that they are keen to comply with the legislation.

Carol Monaghan: One concern that we all have is that children can stumble across pornography. We know that on social media platforms, where children are often active, up to a third of their content can be pornographic, but they fall outside the age verification regulation because it is only a third and not the majority. Is that likely to undermine the law? Ultimately the law, as it stands, is there to safeguard our children.

Margot James: I acknowledge that that is a weakness in the legislative solution. I do not think that for many mainstream social media platforms as much as a third of their content is pornographic, but it is well known that certain social media platforms that many people use regularly have pornography freely available. We have decided to start with the commercial operations while we bring in the age verification techniques that have not been widely used to date. But we will keep a watching brief on how effective those age verification procedures turn out to be with commercial providers and will keep a close eye on how social media platforms develop in terms of the extent of pornographic material, particularly if they are platforms that appeal to children--not all are. You point to a legitimate weakness, on which we have a close eye.

 

 

Suffocating European livelihoods at the behest of big business...

Julia Reda outlines amendments to censorship machines and link tax as the upcoming internet censorship law gets discussed by the real bosses of the EU


Link Here 15th November 2018
Full story: Copyright in the EU...Copyright law for Europe

The closed-door trilogue efforts to finalise the EU Copyright Directive continue. The Presidency of the Council, currently held by Austria, has now circulated among the EU member state governments a new proposal for a compromise between the differing drafts currently on the table for the controversial Articles 11 and 13.

Under this latest proposal, both upload filters and the link tax would be here to stay -- with some changes for the better, and others for the worse.

Upload filters / Censorship machines

Let's recall: In its final position, the European Parliament had tried its utmost to avoid specifically mentioning upload filters, in order to avoid the massive public criticism of that measure. The text they ended up with, however, was even worse: It would make online platforms inescapably liable for any and all copyright infringement by their users, no matter what action they take. Not even the strictest upload filter in the world could possibly hope to catch 100% of unlicensed content.

This is what prompted YouTube's latest lobbying efforts in favour of upload filters and against the EP's proposal of inescapable liability. Many have mistaken this as lobbying against Article 13 as a whole -- it is not. In Monday's Financial Times, YouTube spelled out that they would be quite happy with a law that forces everyone else to build (or, presumably, license from them) what they already have in place: Upload filters like Content ID.

In this latest draft, the Council Presidency sides with YouTube, going back to rather explicitly prescribing upload filters. The Council proposes two alternative options on how to phrase that requirement, but they match in effect:

Platforms are liable for all copyright infringements committed by their users, EXCEPT if they

  • cooperate with rightholders

  • by implementing effective and proportionate steps to prevent works they've been informed about from ever going online; determining which steps those are must take into account suitable and effective technologies

Under this text, wherever upload filters are possible, they must be implemented: all your uploads will require prior approval by error-prone copyright bots (a simplified sketch of such a filter follows below).
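
To make concrete what "preventing notified works from ever going online" implies in practice, here is a deliberately naive fingerprint-matching sketch. It is an assumption-laden illustration, not how Content ID or any real filter is built: production systems use perceptual fingerprints rather than exact hashes, which is precisely why they both overblock and underblock.

    # Naive sketch of an upload filter: block any upload whose fingerprint
    # matches a database of works that rightholders have notified.
    import hashlib

    def fingerprint(data: bytes) -> str:
        """Exact-match stand-in for a perceptual fingerprint."""
        return hashlib.sha256(data).hexdigest()

    class UploadFilter:
        def __init__(self):
            self.notified_works = set()  # fingerprints supplied by rightholders

        def register_notified_work(self, data: bytes) -> None:
            """A rightholder supplies a reference copy of a protected work."""
            self.notified_works.add(fingerprint(data))

        def allow_upload(self, data: bytes) -> bool:
            """Uploads matching a notified work are blocked before going live."""
            return fingerprint(data) not in self.notified_works

    filt = UploadFilter()
    filt.register_notified_work(b"official trailer bytes")
    print(filt.allow_upload(b"official trailer bytes"))  # False -- blocked, even for a fair dealing use
    print(filt.allow_upload(b"original home video"))     # True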

On the good side, the Council Presidency seems open to adopting the Parliament's exception for platforms run by small and micro businesses. It also takes on board the EP's better-worded exception for open source code sharing platforms like GitHub.

On the bad side, Council rejects Parliament's efforts for a stronger complaint mechanism requiring reviews by humans and an independent conflict resolution body. Instead it takes on board the EP's insistence that licenses taken out by a platform don't even have to necessarily cover uses of these works by the users of that platform. So, for example, even if YouTube takes out a license to show a movie trailer, that license could still prevent you as an individual YouTuber from using that trailer in your own uploads.

Article 11: Link tax

On the link tax, the Council is mostly sticking to its position: It wants the requirement to license even short snippets of news articles to last for one year after an article's publication, rather than five, as the Parliament proposed.

In a positive development, the Council Presidency adopts the EP's clarification that at least the facts included in news articles as such should not be protected. So a journalist would be allowed to report on what they read in another news article, in their own words.

Council fails to clearly exclude hyperlinks -- even those that aren't accompanied by snippets from the article. It's not uncommon for the URLs of news articles themselves to include the article's headline. While the Council wants to exclude insubstantial parts of articles from requiring a license, it's not certain that headlines count as insubstantial. (The Council's clause allowing acts of hyperlinking when they do not constitute communication to the public would not apply to such cases, since reproducing the headline would in fact constitute such a communication to the public.)

The Council continues to want the right to only apply to EU-based news sources -- which could in effect mean fewer links and listings in search engines, social networks and aggregators for European sites, putting them at a global disadvantage.

However, it also proposes spelling out that news sites may give out free licenses if they so choose -- contrary to the Parliament, which stated that listing an article in a search engine should not be considered sufficient payment for reproducing snippets from it.

 

 

Er...it's easy, just claim it transgresses 'community guidelines'...

Facebook will train up French censors in the art of taking down content deemed harmful


Link Here 15th November 2018
Full story: Facebook Censorship...Facebook quick to censor

The French President, Emmanuel Macron, has announced a plan to effectively embed French state censors with Facebook to learn more about how to better censor the platform. He announced a six-month partnership with Facebook aimed at figuring out how the European country should police hate speech on the social network.

As part of the cooperation both sides plan to meet regularly between now and May, when the European election is due to be held. They will focus on how the French government and Facebook can work together to censor content deemed 'harmful'. Facebook explained:

It's a pilot program of a more structured engagement with the French government so that both sides can better understand the other's challenges in dealing with the issue of hate speech online. The program will allow a team of regulators, chosen by the Elysee, to familiarize [itself] with the tools and processes set up by Facebook to fight against hate speech. The working group will not be based in one location but will travel to different Facebook facilities around the world, with likely visits to Dublin and California. The purpose of this program is to enable regulators to better understand Facebook's tools and policies to combat hate speech and, for Facebook, to better understand the needs of regulators.

 

 

Fireworks in the House...

The Lords discuss when age verification internet censorship will start


Link Here13th November 2018
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust

Pornographic Websites: Age Verification - Question

House of Lords on 5th November 2018 .

Baroness Benjamin Liberal Democrat

To ask Her Majesty's Government what will be the commencement date for their plans to ensure that age-verification to prevent children accessing pornographic websites is implemented by the British Board of Film Classification.

Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport

My Lords, we are now in the final stages of the process, and we have laid the BBFC's draft guidance and the Online Pornography (Commercial Basis) Regulations before Parliament for approval. We will ensure that there is a sufficient period following parliamentary approval for the public and the industry to prepare for age verification. Once parliamentary proceedings have concluded, we will set a date by which commercial pornography websites will need to be compliant, following an implementation window. We expect that this date will be early in the new year.

Baroness Benjamin

I thank the Minister for his Answer. I cannot wait for that date to happen, but does he share my disgust and horror that social media companies such as Twitter state that their minimum age for membership is 13 yet make no attempt to restrict some of the most gross forms of pornography being exchanged via their platforms? Unfortunately, the Digital Economy Act does not affect these companies because they are not predominantly commercial porn publishers. Does he agree that the BBFC needs to develop mechanisms to evaluate the effectiveness of the legislation for restricting children's access to pornography via social media sites and put a stop to this unacceptable behaviour?

Lord Ashton of Hyde

My Lords, I agree that there are areas of concern on social media sites. As the noble Baroness rightly says, they are not covered by the Digital Economy Act. We had many hours of discussion about that in this House. However, she will be aware that we are producing an online harms White Paper in the winter in which some of these issues will be considered. If necessary, legislation will be brought forward to address these, and not only these but other harms too. I agree that the BBFC should find out about the effectiveness of the limited amount that age verification can do; it will commission research on that. Also, the Digital Economy Act itself made sure that the Secretary of State must review its effectiveness within 12 to 18 months.

Lord Griffiths of Burry Port Opposition Whip (Lords), Shadow Spokesperson (Digital, Culture, Media and Sport), Shadow Spokesperson (Wales)

My Lords, once again I find this issue raising a dynamic that we became familiar with in the only too recent past. The Government are to be congratulated on getting the Act on to the statute book and, indeed, on taking measures to identify a regulator as well as to indicate that secondary legislation will be brought forward to implement a number of the provisions of the Act. My worry is that, under one section of the Digital Economy Act, financial penalties can be imposed on those who infringe this need; the Government seem to have decided not to bring that provision into force at this time. I believe I can anticipate the Minister's answer but--in view of the little drama we had last week over fixed-odds betting machines--we would not want the Government, having won our applause in this way, to slip back into putting things off or modifying things away from the position that we had all agreed we wanted.

Lord Ashton of Hyde

My Lords, I completely understand where the noble Lord is coming from but what he said is not quite right. The Digital Economy Act included a power that the Government could bring enforcement with financial penalties through a regulator. However, they decided--and this House decided--not to use that for the time being. For the moment, the regulator will act in a different way. But later on, if necessary, the Secretary of State could exercise that power. On timing and FOBTs, we thought carefully--as noble Lords can imagine--before we said that we expect the date will be early in the new year.

Lord Addington Liberal Democrat

My Lords, does the Minister agree that good health and sex education might be a way to counter some of the damaging effects? Can the Government make sure that is in place as soon as possible, so that this strange fantasy world is made slightly more real?

Lord Ashton of Hyde

The noble Lord is of course right that age verification itself is not the only answer. It does not cover every possibility of getting on to a pornography site. However, it is the first attempt of its kind in the world, which is why not only we but many other countries are looking at it. I agree that sex education in schools is very important and I believe it is being brought into the national curriculum already.

The Earl of Erroll Crossbench

Why is there so much wriggle room in section 6 of the guidance from the DCMS to the AV regulator? The ISP blocking probably will not work, because everyone will just get out of it. If we bring this into disrepute then the good guys, who would like to comply, probably will not; they will not be able to do so economically. All that was covered in British Standard PAS 1296, which was developed over three years. It seems to have been totally ignored by the DCMS. You have spent an awful lot of time getting there, but you have not got there.

Lord Ashton of Hyde

One of the reasons this has taken so long is that it is complicated. We in the DCMS, and many others, not least in this House, have spent a long time discussing the best way of achieving this. I am not immediately familiar with exactly what section 6 says, but when the statutory instrument comes before this House--it is an affirmative one to be discussed--I will have the answer ready for the noble Earl.

Lord West of Spithead Labour

My Lords, does the Minister not agree that the possession of a biometric card by the population would make the implementation of things such as this very much easier?

Lord Ashton of Hyde

In some ways it would, but there are problems with people who either do not want to or cannot have biometric cards.

 

 

Offsite Article: The Potential Unintended Consequences of Article 13...


Link Here13th November 2018
Full story: Copyright in the EU...Copyright law for Europe
Susan Wojcicki, CEO of YouTube explains how the EU's copyright rewrite will destroy the livelihood of a huge number of Europeans

See article from youtube-creators.googleblog.com

 

 

Potent regulation...

HappyDown cocktails censured for lack of clarity about alcoholic content


Link Here11th November 2018
Full story: UK Drinks Censor...Portman Group play PC censor for drinks

A complaint about HappyDown sparkling cocktails has been upheld by the Independent Complaints Panel for failing to clearly communicate their alcoholic content.

The complainant, a member of the public, believed that the cartoon imagery used on the cans could appeal to children. The Panel did not believe that it did appeal to children but did raise concerns that the cues describing it as alcoholic were not immediately obvious. The Panel concluded that the alcoholic nature of the drink was not clearly communicated and accordingly found the product in breach of Code rule 3.1.

HappyDown's producer, Tipple Brands Limited, will work with the Advisory Service to address the issues raised.

John Timothy, Secretary to the Independent Complaints Panel, commented, Alcoholic content needs to be conveyed clearly. Producers need to ask themselves if there is any other messaging or design on their product which could undermine this clarity.

 

 

Avoiding tears...

Bible Society is miffed that its Remembrance Day advert is banned by cinemas, unsurprisingly preferring to avoid the violence, threat and intimidation associated with religion


Link Here10th November 2018
Cinemas have rejected a Bible Society advert speaking of the comfort some first world war soldiers found in the Bible. The three-minute film, titled Wipe Every Tear , explains that all British soldiers were given a Bible as part of their kit and that this was a source of hope to many.

Empire Cinemas explained that they do not take adverts from any religious groups.

The three-minute film opens with footage of soldiers in trenches. A caption explains All British soldiers were given a Bible as part of their kit. Captions continue: To many it was a source of hope. For eternal peace. The film then moves to clips of contemporary people, often in their workplace, reciting Revelation 21: 1-7. These include a farmer, a fisherman, a hairdresser, a soldier, and a chef. The concluding captions state: The Bible. Still giving peace and hope today.

The film was intended to be shown on 125 screens at 14 venues across the country in the run-up to the armistice centenary this weekend. The Bible Society is reported to have reached agreement with cinema advertising company Pearl and Dean for the distribution of the film. Pearl and Dean later emailed to say that Empire Cinemas had vetoed the film because they do not accept religious or political advertisements.

 

 

Offsite Article: Promises! Promises!...


Link Here10th November 2018
The History Of Nudity In R-Rated Films. By Dirk Libbey

See article from cinemablend.com

 

 

Offsite Article: Reverse motion...


Link Here10th November 2018
As 6 major Hollywood studios become 5, a little speculation how the MPAA will cope with its shrinking budget

See article from torrentfreak.com

 

 

BBFC: Age verification we don't trust...

Analysis of BBFC's Post-Consultation Guidance by the Open Rights Group


Link Here8th November 2018
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Following the conclusion of their consultation period, the BBFC have issued new age verification guidance that has been laid before Parliament.

Summary

The new code has some important improvements, notably the introduction of a voluntary scheme for privacy, close to or based on a GDPR Code of Conduct. This is a good idea, but should not be put in place as a voluntary arrangement. Companies may not want the attention of a regulator, or may simply wish to apply lower or different standards, and ignore it. It is unclear why, if the government now recognises that privacy protections like this are needed, the government would also leave the requirements as voluntary.

We are also concerned that the voluntary scheme may not be up and running before the AV requirement is put in place. Given that 25 million UK adults are expected to sign up to these products within a few months of its launch, this would be very unhelpful.

Parliament should now:

  • Ask the government why the privacy scheme is to be voluntary, if the risks of relying on general data protection law are now recognised;
  • Ask for assurance from BBFC that the voluntary scheme will cover all of the major operators; and
  • Ask for assurance from BBFC and DCMS that the voluntary privacy scheme will be up and running before obliging operators to put Age Verification measures in place.

The draft code can be found here.

Lack of Enforceability of Guidance

The Digital Economy Act does not allow the BBFC to judge age verification tools by any standard other than whether or not they sufficiently verify age. We asked that the BBFC persuade the DCMS that statutory requirements for privacy and security were required for age verification tools.

The BBFC have clearly acknowledged privacy and security concerns with age verification in their response. However, the BBFC indicate in their response that they have been working with the ICO and DCMS to create a voluntary certification scheme for age verification providers:

"This voluntary certification scheme will mean that age-verification providers may choose to be independently audited by a third party and then certified by the Age-verification Regulator. The third party's audit will include an assessment of an age-verification solution's compliance with strict privacy and data security requirements."

The lack of a requirement for additional and specific privacy regulation in the Digital Economy Act is the cause for this voluntary approach.

While the voluntary scheme described above is likely to be of some assistance in promoting better standards among age verification providers, the "strict privacy and data security requirements" which the voluntary scheme mentions are not a statutory requirement, leaving some consumers at greater risk than others.

Sensitive Personal Data

The data handled by age verification systems is sensitive personal data. Age verification services must directly identify users in order to accurately verify age. Users will be viewing pornographic content, and the data about what specific content a user views is highly personal and sensitive. This has potentially disastrous consequences for individuals and families if the data is lost, leaked, or stolen.

Following a hack affecting Ashley Madison -- a dating website for extramarital affairs -- a number of the site's users were driven to suicide as a result of the public exposure of their sexual activities and interests.

For the purposes of GDPR, data handled by age verification systems falls under the criteria for sensitive personal data, as it amounts to "data concerning a natural person's sex life or sexual orientation".

Scheduling Concerns

It is of critical importance that any accreditation scheme for age verification providers, or GDPR code of conduct if one is established, is in place and functional before enforcement of the age verification provisions in the Digital Economy Act commences. All of the major providers who are expected to dominate the age verification market should undergo their audit under the scheme before consumers are expected to use their tools. This is especially true given that MindGeek have indicated their expectation that 20-25 million UK adults will sign up to their tool within the first few months of operation. A voluntary accreditation scheme that begins enforcement after all these people have already signed up would be unhelpful.

Consumers should be empowered to make informed decisions about the age verification tools that they choose from the very first day of enforcement. No delays are acceptable if users are expected to rely upon the scheme to inform themselves about the safety of their data. If this cannot be achieved prior to the start of expected enforcement of the DE Act's provisions, then the planned date for enforcement should be moved back to allow for the accreditation to be completed.

Issues with Lack of Consumer Choice

It is of vital importance that consumers, if they must verify their age, are given a choice of age verification providers when visiting a site. This enables users to choose which provider they trust with their highly sensitive age verification data and prevents one actor from dominating the market and thereby promoting detrimental practices with data. The BBFC also acknowledge the importance of this in their guidance, noting in 3.8:

"Although not a requirement under section 14(1) the BBFC recommends that online commercial pornography services offer a choice of age-verification methods for the end-user".

This does not go far enough to acknowledge the potential issues that may arise in a fragmented market where pornographic sites are free to offer only a single tool if they desire.

Without a statutory requirement for sites to offer all appropriate and available tools for age verification and log in purposes, it is likely that a market will be established in which one or two tools dominate. Smaller sites will then be forced to adopt these dominant tools as well, to avoid friction with consumers who would otherwise be required to sign up to a new provider.

This kind of market for age verification tools will provide little room for a smaller provider with a greater commitment to privacy or security to survive and robs users of the ability to choose who they trust with their data.

We have already called for it to be made a statutory requirement that pornographic sites must offer a choice of providers to consumers who must age verify; however, this suggestion has not been taken up.

We note that the BBFC has been working with the ICO and DCMS to produce a voluntary code of conduct. A potential alternative solution would be to ensure that a site is only considered compliant if it offers users a number of tools which have been accredited under the additional privacy and security requirements of the voluntary scheme.

GDPR Codes of Conduct

A GDPR "Code of Conduct" is a mechanism for providing guidelines to organisations who process data in particular ways, and allows them to demonstrate compliance with the requirements of the GDPR.

A code of conduct is voluntary, but compliance is continually monitored by an appropriate body who are accredited by a supervisory authority. In this case, the "accredited body" would likely be the BBFC, and the "supervisory authority" would be the ICO. The code of conduct allows for certifications, seals and marks which indicate clearly to consumers that a service or product complies with the code.

Codes of conduct are expected to provide more specific guidance on exactly how data may be processed or stored. In the case of age verification data, the code could contain stipulations on the following (a brief illustrative sketch follows the list):

  • Appropriate pseudonymisation of stored data;
  • Data and metadata retention periods;
  • Data minimisation recommendations;
  • Appropriate security measures for data storage;
  • Security breach notification procedures;
  • Re-use of data for other purposes.
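None of these stipulations is spelled out in technical terms in the draft guidance. Purely as an illustration of how such rules could be made concrete and auditable, here is a minimal sketch in Python of pseudonymising stored verification records and enforcing a retention period. The record format, the 28-day retention figure and the provider itself are invented for the example and do not come from the BBFC or the ICO.

    # Illustrative only: a hypothetical age verification record store showing
    # pseudonymisation (keyed hashing of the identity document reference) and a
    # fixed retention period -- two of the stipulations a code of conduct might set.
    import hashlib
    import hmac
    import secrets
    from datetime import datetime, timedelta, timezone

    RETENTION_PERIOD = timedelta(days=28)   # invented figure, not from the BBFC guidance
    PEPPER = secrets.token_bytes(32)        # secret key held separately from the data store

    def pseudonymise(document_reference: str) -> str:
        """Replace the raw identity reference with a keyed hash (HMAC-SHA256)."""
        return hmac.new(PEPPER, document_reference.encode(), hashlib.sha256).hexdigest()

    def store_verification(records: dict, document_reference: str) -> str:
        """Keep only the pseudonym and the fact of verification, never the document itself."""
        pseudonym = pseudonymise(document_reference)
        records[pseudonym] = {"verified": True, "stored_at": datetime.now(timezone.utc)}
        return pseudonym

    def purge_expired(records: dict) -> None:
        """Data minimisation: delete records once the retention period has passed."""
        cutoff = datetime.now(timezone.utc) - RETENTION_PERIOD
        for key in [k for k, v in records.items() if v["stored_at"] < cutoff]:
            del records[key]

The point is only that requirements such as pseudonymisation, minimisation and retention can be written precisely enough to audit against; the actual obligations would be whatever the code of conduct ends up specifying.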

The BBFC's proposed "voluntary standard" regime appears to be similar to a GDPR code of conduct, though it remains to be seen how specific the stipulations in the BBFC's standard are. A code of conduct would also involve being entered into the ICO's public register of UK approved codes of conduct, and the EDPB's public register for all codes of conduct in the EU.

Similarly, GDPR Recital 99 notes that "relevant stakeholders, including data subjects" should be consulted during the drafting period of a code of conduct - a requirement which is not in place for the BBFC's voluntary scheme.

It is possible that the BBFC have opted to create this voluntary scheme for age verification providers rather than use a code of conduct, because they felt they may not meet the GDPR requirements to be considered as an appropriate body to monitor compliance. Compliance must be monitored by a body who has demonstrated:

  • Their expertise in relation to the subject-matter;
  • They have established procedures to assess the ability of data processors to apply the code of conduct;
  • They have the ability to deal with complaints about infringements; and
  • Their tasks do not amount to a conflict of interest.

Parties Involved in the Code of Conduct Process

As noted by GDPR Recital 99, a consultation should be a public process which involves stakeholders and data subjects, and their responses should be taken into account during the drafting period:

"When drawing up a code of conduct, or when amending or extending such a code, associations and other bodies representing categories of controllers or processors should consult relevant stakeholders, including data subjects where feasible , and have regard to submissions received and views expressed in response to such consultations."

The code of conduct must be approved by a relevant supervisory authority (in this case the ICO).

An accredited body (BBFC) that establishes a code of conduct and monitors compliance is able to establish their own structures and procedures under GDPR Article 41 to handle complaints regarding infringements of the code, or regarding the way it has been implemented. BBFC would be liable for failures to regulate the code properly under Article 41(4), [1] however DCMS appear to have accepted the principle that the government would need to protect BBFC from such liabilities. [2]

GDPR Codes of Conduct and Risk Management

Below is a table of risks created by age verification which we identified during the consultation process. For each risk, we have considered whether a GDPR code of conduct may help to mitigate the effects of it.

Each entry below gives the risk, whether a code of conduct (CoC) could help mitigate it (Yes / No / Partially), and the details.

  • User identity may be correlated with viewed content. -- Partially. This risk can never be entirely mitigated if AV is to go ahead, but a CoC could contain very strict restrictions on what identifying data could be stored after a successful age verification.
  • Identity may be associated with an IP address, location or device. -- No. It would be very difficult for a CoC to mitigate this risk, as the only safe mitigation would be not to collect user identity information.
  • An age verification provider could track users across all the websites its tool is offered on. -- Yes. Strict rules could be put in place about what data an age verification provider may store, and what data it is forbidden from storing.
  • Users may be incentivised to consent to further processing of their data in exchange for rewards (content, discounts etc.). -- Yes. Age verification tools could be expressly forbidden from offering anything in exchange for user consent.
  • Leaked data creates major risks for identified individuals and cannot be revoked or adequately compensated for. -- Partially. A CoC can never fully mitigate this risk if any data is being collected, but it could contain strict prohibitions on storing certain information and specify retention periods after which data must be destroyed, which may mitigate the impacts of a data breach.
  • Risks to the user of access via shared computers if viewing history is stored alongside age verification data. -- Yes. A CoC could specify that any accounts for pornographic websites which may track viewed content must be strictly separate and not in any visible way linked to a user's age verification account or data that confirms their identity.
  • Age verification systems are likely to sacrifice security for convenience (no 2FA, auto-login, etc.). -- Yes. A CoC could stipulate that login cookies that "remember" a returning user must only persist for a short time period, and should recommend or enforce two-factor authentication.
  • The need to re-login to age verification services to access pornography in "private browsing" mode may lead people to avoid using this feature and generate much more data which is then stored. -- No. A CoC cannot fix this issue. Private browsing by nature will not store any login cookies or other objects and will require the user to re-authenticate with age verification providers every time they wish to view adult content.
  • Users may turn to alternative tools to avoid age verification, which carry their own security risks (especially "free" VPN services or peer-to-peer networks). -- No. Many UK adults, although over 18, will be uncomfortable with the need to submit identity documents to verify their age and will seek alternative means to access content. It is unlikely that many of these individuals will be persuaded by an accreditation under a GDPR code.
  • Age verification login details may be traded and shared among teenagers or younger children, which could lead to bullying or "outing" if such details are linked to viewed content. -- Yes. Strict rules could be put in place about what data an age verification provider may store, and what data it is forbidden from storing.
  • Child abusers could use their access to age verified content as an adult as leverage to create and exploit relationships with children and teenagers seeking access to such content (grooming). -- No. This risk will exist as long as age verification is providing a successful barrier to accessing such content for under-18s who wish to do so.
  • The sensitivity of content dealt with by age verification services means that users who fall victim to phishing scams or fraud have a lower propensity to report it to the relevant authorities. -- Partially. A CoC or education campaign may help consumers identify trustworthy services, but it cannot fix the core issue, which is that users are being socialised into it being "normal" to input their identity details into websites in exchange for pornography. Phishing scams resulting from age verification will appear and will be common, and the sensitivity of the content involved is a disincentive to reporting it.
  • The use of credit cards as an age verification mechanism creates an opportunity for fraudulent sites to engage in credit card theft. -- No. Phishing and fraud will be common. A code of conduct which lists compliant sites and tools externally on the ICO website may be useful, but a phishing site may simply pretend to be another (compliant) tool, or rely on the fact that users are unlikely to check with the ICO every time they wish to view pornographic content.
  • The rush to get age verification tools to market means they may take significant shortcuts when it comes to privacy and security. -- Yes. A CoC could assist in solving this issue if tools are given time to be assessed for compliance before the age verification regime commences.
  • A single age verification provider may come to dominate the market, leaving users little choice but to accept whatever terms the provider offers. -- Partially. Practically, a CoC could mitigate some of the effects of an age verification tool monopoly if the dominant tool is accredited under the Code. However, this relies on users being empowered to demand compliance with a CoC, and it is possible that users will instead be left with a "take it or leave it" situation where the dominant tool is not CoC accredited.
  • Allowing pornography "monopolies" such as MindGeek to operate age verification tools is a conflict of interest. -- Partially. As the BBFC note in their consultation response, it would not be reasonable to prohibit a pornographic content provider from running an age verification service, as it would prevent any site from running its own tool. However, under a CoC it is possible that a degree of separation could be enforced that requires age verification tools to adhere to strict rules about the use of data, which could mitigate the effects of a large pornographic content provider attempting to collect as much user data as possible for their own business purposes.
 

[1] "Infringements of the following provisions shall, in accordance with paragraph 2, be subject to administrative fines up to 10 000 000 EUR, or in the case of an undertaking, up to 2 % of the total worldwide annual turnover of the preceding financial year, whichever is higher: the obligations of the monitoring body pursuant to Article 41(4)."

[2] "contingent liability will provide indemnity to the British Board of Film Classification (BBFC) against legal proceedings brought against the BBFC in its role as the age verification regulator for online pornography."

 

 

A modern swear box...

It's probably not a good idea to leave much money in a Skype or Xbox Live account, as Microsoft can now seize it if they catch you using a vaguely offensive word


Link Here8th November 2018
Full story: Microsoft Snooping...Microsoft’s Windows 10 is a privacy nightmare
Microsoft has just inflicted a new 'code of conduct' that prohibits customers from communicating nudity, bestiality, pornography, offensive language, graphic violence and criminal activity, whilst allowing Microsoft to steal the money in your account.

If users are found to have shared, or be in possession of, these types of content, Microsoft can suspend or ban the particular user and remove funds or balance on the associated account.

It also appears that Microsoft reserves the right to view user content to investigate violations to these terms. This means it has access to your message history and shared files (including on OneDrive, another Microsoft property) if it thinks you've been sharing prohibited material.

Unsurprisingly, few users are happy that Microsoft is willing to delve through their personal data.

Microsoft has not made it clear if it will automatically detect and censor prohibited content or if it will rely on a reporting system. On top of that, Microsoft hasn't clearly defined its vague terms. Nobody is clear on what the limit on offensive language is.

 

 

Creeping about your life...

Facebook friend suggestion: Ms Tress who visits your husband upstairs at your house for an hour every Thursday afternoon whilst you are at work


Link Here8th November 2018
Full story: Facebook Privacy...Facebook criticised for discouraging privacy
Facebook has filed a patent that describes a method of using the devices of Facebook app users to identify various wireless signals from the devices of other users.

It explains how Facebook could use those signals to measure exactly how close the two devices are to one another and for how long, and analyse that data to infer whether it is likely that the two users have met. The patent also suggests the app could record how often devices are close to one another, the duration and time of meetings, and even use the devices' gyroscopes and accelerometers to analyse movement patterns, for example whether the two users may be going for a jog, smooching or catching a bus together.

Facebook's algorithm would use this data to analyse how likely it is that the two users have met, even if they're not friends on Facebook and have no other connections to one another. This might be based on the pattern of inferred meetings, such as whether the two devices are close to one another for an hour every Thursday, and an algorithm would determine whether the two users meeting was sufficiently significant to recommend them to each other and/or friends of friends.
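The patent is described only in outline, so the following is a guess at the general shape of the inference rather than Facebook's actual method: a short Python sketch, with an invented event format and invented thresholds, that flags a recurring weekly meeting from logged proximity events between two devices.

    # Illustrative only: infer a recurring weekly "meeting" from logged proximity
    # events between two devices. Field names and thresholds are invented for this
    # sketch and are not taken from Facebook's patent.
    from collections import Counter
    from datetime import datetime, timedelta
    from typing import List, Tuple

    # Each event: (start of co-location, how long the devices stayed close)
    ProximityEvent = Tuple[datetime, timedelta]

    def recurring_weekly_meeting(events: List[ProximityEvent],
                                 min_duration: timedelta = timedelta(minutes=45),
                                 min_weeks: int = 4) -> bool:
        """True if the devices were together for a substantial period on the same
        weekday in at least min_weeks different weeks."""
        weeks_per_weekday = Counter()
        seen = set()
        for start, duration in events:
            if duration < min_duration:
                continue
            key = (start.weekday(), start.isocalendar()[:2])   # (weekday, (ISO year, ISO week))
            if key not in seen:
                seen.add(key)
                weeks_per_weekday[start.weekday()] += 1
        return any(count >= min_weeks for count in weeks_per_weekday.values())

    # Example: an hour together every Thursday afternoon for five consecutive weeks
    events = [(datetime(2018, 10, 4, 14, 0) + timedelta(weeks=i), timedelta(hours=1))
              for i in range(5)]
    print(recurring_weekly_meeting(events))   # True

Even a crude rule like this would pick out the "hour every Thursday" pattern described above, which is what gives the patent its unsettling potential.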

I don't suppose that Facebook can claim this patent though as police and the security services have no doubt been using this technique for years.

 

 

Rarely challenged until now...

Privacy International challenges major data brokers over GDPR privacy rules


Link Here8th November 2018
Privacy International has filed complaints against seven companies -- data brokers (Acxiom, Oracle), ad-tech companies (Criteo, Quantcast, Tapad), and credit referencing agencies (Equifax, Experian) -- with data protection authorities in France, Ireland, and the UK.

It's been more than five months since the EU's General Data Protection Regulation (GDPR) came into effect. Fundamentally, the GDPR strengthens rights of individuals with regard to the protection of their data, imposes more stringent obligations on those processing personal data, and provides for stronger regulatory enforcement powers -- in theory.

In practice, the real test for GDPR will be in its enforcement.

Nowhere is this more evident than for the data broker and ad-tech industries that are premised on exploiting people's data. Despite exploiting the data of millions of people, these companies are on the whole non-consumer facing and therefore rarely have their practices challenged.

 

 

Hell to pay...

Satanic temple sues Netflix with a copyright claim over a statue of Baphomet


Link Here8th November 2018
The Satanic Temple in Salem, Massachusetts is suing Netflix and producers Warner Brothers over a statue of the goat-headed deity Baphomet that appears in the TV series Chilling Adventures of Sabrina .

The temple is claiming that Netflix and Warners are violating the copyright and trademark of the temple's own Baphomet statue, which it built several years ago.

Historically, the androgynous deity has been depicted with a goat's head on a female body, but The Satanic Temple created its statue with Baphomet having a male chest, an idea that was picked up by Netflix.

The Temple is seeking damages of at least $50 million for copyright infringement, trademark violation and injury to business reputation. In the Sabrina storyline, the use of the statue as the central focal point of the school associated with evil, cannibalism and possibly murder is injurious to TST's business, the Temple says in its suit.

 

 

Offsite Article: The police chiefs vs the thoughtpolice...


Link Here8th November 2018
Why police should stay out of hate incidents. By Fraser Myers

See article from spiked-online.com

 

 

Once Upon a Deadpool...

Children's version of Deadpool 2 rated PG-13 by the MPAA


Link Here7th November 2018
Deadpool 2 is a 2018 USA action comedy adventure by David Leitch.
Starring Josh Brolin, Morena Baccarin and Zazie Beetz. BBFC link IMDb

There are no censorship issues with this release. The film was initially released as an MPAA R rated Theatrical Version and an unrated extended 'Super Duper $@%!#& Cut'. Both are 15 rated by the BBFC. The film was cut in India for an adults-only 'A' rating and was banned in China.

Later the film was cut for an MPAA PG-13 rated version titled Once Upon a Deadpool. This version has just been rated PG-13 by the MPAA for intense sequences of violence and action, crude sexual content, language, thematic elements and brief drug material.

 

 

Don't pull the trigger...

New Zealand film censor demands a suicide trigger warning to be prefixed to A Star is Born


Link Here7th November 2018
Full story: Film censorship in New Zealand...At the Office of Film and Literature Classification
A Star Is Born is a 2018 USA romance by Bradley Cooper.
Starring Lady Gaga, Bradley Cooper and Sam Elliott. IMDb

Seasoned musician Jackson Maine (Bradley Cooper) discovers-and falls in love with-struggling artist Ally (Gaga). She has just about given up on her dream to make it big as a singer - until Jack coaxes her into the spotlight. But even as Ally's career takes off, the personal side of their relationship is breaking down, as Jack fights an ongoing battle with his own internal demons.

New Zealand's chief film censor, David Shanks, has demanded that a new warning be added to prints of the Oscar-tipped remake of A Star Is Born.

Shanks reacted after complaints of viewer distress from Police Victim Support, who said two vulnerable young people had been severely 'triggered' after watching a suicide scene in the film. The Office of Film & Literature Classification said further complaints were also filed to them by the Mental Health Foundation.

The film was rated M (PG-15 in US terminology) in Australia and this rating was then automatically accepted for distribution in New Zealand, albeit with the age recommendation increased to 16. The Australian consumer advice noted: Sex scenes, offensive language and drug use; the New Zealand censor has now added suicide to the list.

Shanks praised the film's handling of the topic but said he felt that the addition was still necessary. He said:

Many people in New Zealand have been impacted by suicide. For those who have lost someone close to them, a warning gives them a chance to make an informed choice about watching.

 

 

Campaign: Contract for the Web...

Tim Berners-Lee launches campaign to defend a free and open internet


Link Here7th November 2018
Speaking at the Web Summit conference in Lisbon, Tim Berners-Lee, inventor of the World Wide Web, has launched a campaign to persuade governments, companies and individuals to sign a Contract for the Web with a set of principles intended to defend a free and open internet.

Contract for the Web CORE PRINCIPLES

The web was designed to bring people together and make knowledge freely available. Everyone has a role to play to ensure the web serves humanity. By committing to the following principles, governments, companies and citizens around the world can help protect the open web as a public good and a basic right for everyone.

GOVERNMENTS WILL
  • Ensure everyone can connect to the internet so that anyone, no matter who they are or where they live, can participate actively online.
  • Keep all of the internet available, all of the time so that no one is denied their right to full internet access.
  • Respect people's fundamental right to privacy so everyone can use the internet freely, safely and without fear.
COMPANIES WILL
  • Make the internet affordable and accessible to everyone so that no one is excluded from using and shaping the web.
  • Respect consumers' privacy and personal data so people are in control of their lives online.
  • Develop technologies that support the best in humanity and challenge the worst so the web really is a public good that puts people first.
CITIZENS WILL
  • Be creators and collaborators on the web so the web has rich and relevant content for everyone.
  • Build strong communities that respect civil discourse and human dignity so that everyone feels safe and welcome online.
  • Fight for the web so the web remains open and a global public resource for people everywhere, now and in the future.
We commit to uphold these principles and to engage in a deliberative process to build a full "Contract for the Web", which will set out the roles and responsibilities of governments, companies and citizens. The challenges facing the web today are daunting and affect us in all our lives, not just when we are online. But if we work together and each of us takes responsibility for our actions, we can protect a web that truly is for everyone.

See more from fortheweb.webfoundation.org

 

 

The art of social division...

The politically correct National Trust promotes women's art by censoring men's art


Link Here7th November 2018
The National Trust has organised an art exhibition to promote the role of women and celebrate the life of Margaret Armstrong, the wife of a 19th-century industrialist. But instead of filling her grand country hall with artefacts about her life, the National Trust decided to cover up artworks that were created by or featured men.

Visitors described the project as ridiculous after paintings were covered with sheets and statues wrapped in bags. It was reported that staff at Cragside in Northumberland had to empty the comments box several times a day due to the volume of complaints.

Now the National Trust has admitted the idea backfired. It claimed the project was not about censoring art or being politically correct but was designed to encourage visitors to look at the collection differently and stimulate debate. The trust said:

Sometimes it doesn't work as we intended and we accept the feedback we have received. We've had a mix of positive and negative comments. We're going to look at it closely and it will be reviewed thoroughly.

 

 

Allowing pay TV to compete with Netflix whilst disadvantaging free to air...

Ofcom will allow TV channels to broadcast post watershed content during the daytime if the channel has a mandatory PIN access mechanism


Link Here 5th November 2018

Ofcom has published a statement setting out its decision to make changes to the rules about the use of mandatory PIN codes in Section One of the Broadcasting Code.

We publicly consulted on our proposals to update the mandatory daytime PIN rules in March 2018, and the statement concludes our review.

Section One of the Code currently allows for 15-rated films to be broadcast during the daytime on subscription film channels and up to 18-rated films on pay per view film channels, provided a mandatory PIN is in place. Mandatory PIN protection cannot be removed by the user and restricts access solely to those authorised to view.

The statement sets out Ofcom’s decision to extend the application of the mandatory PIN rules in Section One of the Code to permit scheduled television channels to show programmes, which can currently only be shown after the 9pm watershed, before this time, but only if mandatory daytime protection is in place.

We consider mandatory daytime protection to complement the existing 9pm watershed in providing a strong level of protection for children against programmes on broadcast services which might be unsuitable for them.

The changes to the rules include a requirement for broadcasters to clearly explain the new mandatory PIN systems in place to all users, and to provide clear guidance information with programmes to assist adults in assessing whether content is suitable for children.

The revisions to the relevant rules to extend mandatory daytime protection beyond premium film content will come into force on 1 January 2019.

We expect broadcasters and platform providers who intend to make use of mandatory daytime protection to inform their viewers about the new regime, and about the importance of parents setting strong PIN codes in advance.

 

 

Desperate...

Drinks censor dismisses ludicrous whinge about the size of a can of Desperados Tequila


Link Here5th November 2018
Full story: UK Drinks Censor...Portman Group play PC censor for drinks

A complaint made against Desperados has not been upheld by the Independent Complaints Panel

The complainant, a member of the public, believed that the sale of Heineken's Desperados in a 250 ml can could appeal to under 18s due to it being in the same size can as energy drinks. The complainant also believed that the size of the can could mean that the product could be downed in one.

The Panel first considered whether the product had a particular appeal to under-18s. The Panel noted that the 250ml can size did not have a traditional association with soft drinks, and the size of the can alone did not necessarily lead the product to be problematic under the Code. The Panel considered the other elements of the can's design and noted that the colour palette, although it contained bright and contrasting colours, had a mature theme. The Panel also considered that the language used provided clarity around its alcoholic content. Accordingly, the Panel did not find the product in breach of Code rule 3.2(h)

The Panel then considered if the product directly or indirectly urged the consumer to drink rapidly or down the contents in one. The Panel noted that the can did not feature any text or other instruction that the contents should be downed in one. The Panel was also clear that a smaller single-serve container was different from encouraging a rapid or 'down in one' message. Accordingly, the Panel did not find the product in breach of the Code.

 

 

Updated: An unwanted gift...

Social media site Gab censored by internet companies


Link Here5th November 2018
Gab, the social media site that prides itself as being uncensored, has been forced offline by its service providers after it became clear that the alleged Pittsburgh shooter Robert Bowers had a history of anti-semitic postings on the site.

Formed in August 2016 after Twitter began cracking down on hate speech on its social network, Gab describes itself as a free speech website and nothing more. But the platform has proved popular among the alt-right and far right, including the man accused of opening fire on a synagogue in Pennsylvania on Saturday, killing 11.

In the hours following the attack, when the suspect's postings were discovered on the site, Gab's corporate partners abandoned it one by one. PayPal and Stripe, two of the company's payment providers, dropped it, arguing that it breached policies around hate speech.

Cloud-hosting company Joyent also withdrew service on Sunday, giving Gab 24 hours notice of its suspension, as did GoDaddy, the site's domain registrar, which provides the Gab.com address. Both companies said the site had breached their terms of service.

Gab responded in a statement:

We have been systematically no-platformed by App Stores, multiple hosting providers, and several payment processors. We have been smeared by the mainstream media for defending free expression and individual liberty for all people and for working with law enforcement to ensure that justice is served for the horrible atrocity committed in Pittsburgh.

Update: A new home

5th November 2018. See  article from engadget.com

Gab is back online following censorship in the wake of the anti-Semitic shooting at a Pittsburgh synagogue. The social network had been banned by its hosting provider Joyent and domain registrar GoDaddy, and blacklisted by other services such as PayPal, Stripe and Shopify.

Now, Gab has come back online and has found a new hosting provider in Epik. According to a blog post published on November 3rd, Epik CEO Robert Monster spoke out against the idea of digital censorship and decided to provide hosting privileges to Gab because he looks forward to partnering with a young, and once brash, CEO who is courageously doing something that looks useful.

 

 

Committed to ending free speech...

The Law Commission seems to side with the easily offended and seeks to extend the criminalisation of internet insults


Link Here4th November 2018
Full story: Insulting UK Law...UK prosecutions of jokes and insults on social media
Reforms to the law are required to protect victims from online and social media-based abuse, according to a new Report by the Law Commission for England and Wales.

In its Scoping Report assessing the state of the law in this area, published today [1st November 2018] the Law Commission raises concerns about the lack of coherence in the current criminal law and the problems this causes for victims, police and prosecutors. It is also critical of the current law's ability to protect people harmed by a range of behaviour online including:

  • Receiving abusive and offensive communications
  • "Pile on" harassment, often on social media
  • Misuse of private images and information

The Commission is calling for:

  • reform and consolidation of existing criminal laws dealing with offensive and abusive communications online
  • a specific review considering how the law can more effectively protect victims who are subject to a campaign of online harassment
  • a review of how effectively the criminal law protects personal privacy online
Professor David Ormerod QC, Law Commissioner for Criminal Law said:
"As the internet and social media have become an everyday part of our lives, online abuse has become commonplace for many."

"Our report highlights the ways in which the criminal law is not keeping pace with these technological changes. We identify the areas of the criminal law most in need of reform in order to protect victims and hold perpetrators to account."

Responding to the Report, Digital Minister Margot James said:

"Behaviour that is illegal offline should be treated the same when it's committed online. We've listened to victims of online abuse as it's important that the right legal protections are in place to meet the challenges of new technology.

"There is much more to be done and we'll be considering the Law Commission's findings as we develop a White Paper setting out new laws to make the UK a safer place to be online.

Jess Phillips MP, Chair, and Rt Hon Maria Miller MP, Co-Chair, of the All-Party Parliamentary Group on Domestic Violence and Abuse and Katie Ghose, Chief Executive of Women's Aid, welcomed the Report saying:

"Online abuse has a devastating impact on survivors and makes them feel as though the abuse is inescapable. Online abuse does not happen in the 'virtual world' in isolation; 85% of survivors surveyed by Women's Aid experienced a pattern of online abuse together with offline abuse. Yet too often it is not taken as seriously as abuse 'in the real world'.

"The All-Party Parliamentary Group on Domestic Violence and Abuse has long called for legislation in this area to be reviewed to ensure that survivors are protected and perpetrators of online abuse held to account. We welcome the Law Commission's report, which has found that gaps and inconsistencies in the law mean survivors are being failed. We support the call for further review and reform of the law".

The need for reform

We were asked to assess whether the current criminal law achieved parity of treatment between online and offline offending. For the most part, we have concluded that abusive online communications are, at least theoretically, criminalised to the same or even a greater degree than equivalent offline offending. However, we consider there is considerable scope for reform:

  • Many of the applicable offences do not adequately reflect the nature of some of the offending behaviour in the online environment, and the degree of harm it can cause.
  • Practical and cultural barriers mean that not all harmful online conduct is pursued in terms of criminal law enforcement to the same extent that it might be in an offline context.
  • More generally, criminal offences could be improved so they are clearer and more effectively target serious harm and criminality.
  • The large number of overlapping offences can cause confusion.
  • Ambiguous terms such as "gross offensiveness", "obscenity" and "indecency" don't provide the required clarity for prosecutors.

Reforms would help to reduce and tackle, not only online abuse and offence generally but also:

  • "Pile on" harassment , where online harassment is coordinated against an individual. The Report notes that "in practice, it appears that the criminal law is having little effect in punishing and deterring certain forms of group abuse".
  • The most serious privacy breaches -- for example the Report highlights concerns about the laws around sharing of private sexual images. It also questions whether the law is adequate to deal with victims who find their personal information e.g. about their health or sexual history, widely spread online.
Impact on victims

The Law Commission heard from those affected by this kind of criminal behaviour including victims' groups, the charities that support them, MPs and other high-profile victims.

The Report analyses the scale of online offending and suggests that studies show that the groups in society most likely to be affected by abusive communications online include women, young people, ethnic minorities and LGBTQ individuals. For example, the Report finds that gender-based online hate crime, particularly misogynistic abuse, is particularly prevalent and damaging.

It also sets out the factors which make online abuse so common -- including the disinhibition of communicating with an unseen victim and the ease with which victims can be identified.

The Report highlights harms caused to the victims of online abuse which include:

  • psychological effects, such as depression and anxiety
  • emotional harms, such as feelings of shame, loneliness and distress
  • physiological harms, including suicide and self-harm in the most extreme cases
  • exclusion from public online space and corresponding feelings of isolation
  • economic harms
  • wider societal harms

It concludes that abuse by groups of offenders online, and the use of visual images to perpetrate abuse are two of the ways in which online abuse can be aggravated.

Next steps

The Department for Digital, Culture, Media and Sport (DCMS) will now analyse the Report and decide on the next steps including what further work the Law Commission can do to produce recommendations for how the criminal law can be improved to tackle online abuse.

Comment: Law Commission must safeguard freedom of expression

See  statement from indexoncensorship.org

Index on Censorship urges the Law Commission to safeguard freedom of expression as it moves towards the second phase of its review of abusive and offensive online communications.

The Law Commission published a report on the first phase of its review of criminal law in this area on 1 November 2018.

While Index welcomes the report's recognition that current UK law lacks clarity and certainty, the review is addressing questions that impact directly on freedom of expression and the Law Commission should now proceed with great caution.

Safeguarding the fundamental right to freedom of expression should be a guiding principle for the Law Commission's next steps. Successive court rulings have confirmed that freedom of expression includes having and expressing views that offend, shock or disturb. As Lord Justice Sir Stephen Sedley said in a 1999 ruling:

"Free speech includes not only the inoffensive but the irritating, the contentious, the eccentric, the heretical, the unwelcome and the provocative provided it does not tend to provoke violence. Freedom only to speak inoffensively is not worth having".

Foreign Secretary Jeremy Hunt also reaffirmed the UK's commitment to the protection and promotion of freedom of expression this week, asserting the importance of a free media in particular as a cornerstone of democracy.

The next phase of the review should outline how the UK can show global leadership by setting an example for how to improve outdated legislation in a way that ensures freedom of expression, including speech that is contentious, unwelcome and provocative.

Index on Censorship chief executive Jodie Ginsberg said:

"Index will be studying the Law Commission's first phase report on its review of abusive and offensive online communications carefully. Future proposals could have a very negative impact on freedom of expression online and in other areas. Index urges the Law Commission to proceed with care."

 

 

Hunter Killer...

The movie is banned in Russia and Ukraine over its plot about a kidnapped Russian president


Link Here4th November 2018
Hunter Killer is a 2017 China / USA action thriller by Donovan Marsh.
Starring Gary Oldman, Gerard Butler and Michael Nyqvist. IMDb

An untested American submarine captain teams with U.S. Navy Seals to rescue the Russian president, who has been kidnapped by a rogue general.

Hollywood blockbuster Hunter Killer has been blocked from release in Russia just hours before its premiere.

Critics say the Kremlin banned it because it shows a Russian president being deposed and kidnapped in a coup before being rescued by American soldiers. Opposition politician Dmitry Gudkov suggested that the ministry could be blocking the movie for suggesting even a fictional possibility that a Russian President could be deposed. He commented on Facebook:

What are these bastards from Hollywood suggesting? That someday (Defence Minister Sergei) Shoigu... comes out quietly and tops Putin? That will not happen.

Moscow claimed the film's license was revoked because producers had not provided a good enough copy for its state archive.

The movie's troubles in Russia follow its mysterious failure to open in neighbouring Ukraine, with the distribution company telling AFP that the government is preventing the film from screening. "[The state film committee] watched the film, it fell under some law and was banned," said a representative of Kinomania, the Ukrainian film distributor which in August announced the premiere of Hunter Killer on October 25.

Though there is no official explanation of the ban in Ukraine, reports have cited a Ukrainian law which bans films popularising a military of the aggressor state or creating a positive image of its employees.

 

 

Anti-Climax...

Gaspar Noe's Climax banned in Lebanon


Link Here4th November 2018
Climax is a 2018 France musical horror mystery by Gaspar Noé.
Starring Sofia Boutella, Romain Guillermic and Souheila Yacoub. BBFC link IMDb

Birth and death are extraordinary life experiences. Life is a fleeting pleasure. Following a successful and visually dazzling rehearsal, a dance troupe celebrates with a party. But when it becomes apparent that someone has spiked the Sangria, the dancers soon begin to turn on each other in an orgiastic frenzy.

The third edition of Maskoon, the first festival in the Arab region specializing in horror, fantasy, thriller, action and science fiction movies, has opened in Lebanon. However the festival is missing two items from its schedule. The country's censorship authorities have banned two films: one is a short Lebanese film titled Nocturnal Deconstruction by Laura El Alam, and the second is Gaspar Noé's Climax.

Nocturnal Deconstruction is a 16-minute film telling the story of a woman who has decided to overcome the void in her life by trying a drug that eliminates the problems of self-confidence, and allows everyone who takes it to love himself again.

Myriam Sassine, the festival's director, said in a speech that censorship had decided to ban the two films for vague and unclear reasons.

The festival's artistic director Antoine Waked expressed his regret at the ban on Climax, saying the film was made to be shown in cinemas and its artistic value comes across on the big screen; now everyone will see it on DVD or download it from the internet, so all the censorship has achieved is to deprive people of the chance to see it on the big screen.

 

 

The House That Jack Built...

Will be screened as a Director's Cut and also an MPAA R rated Theatrical Version


Link Here4th November 2018
The House That Jack Built is a 2018 Denmark / France / Germany / Sweden horror thriller by Lars von Trier.
Starring Matt Dillon, Bruno Ganz and Uma Thurman. IMDb

USA in the 1970s. We follow the highly intelligent Jack over a span of 12 years and are introduced to the murders that define Jack's development as a serial killer. We experience the story from Jack's point of view, while he postulates each murder is an artwork in itself. As the inevitable police intervention is drawing nearer, he is taking greater and greater risks in his attempt to create the ultimate artwork. Along the way we experience Jack's descriptions of his personal condition, problems and thoughts through a recurring conversation with the unknown Verge - a grotesque mixture of sophistry mixed with an almost childlike self-pity and psychopathic explanations. The House That Jack Built is a dark and sinister story, yet presented through a philosophical and occasional humorous tale.

Lars von Trier's The House That Jack Built caused a little controversy at its premiere at the Cannes Film Festival. Critics were divided as a few walked out in disgust at the violence and cruelty, whilst others gave it a standing ovation.

[ Spoilers! ]

A particularly controversial moment was Jack, as a young boy, snipping off the leg of a duck. But even that got a mixed response, as animal rights campaigners PETA defended the film, praising its accurate portrayal of the link between adolescent animal abuse and psychopathy. Of course the effect was achieved in CGI and no ducks were harmed in the making of the film.

Anyway the controversy seems to have resulted in the creation of a different version for general distribution in the US. The version shown at Cannes is now tagged as the Director's Cut and will be screened for one day only on 28th November. The film will then be put on wider release on 14th December in an MPAA R rated Theatrical Version. However it will be the Director's Cut that is released online, also on 14th December. It is not yet clear what has been cut for the Theatrical Version.

Update: IFC responds to the MPAA

2nd December 2018.  See  article from darkhorizons.com

IFC issues a statement responding to the MPAA whinge:

IFC Films has not received any written notice from the MPAA regarding sanctions in connection with THE HOUSE THAT JACK BUILT. It has always been IFC Films' priority to maintain the artistic vision of our filmmakers and we do not believe that the one-day screening of the Director's Cut unrated version has violated the MPAA's Classification and Rating Rules.

 

 

Looking out for easy offence...

Advert censor ASA launches a new strategy document announcing more proactive censorship of online advertising


Link Here2nd November 2018
The advert censors of ASA have published a five year strategy, with a focus on more censorship of online advertising including exploring the use of machine learning in regulation.

The strategy will be officially launched at an ASA conference in Manchester, entitled The Future of Ad Regulation.

ASA explains the highlights of its strategy:

  • We will prioritise the protection of vulnerable people and appropriately limiting children and young people's exposure to age-restricted ads in sectors like food, gambling and alcohol.
  • We will listen in new ways, including research, data-driven intelligence gathering and machine learning -- our own or that of others -- to find out which other advertising-related issues are the most important to tackle.
  • We will develop our thought-leadership in online ad regulation, including on advertising content and targeting issues relating to areas like voice, facial recognition, machine-generated personalised content and biometrics.
  • We will explore lighter-touch ways for people to flag concerns.
  • We will explore whether our decision-making processes and governance always allow us to act nimbly, in line with people's expectations of regulating an increasingly online advertising world.
  • We will explore new technological solutions, including machine learning, to improve our regulation.

Online trends are reflected in the balance of our workload - 88% of the 7,099 ads amended or withdrawn in 2017 following our action were online ads, either in whole or in part. Meanwhile, two-thirds of the 19,000 cases we resolved last year were about online ads.

Our guiding principle is that people should benefit from the same level of protection against irresponsible online ads as they do offline. The ad rules apply just as strongly online as they do to ads in more traditional media.

Our recent rebalancing towards more proactive regulation has had a positive impact, evidenced by steep rises in the number of ads withdrawn or changed (7,009 last year, up 47% on 2016) and the number of pieces of advice and training delivered to businesses (on course to exceed 400,000 this year). This emphasis on proactive regulation -- intervening before people need to complain about problematic ads -- will continue under the new strategy.

The launch event - The Future of Ad Regulation conference - will take place at Manchester Central Convention Complex on 1 November. Speakers will include Professor Tanya Byron, Reg Bailey, BBC Breakfast's Tina Daheley, Marketing Week's Russell Parsons, ASA Chief Executive Guy Parker and ASA Chairman David Currie.

ASA Chief Executive Guy Parker said:

We're a much more proactive regulator as a result of the work we've done in the last five years. In the next five, we want to have even more impact regulating online advertising. Online is already well over half of our regulation, but we've more work to do to take further steps towards our ambition of making every UK ad a responsible ad.

Lord Currie, Chairman of the ASA said:

The new strategy will ensure that protecting consumers remains at the heart of what we do but that our system is also fit for purpose when regulating newer forms of advertising. This also means harnessing new technology to improve our ways of working in identifying problem ads.

 

 

It would prevent them from censoring conservative voices...

Google claims that is impractical to require it to implement US constitutional free speech


Link Here 2nd November 2018
Full story: Google Censorship...Google censors adult material froms its websites
Prior to Google's bosses being called in to answer for its policy of silencing conservative voices, the company has filed a statement in court arguing that it cannot be compelled to host speech, even if it does discriminate on the basis of political viewpoints. It said:

Not only would it be wrong to compel a private company to guarantee free speech in the way that government censorship is forbidden by the Constitution, but it would also have disastrous practical consequences.

Google argued that the First Amendment appropriately limits the government's ability to censor speech, but applying those limitations to private online platforms would undermine important content regulation. If they are bound by the same First Amendment rules that apply to the government, YouTube and other service providers would lose much of their ability to protect their users against offensive or objectionable content -- including pornography, hate speech, personal attacks, and terrorist propaganda.

 

 

Messing around the fans...

Sky Sport has developed technology to mute football crowds from chanting 'Sky TV is fucking shit'


Link Here2nd November 2018
Leeds United football fans are a little pissed off at Sky TV for messing around with kick off times to suit TV viewers rather than the football fans attending the matches. Leeds fans seem more displeased than most due to the number of their matches televised.

From September 28 to November 10, seven of their eight matches have been, or are scheduled to be, televised. Of Leeds's 15 league matches so far this season, only four have kicked off at 3pm on a Saturday, often creating transport problems for supporters.

The fans have found a way of giving voice to their complaints by chanting "Sky TV is fucking shit" during televised matches.

Sky Sports have now responded by using technology to mute the chants. During last Saturday's fixture with Nottingham Forest at Elland Road, the chants were hushed by the broadcaster on more than one occasion using a process known as dampening.
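Sky have not explained how the muting is done, but dampening generally means pulling down the level of the offending audio rather than cutting the feed. As a purely hypothetical sketch of one way this could work, the following Python/numpy snippet applies a crude ducking pass that reduces the gain of any half-second window whose loudness crosses a threshold; the window length, threshold and reduction factor are invented and this is not a description of Sky's actual system.

    # Illustrative only: a crude "ducking" pass over a crowd-mic signal, scaling
    # down any window whose RMS level exceeds a threshold. A guess at what
    # "dampening" might involve, not Sky's actual method.
    import numpy as np

    def duck_loud_passages(signal, sample_rate, window_s=0.5, threshold=0.2, reduction=0.25):
        """Scale down any window whose RMS level exceeds the threshold."""
        out = np.array(signal, dtype=float, copy=True)
        window = max(1, int(window_s * sample_rate))
        for start in range(0, len(out), window):
            chunk = out[start:start + window]
            rms = np.sqrt(np.mean(chunk ** 2))
            if rms > threshold:
                out[start:start + window] *= reduction
        return out

    # Example: quiet ambience with a loud one-second "chant" in the middle
    sr = 8000
    audio = 0.05 * np.random.randn(3 * sr)
    audio[sr:2 * sr] += 0.8 * np.sin(2 * np.pi * 220.0 * np.arange(sr) / sr)
    dampened = duck_loud_passages(audio, sr)

In a live broadcast the same idea would presumably be applied selectively to the crowd-effects microphones rather than to the whole mix, so that commentary is unaffected.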

 

 

Updated: The Grinch...

Cinema release cut for a 'U' rating


Link Here2nd November 2018

The Grinch is a 2018 China / USA children's cartoon comedy by Yarrow Cheney and Scott Mosier.
Starring Benedict Cumberbatch. BBFC link IMDb

BBFC category cuts were required for a U rated cinema release in 2018.

UK: 2D and 3D versions, including the short Yellow is the New Black, passed U for mild slapstick, very mild bad language after 1s of BBFC category cuts (89:55s):
  • 2018 cinema release
The BBFC commented:
  • Category cuts were made: the company chose to remove a use of mild bad language in order to obtain a U classification. An uncut PG was available.

Summary Notes

For their eighth fully animated feature, Illumination and Universal Pictures present The Grinch, based on Dr. Seuss' beloved holiday classic. The Grinch tells the story of a cynical grump who goes on a mission to steal Christmas, only to have his heart changed by a young girl's generous holiday spirit. Funny, heartwarming, and visually stunning, it's a universal story about the spirit of Christmas and the indomitable power of optimism. Academy Award® nominee Benedict Cumberbatch lends his voice to the infamous Grinch, who lives a solitary life inside a cave on Mt. Crumpet with only his loyal dog, Max, for company. With a cave rigged with inventions and contraptions for his day-to-day needs, the Grinch only sees his neighbors in Whoville when he runs out of food. Each year at Christmas they disrupt his tranquil solitude with their increasingly bigger, brighter, and louder celebrations. When the Whos declare they are going to make Christmas three times bigger this year, the Grinch ...

Update: Assinine decision

2nd November 2018. Thanks to Hadyn

The BBFC commented on Twitter:

A few uses of the word 'ass' in a song were removed.

 

 

Offsite Article: Bad taste...


Link Here2nd November 2018
Mocking vegans should not be a sackable offence. William Sitwell has been given the heave-ho over a private joke. By Brendan O'Neill

See article from spiked-online.com

