Melon Farmers Original Version

Censor Watch


2020: March


 

Take your medicine, stay home for 3 months, and don't worry about the depression...

UK government to censor quack cures for coronavirus


Link Here 31st March 2020
The UK government is reported to be actively working with social media to remove coronavirus fake news and harmful content.

Social media companies have responded by introducing several sweeping rule changes that crack down on any dissenting opinions and push users to what they deem to be authoritative or credible sources of information. And now the BBC is reporting that the UK government will be working with these social media companies to remove what it deems to be fake news, harmful content, and misinformation related to the coronavirus.

The report doesn't specify how the UK government will determine what qualifies as fake news or harmful content.

Twitter has updated its rules around the coronavirus to target users who deny expert guidance. The company has also forced some users to remove jokes about the virus.

 

 

Offsite Article: The US government should disclose how it's using location data to fight the coronavirus...


Link Here 31st March 2020
There's a good case for using smartphone data in the COVID-19 response, but Americans deserve an explanation. By Casey Newton

See article from theverge.com

 

 

Offsite Article: Europeans face lost privacy...


Link Here 31st March 2020
Brussels considers pan-EU police searches of ID photos

See article from politico.eu

 

 

Blocking streams...

Australia reveals a new internet censorship mechanism targeted at terrorist events


Link Here 30th March 2020
Full story: Internet Censorship in Australia...Wide ranging state internet censorship
Australia has introduced a new internet censorship mechanism initially targeted at blocking terrorist content such as the live streaming of the Christchurch mosque murders.

Australian ISPs will block websites hosting graphic terrorist videos following an online crisis event under the direction of the eSafety Commissioner. Websites may be blocked for around five days.

The eSafety Commissioner, Julie Inman Grant, said a high threshold must be reached before a website can be blocked.

Minister for Communications Paul Fletcher said the protocol was an important new mechanism that will help keep Australians safe online. He added:

Now we have a framework in place to enable a rapid, coordinated and decisive response to contain the rapid spread of terrorist or extreme violent material.

The censorship protocol was created by the eSafety Commissioner's office and the Communications Alliance, which represents the country's telecommunication industry.

According to the new guidelines, an online crisis event occurs when an act of terrorism or violent crime takes place, and material depicting or promoting that incident is shared quickly online. To be blocked, the material must be likely to cause significant harm to the community by causing trauma or be content that promotes, incites or instructs in terrorist acts or violent crimes, among other considerations.

The Government plans to legislatively back the new protocol as part of a forthcoming Online Safety Act.

 

 

Offsite Article: Robin Hood: Prince Of Thieves...


Link Here 30th March 2020
The story of its extended cut. By Simon Brew

See article from filmstories.co.uk

 

 

Texas Chainsaw Massacre 2...

A 1986 chainsaw duel with the BBFC


Link Here 29th March 2020
The Texas Chainsaw Massacre 2 is a 1986 US horror by Tobe Hooper.
With Dennis Hopper, Caroline Williams and Jim Siedow.

Scott writes:

I've been researching The Texas Chainsaw Massacre II's history with the BBFC. I've found details in a book called The Texas Chainsaw Massacre Companion, by Stefan Jaworzyn. Here's what I've learnt from the book:

The film was submitted to the BBFC by Columbia-Cannon-Warner (CCW) in 1986. The Board viewed the uncut film three times. Even after the third viewing, on October 23rd 1986, they still couldn't decide whether to cut it heavily or reject it outright.

The distributors then submitted a pre-cut version, known as the "Northern European Cannon Cut". The technical manager of CCW, Steve Southgate, wrote to the BBFC on October 29th to detail some of the cuts to this version:

  • "The opening scene with the boys in a car, when they are attacked, now only consists of one shot of an open head wound."

  • "Scene in radio station where L.G. is being hit on his head with a hammer now has been reduced to only three blows. (The scene with the girl sitting with her legs astride with Leather Face in front of her with chainsaw remains the same.)"

  • "The scene with Dennis Hopper going into the underground cabin for the first time, where he discovers blood and entrails coming out of a wall has been shortened."

  • "The complete scene where Leather Face uses an electric knife on L.G. removing flesh from legs, chest and face has been removed, and also Leather Face placing skin mask on girl's face has been removed."

  • The scene where Leather Face has chainsaw put through his chest has been shortened to establishing shot only. The scene with Chop-Top cutting his throat has been shortened. The scene with Chop-Top slashing girls back with cutthroat razor has been reduced.

Despite the heavy pre-cuts, this still failed to get through.

By May 21st 1987, numerous cut versions had been attempted and all had failed, yet the BBFC still felt further cuts were required. At this point, the distributors gave up.

The unreasonable dithering by the BBFC has been widely interpreted as an unofficial ban where no amount of cuts would have actually been acceptable to the BBFC.

The distributors had to wait until James Ferman left the board before trying again. Under the stewardship of Robin Duval the film was finally passed 18 uncut in 2001.

 

 

The Australian Censorship Wasteland...

Previously banned game, Wasteland 3, is cut for an R18+ rating


Link Here 29th March 2020
Full story: Banned Games in Australia...Games and the Australian Censorship Board
Wasteland 3 is a 2020 US multiplayer role-playing game by inXile.

The game was banned by the Australian Censorship Board in February 2020. The censors did not provide any meaningful reason for the ban, but the censors' usual bugbear is something considered naughty being used as an incentive in the game mechanics.

The game makers have elected to make cuts and resubmit the game. This time round the censors passed the cut version as R18+ for: Sexual activity related to incentives and rewards, online interactivity.

 

 

Blacked Out...

Barbara Taylor Bradford forced to change the name of upcoming novel, Blackie and Emma, over political correctness fears


Link Here 29th March 2020
The literary character Shane 'Blackie' O'Neill, from Co. Kerry, is a popular one, created by novelist Barbara Taylor Bradford. But his name has meant that the proposed title of her latest book Blackie and Emma has now had to be changed.

The prequel to the highly successful A Woman of Substance was due for imminent release. But at the last minute her publishers feared that the title Blackie and Emma might offend political correctness and asked her to come up with an alternative.

A quote used in the promotional material rather demonstrates how key the nickname is:

I am that, to be sure. Shane O'Neill's the name, but the whole world calls me Blackie.

The author spoke about the last-minute change to the title, saying that the book will now be called Shane O'Neill and Emma Harte.

 

 

Offsite Article: The woke war on Brazil's Carnival...


Link Here 29th March 2020
Is there nothing middle-class puritans won't try to suck the joy out of? By Raphael Tsavkko Garcia

See article from spiked-online.com

 

 

Propaganda fines...

High Court confirms Ofcom's fines for RT


Link Here 28th March 2020
A High Court justice has dismissed a Russia Today (RT) complaint that a massive £200,000 fine imposed by Ofcom last year was disproportionate. The court endorsed the TV censor's decision to fine RT for a breach of its impartiality rules.

RT had issued legal complaints that Ofcom's decisions were a disproportionate interference with RT's right to freedom of expression, and said other stations had received smaller fines for more serious breaches.

Following an investigation in 2018, Ofcom found that RT had broken TV impartiality rules in seven programmes discussing the Salisbury nerve agent attacks. Ofcom said RT had failed to give due weight to a wide range of voices on a matter of major political controversy.

RT has yet to respond to the ruling.

 

 

Shouting fire in a crowded theatre...

Alex Jones' Infowars app gets ejected from Google's Play Store


Link Here 28th March 2020
Alex Jones' InfoWars is already widely banned from social media on political grounds for disputing politically correct dictates. However disputing the need for social distancing during the covid crisis was maybe a step too far, leading to Infowars' Android app being ejected from Google's Play Store.

The app was apparently removed because of a video posted by radio host and conspiracy theorist Alex Jones that, according to Wired, disputed the need for social distancing, shelter in place, and quarantine efforts meant to slow the spread of the novel coronavirus.

A Google spokesperson said:

When we find apps that violate Play policy by distributing misleading or harmful information, we remove them from the store.

Infowars was not immediately available for comment.

Last week, Alex Jones was ordered by New York Attorney General Letitia James to stop selling Infowars products that were marketed as a treatment or cure for the coronavirus.

 

 

Contagion...

When a fictional movie is more upsetting than the real thing


Link Here 28th March 2020
Viewers were 'outraged' to discover the movie Contagion was broadcast during the pandemic. It stars Gwyneth Paltrow as Beth Emhoff, the first person to contract the deadly infection.

But the critically acclaimed film didn't sit well with viewers currently in coronavirus lockdown, as it hit a bit too close to home. And inevitably the Sun rounded up a few comments from Twitter, eg:

"Sorry @ITV but who thought Contagion was a good movie to broadcast last night?"

 

 

Let's hope it helps...

European mobile phone networks agree to share user location data to track coronavirus


Link Here 27th March 2020
Eight major mobile carriers have agreed to share customer location data with the European Commission in a bid to track the spread of COVID-19.

The announcement arrives after Deutsche Telekom, Orange, Telefonica, Telecom Italia, Telenor, Telia, A1 Telekom Austria and Vodafone discussed tracking options with European Commissioner for Internal Market and Services Thierry Breton.

Government officials attempted to allay the fears of critics by noting all collected data will be anonymized and destroyed once the pandemic is squashed.

 

 

Obituary: Surely now re-animated in movie heaven...

Film director Stuart Gordon dies aged 72


Link Here 26th March 2020
Beloved genre director Stuart Gordon has passed away at age 72.

His magnificent career began with his remarkable 1985 debut feature Re-Animator, an adaptation of an H.P. Lovecraft story. The film spawned two sequels as well as a 2011 stage adaptation, Re-Animator: The Musical.

The filmmaker followed up this success with two more adaptations of Lovecraft's writing with 1986's From Beyond and 1995's Castle Freak . Other Lovecraft adaptations include 2001's Dagon and the Masters of Horror episode Dreams In the Witch-House.

Other notable films include the 1990 sci-fi film Robot Jox , 1992's Fortress , 1996 sci-fi comedy Space Truckers and the 2005 drama Edmond.

 

 

Coronavirus and surveillance technology...

How far will governments go? Governments mobilized digital surveillance to contain the spread of the virus


Link Here 26th March 2020

Since the COVID-19 outbreak became a fast-spreading pandemic, governments from across the globe have implemented new policies to help slow the spread of the virus.

In addition to closing borders to non-citizens, many governments have also mobilized digital surveillance technologies to track and contain visitors and citizens alike.

On Wednesday, the Hong Kong government announced that all new arrivals to the city must undergo two weeks of self-quarantine, while wearing an electronic wristband that connects to a location tracking app on their phones.

If the app detects changes in the person's location, it will alert the Department of Health and the police. Prior to this new policy, only people who had recently visited Hubei province in China were required to wear a monitoring wristband during their quarantine period.

While surveillance technologies and measures may give the public a sense of security in controlling the spread of the virus, we must remain mindful and vigilant of their continued use after the pandemic subsides.

European and North American countries like Italy, Spain, and the US are currently being hit hard by the coronavirus. Meanwhile, Asian countries have been praised by international media for their swift responses and use of surveillance technologies to control the outbreak.

The Singaporean government, for example, implemented policies that can effectively and rigorously trace a complex chain of contacts. As of February, anyone entering a government or corporate building in Singapore will have to provide their contact information.

In addition, the government has been gathering a substantial amount of data detailing not only each known case of infection but also where the person lives, works and the network of contacts they are connected to.

While these measures have thus far seemed to yield positive results, they have highlighted the technological capacity and power of the government to monitor the movements and lives of every individual.

In China, where Covid-19 was first detected, the government has been deploying not only drastic lockdown policies but also a variety of surveillance technologies to ensure public compliance with self-quarantine and isolation.

In addition to using drones to monitor people's movements and ensure they are staying home, police in five Chinese cities have taken to patrolling the streets wearing smart helmets equipped with thermal screening technologies that sound an alarm if a person's temperature is higher than the threshold.

The government has also collaborated with the company Hanwang Technology Limited to refine its existing facial recognition technology, so that it can work even when the person is wearing a face mask.

When connected to a temperature sensor and the Chinese government's existing database as well as state-level intel, this technology allows authorities to immediately identify the name of each person whose body temperature is above 38 degrees Celsius.

According to Hanwang Technology, this refined facial recognition technology can identify up to 30 people within a second.

While the use of surveillance technologies like these has been effective in lowering the number of confirmed cases in China, it is not without risks.

Beyond the pandemic, both the Chinese government and the company have substantial interests in further developing and deploying this technology: the government can make use of it to track and suppress political dissidents, and the company has much to gain financially.

This technology can also be co-opted by China's counterterrorism forces to further monitor and regulate the movement of the Uighur people, who are categorised as terrorists by the Chinese government and are currently being forced into mass detention camps and subjected to forced labour.

Outside of Asia, Middle Eastern countries like Israel and Iran have also been deploying similar surveillance technologies, citing the need to control the spread of the coronavirus.

The Israeli government now makes use of technologies developed for counterterrorism to collect cellphone data, so that the government can trace people's contact network, and identify those who need to be quarantined.

The geolocation data gathered via people's phones will then be used to alert the public where not to go based on the pattern of infection.

Not only is it unprecedented for Israel to deploy counterterrorism data to combat a public health crisis, but the existence of this data trove has also, according to the New York Times, not been reported prior to this.

On March 6, researcher Nariman Gharib revealed that the Iranian government had been tracking its citizens' phone data through an app disguised as a coronavirus diagnostic tool.

Security expert Nikolaos Chrysaidos confirmed that the app collected sensitive personal information unrelated to the outbreak -- for example, the app recorded the bodily movements of the user the way a fitness tracker would.

Google has since removed the app from Google Play, but this case demonstrates the need for ongoing public vigilance over government use of surveillance technologies in the name of public health.

Safeguarding public health has historically been used as a justification for mainstream institutions and government authorities to stigmatise, monitor, and regulate the lives of marginalised people -- such as immigrants, racial minorities, LGBTQ+ people, and people living in poverty.

If we do not hold our government accountable for its use of surveillance technologies during the current pandemic and beyond, we will be putting those who are already marginalised at further risks of regulation, suppression, and persecution.

 

 

Protect Speech and Security Online...

Calling on Americans to reject the Graham-Blumenthal Proposal


Link Here 25th March 2020
Full story: Internet Censorship in USA...Domain name seizures and SOPA

Senators Lindsey Graham and Richard Blumenthal are quietly circulating a serious threat to your free speech and security online. Their proposal would give the Attorney General the power to unilaterally write new rules for how online platforms and services must operate in order to rely on Section 230, the most important law protecting free speech online. The AG could use this power to force tech companies to undermine our secure and private communications.

We must stop this dangerous proposal before it sees the light of day. Please tell your members of Congress to reject the so-called EARN IT Act.   

The Graham-Blumenthal bill would establish a National Commission on Online Child Exploitation Prevention tasked with recommending best practices for providers of interactive computer services regarding the prevention of online child exploitation conduct. But the Attorney General would have the power to override the Commission's recommendations unilaterally. Internet platforms or services that failed to meet the AG's demands could be on the hook for millions of dollars in liability.

It's easy to predict how Attorney General William Barr would use that power: to break encryption. He's said over and over that he thinks the best practice is to weaken secure messaging systems to give law enforcement access to our private conversations. The Graham-Blumenthal bill would finally give Barr the power to demand that tech companies obey him or face overwhelming liability from lawsuits based on their users' activities. Such a demand would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their own users' security, making all of us more vulnerable to criminals. The law should not pit core values--Internet users' security and expression--against one another.

The Graham-Blumenthal bill is anti-speech, anti-security, and anti-innovation. Congress must reject it.

 

 

Updated: Should government track covid-19 contacts using mobile phone location data?...

Seems sensible but the EFF is not convinced


Link Here 25th March 2020

Governments Haven't Shown Location Surveillance Would Help Contain COVID-19 

Governments around the world are demanding new dragnet location surveillance powers to contain the COVID-19 outbreak. But before the public allows their governments to implement such systems, governments must explain to the public how these systems would be effective in stopping the spread of COVID-19. There's no questioning the need for far-reaching public health measures to meet this urgent challenge, but those measures must be scientifically rigorous, and based on the expertise of public health professionals.

Governments have not yet met that standard, nor even shown that extraordinary location surveillance powers would make a significant contribution to containing COVID-19. Unless they can, there's no justification for their intrusions on privacy and free speech, or the disparate impact these intrusions would have on vulnerable groups. Indeed, governments have not even been transparent about their plans and rationales.

The Costs of Location Surveillance

EFF has long opposed location surveillance programs that can turn our lives into open books for scrutiny by police, surveillance-based advertisers, identity thieves, and stalkers. Many sensitive inferences can be drawn from a visit to a health center, a criminal defense lawyer, an immigration clinic, or a protest planning meeting.

Moreover, fear of surveillance chills and deters free speech and association. And all too often, surveillance disparately burdens people of color. What's more, whatever personal data is collected by government can be misused by its employees, stolen by criminals and foreign governments, and unpredictably redirected by agency leaders to harmful new uses.

Emerging Dragnet Location Surveillance

China reportedly responded to the COVID-19 crisis by building new infrastructures to track the movements of massive numbers of identifiable people. Israel tapped into a vast trove of cellphone location data to identify people who came into close contact with known virus carriers. That nation has sent quarantine orders based on this surveillance. About a dozen countries are reportedly testing a spy tool built by NSO Group that uses huge volumes of cellphone location data to match the location of infected people to other people in their vicinity (NSO's plan is to not share a match with the government absent such a person's consent).

In the United States, the federal government is reportedly seeking, from mobile app companies like Facebook and Google, large volumes of location data that is de-identified (that is, after removal of information that identifies particular people) and aggregated (that is, after combining data about multiple people). According to industry executives, such data might be used to predict the next virus hotspot. Facebook has previously made data like this available to track population movements during natural disasters.

But re-identification of de-identified data is a constant infosec threat. De-identification of location data is especially hard, since location data points serve as identification of their own. Also, re-identification can be achieved by correlating de-identified data with other publicly available data like voter rolls, and with the oceans of information about identifiable people that are sold by data brokers. While de-identification might in some cases reduce privacy risks, this depends on many factors that have not yet been publicly addressed, such as careful selection of what data to aggregate, and the minimum thresholds for aggregation. In the words of Prof. Matt Blaze, a specialist in computer science and privacy:

One of the things we have learned over time is that something that seems anonymous, more often than not, is not anonymous, even if it's designed with the best intentions.
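The re-identification risk described above can be made concrete with a toy sketch. This is purely synthetic, illustrative data (the names, places, and the `reidentify` helper are all hypothetical): a "de-identified" trace of just three coarse (place, hour) points is already unique among the known individuals, so joining it against any identified dataset names its owner.

```python
# Illustrative sketch (synthetic data): why "de-identified" location traces
# resist anonymisation. A handful of (place, hour) points is often unique
# to one person, so linking with any identified dataset re-identifies them.

# "De-identified" trace released for research: no name, just visited points.
deidentified_trace = {("home_suburb_A", 8), ("office_block_B", 9), ("gym_C", 18)}

# Separately available identified data (e.g. check-ins, voter rolls plus
# a known work address). All names and places here are made up.
identified_records = {
    "alice": {("home_suburb_A", 8), ("office_block_B", 9), ("gym_C", 18)},
    "bob":   {("home_suburb_A", 8), ("cafe_D", 9), ("gym_C", 18)},
    "carol": {("home_suburb_E", 8), ("office_block_B", 9), ("cinema_F", 20)},
}

def reidentify(trace, records):
    """Return every name whose known points cover the de-identified trace."""
    return [name for name, points in records.items() if trace <= points]

matches = reidentify(deidentified_trace, identified_records)
print(matches)  # exactly one candidate remains -> the trace is re-identified
```

With real data the linkage is statistical rather than an exact subset match, but the principle is the same: the more points in the trace, the fewer people it can plausibly belong to.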

Disturbingly, most of the public information about government's emerging location surveillance programs comes from anonymous sources, and not official explanations. Transparency is a cornerstone of democratic governance, especially now, in the midst of a public health crisis. If the government is considering such new surveillance programs, it must publicly explain exactly what it is planning, why this would help, and what rules would apply. History shows that when government builds new surveillance programs in secret, these programs quickly lead to unjustified privacy abuses. That's one reason EFF has long demanded transparent democratic control over whether government agencies may deploy new surveillance technology.

Governments Must Show Their Work

Because new government dragnet location surveillance powers are such a menace to our digital rights, governments should not be granted these powers unless they can show the public how these powers would actually help, in a significant manner, to contain COVID-19. Even if governments could show such efficacy, we would still need to balance the benefit of the government's use of these powers against the substantial cost to our privacy, speech, and equality of opportunity. And even if this balancing justified government's use of these powers, we would still need safeguards, limits, auditing, and accountability measures. In short, new surveillance powers must always be necessary and proportionate.

But today, we can't balance those interests or enumerate necessary safeguards, because governments have not shown how the proposed new dragnet location surveillance powers could help contain COVID-19. The following are some of the points we have not seen the government publicly address.

1. Are the location records sought sufficiently granular to show whether two people were within transmittal distance of each other? In many cases, we question whether such data will actually be useful to healthcare professionals.

This may seem paradoxical. After all, location data is sufficiently precise for law enforcement to place suspects at the scene of a crime, and for juries to convict largely on the basis of that evidence. But when it comes to tracking the spread of a disease that requires close personal contact, data generated by current technology generally can't reliably tell us whether two people were closer than the CDC-recommended radius of six feet for social distancing.

For example, cell site location information (CSLI)--the records generated by mobile carriers based on which cell towers a phone connects to and when--is often only able to place a phone within a zone of half a mile to two miles in urban areas. The area is even wider in areas with less dense tower placement. GPS sensors built directly into phones can do much better, but even GPS is only accurate to a 16-foot radius. These and other technologies like Bluetooth can be combined for better accuracy, but there's no guarantee that a given phone can be located with six-foot precision at a given time.
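The six-foot problem can be made concrete with a worst-case error bound, using the figures cited above (a 16-foot GPS error radius and a 6-foot contact threshold; the helper function is just an illustration). If each of two fixes can be off by 16 feet, the measured distance can differ from the true separation by up to 32 feet, so even two phones reporting the exact same location cannot confirm close contact:

```python
# Back-of-envelope check: with each GPS fix accurate only to a 16-foot
# radius, can a measured distance of zero guarantee two people were
# within the 6-foot contact threshold? (Figures taken from the text above.)

GPS_ERROR_RADIUS_FT = 16   # per-device error radius cited above
CONTACT_THRESHOLD_FT = 6   # CDC social-distancing radius cited above

def true_distance_bounds(measured_ft):
    """Worst-case bounds on the true separation given two noisy fixes."""
    worst_case_error = 2 * GPS_ERROR_RADIUS_FT  # both fixes off, opposite directions
    lower = max(0.0, measured_ft - worst_case_error)
    upper = measured_ft + worst_case_error
    return lower, upper

# Even if the phones report the *same* position, the true separation
# could be anywhere up to 32 feet -- far beyond the 6-foot threshold.
low, high = true_distance_bounds(0)
print(low, high)                     # 0.0 32
print(high <= CONTACT_THRESHOLD_FT)  # False: contact cannot be confirmed
```

In practice GPS error is probabilistic rather than a hard radius, which makes the inference weaker still, not stronger.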

2. Do the cellphone location records identify a sufficiently large and representative portion of the overall population? Even today, not everyone has a cellphone, and some people do not regularly carry their phones or connect them to a cellular network. The population that carries a networked phone at all times is not representative of the overall population; for example, people without phones skew towards lower-income people and older people.

3. Has the virus already spread so broadly that contact tracing is no longer a significant way to reduce transmission? If community transmission is commonplace, contact tracing may become impractical or divert resources from more effective containment methods.

There might be scenarios other than precise, person-to-person contact tracing where location data could be useful. We've heard it suggested, for example, that this data could be used to track future flare-ups of the virus by observing general patterns of people's movements in a given area. But even when transmission is less common, widespread testing may be more effective at containment, as may be happening in South Korea.

4. Will health-based surveillance deter people from seeking health care? Already, there are reports that people subject to COVID-based location tracking are altering their movements to avoid embarrassing revelations. If a positive test result will lead to enhanced location surveillance, some people may avoid testing.

Conclusion

As our society struggles with COVID-19, far narrower big data surveillance proposals may emerge. Perhaps public health professionals will show that such proposals are necessary and proportionate. If so, EFF would seek safeguards, including mandatory expiration when the health crisis ends, independent supervision, strict anti-discrimination rules, auditing for efficacy and misuse, and due process for affected people.

But for now, government has not shown that new dragnet location surveillance powers would significantly help to contain COVID-19. It is the government's job to show the public why this would work.

Update: In fight against coronavirus, European governments embrace surveillance

25th March 2020. See article from politico.eu

 

 

 

Rating from home...

The BBFC closes its office


Link Here 23rd March 2020
The BBFC tweeted

Following Government advice about COVID-19, our premises will remain closed until further notice. We have activated our business continuity plans & we are currently running with a reduced capacity. Our classification work remains our highest priority & we will keep you updated.

 

 

Won't somebody think of the mothers and daughters?...

Egypt bill introduced to increase penalties for strong language in art works


Link Here 22nd March 2020
Full story: Music Censorship in Egypt...authorities persecute singers for slightly sexy music videos
Earlier in March, the Egyptian parliament started discussing a draft amendment to the Penal Code that aims to provide harsher penalties, including imprisonment, for using lewd or offensive words, especially in artworks. The suggested amendment could send an offender to prison for three years for offending public sensibilities through lewd language, replacing the fine of 500 Egyptian pounds ($32) currently set by law.

The draft law needs to go through parliamentary subcommittees, but no date has yet been set.

The bill comes in the wake of a major controversy over mahraganat, a hybrid music genre that combines folk with electronic music and uses colloquialism in its lyrics. This genre of music, whose name literally means festivals in Arabic, originated in the Cairo slums in the early 2000s. Its beat resembles that of American rap and, like rap, its lyrics contain sexual innuendos, racy words and obscenities.

These songs have entered every household in Egypt through the internet and smartphones, parliamentarian Amer told Al-Monitor. A mother, a sister, a wife or daughter should never be exposed to such words because they are offensive and often sexist.

The lyrics of one of these songs -- Bent el-Geran (The Neighbor's Daughter) by Hassan Shakosh and Omar Kamal -- ignited the debate on mahraganat on Feb. 14. The song's lyrics suggest alcohol and hashish -- both of which are forbidden in Islam -- as ways to get over a heartbreak.

The suggestion of alcohol and hashish angered many critics, the powerful Egyptian Musicians Syndicate and parliamentarians, including Amer. They argued that the song was an attack on public taste and an encouragement of immorality.

 

 

No longer glossy...

Playboy produces its last printed edition and goes online only


Link Here 21st March 2020
Playboy CEO Ben Kohn has announced that the magazine is ending print publication, and will continue in digital format only. He explained in an open letter:

We are also immensely proud of our revamped quarterly magazine that is inarguably one of the most beautifully designed print offerings on the market today. But it's no surprise that media consumption habits have been changing for some time, and while the stories we produce and the artwork we showcase are enjoyed by millions of people on digital platforms, our content in its printed form reaches the hands of only a fraction of our fans.

Last week, as the disruption of the coronavirus pandemic to content production and the supply chain became clearer and clearer, we were forced to accelerate a conversation we've been having internally: the question of how to transform our U.S. print product to better suit what consumers want today, and how to utilize our industry-leading content production capabilities to engage in a cultural conversation each and every day, rather than just every three months. With all of this in mind, we have decided that our Spring 2020 Issue, which arrives on U.S. newsstands and as a digital download this week, will be our final printed publication for the year in the U.S. We will move to a digital-first publishing schedule for all of our content including the Playboy Interview, 20Q, the Playboy Advisor and of course our Playmate pictorials. In 2021, alongside our digital content offerings and new consumer product launches, we will bring back fresh and innovative printed offerings in a variety of new forms -- through special editions, partnerships with the most provocative creators, timely collections and much more. Print is how we began and print will always be a part of who we are.

Over the past 66 years, we've become far more than a magazine. And sometimes you have to let go of the past to make room for the future. So we're turning our attention to achieving our mission in the most effective and impactful way we can: to help create a culture where all people can pursue pleasure.

 

 

Lost Girls and lost minds at the BBFC...

Passed 12 uncut for sexual threat, language, self-harm, sexual violence references and over 20 instances of the word 'fuck'


Link Here17th March 2020
Lost Girls is a 2020 USA mystery thriller by Liz Garbus.
Starring Amy Ryan, Thomasin McKenzie and Gabriel Byrne. BBFC link IMDb

When Mari Gilbert's (Academy Award® nominee Amy Ryan) daughter disappears, police inaction drives her own investigation into the gated Long Island community where Shannan was last seen. Her search brings attention to over a dozen murdered sex workers Mari will not let the world forget. From Academy Award® nominated filmmaker Liz Garbus, LOST GIRLS is inspired by true events detailed in Robert Kolker's "Lost Girls: An Unsolved American Mystery."

Lost Girls is a major offering from Netflix that demonstrated a major failing at the BBFC with its automated random rating generator used for Netflix ratings.

A ludicrous 12 rating was posted on the BBFC site, and people started to question it. As described by Neil:

It was originally rated 12 and a few of us flagged that the system had failed because the content was above and beyond the 12 bracket (dead prostitutes, domestic abuse, over 20 instances of the word fuck (some directed and aggressively used) along with a continual menacing tone.

Funny because they had just done a press release about their new approach to classifying domestic abuse on screen at the beginning of last week!

Anyway - first thing Monday morning, some poor BBFC examiner went and re-rated it. The original 12 rating was deleted and replaced with 15 for strong language, sex references.

Here's the thread from twitter where the BBFC confesses to how their classifying system works without a BBFC examiner.

The BBFC started the conversation rolling with an ill-judged self promotional tweet implicitly boasting about the importance of its ratings:

BBFC @BBFC · As the weekend approaches, @NetflixUK have released lots of binge-worthy content. What will you be tuning in to watch? Whatever you choose, check the age rating on our website: http:// bbfc.co.uk

  • Straight Outta Compton 36.1%

  • Love Is Blind 8.2%

  • Locke & Key 9.8%

  • A Quiet Place 45.9%

Well Scott took them at their word and checked out their ratings for Lost Girls. He wasn't impressed:

You need to go back to actually classifying Netflix material formally, rather than getting an algorithm to do it. This is rated R Stateside for language throughout, which in your terms means frequent strong language, so definitely not a 12!:

The BBFC responded, perhaps before realising the extent of the failing:

Hi Scott, thanks for flagging, we are looking into this. Just to explain, a person at Netflix watches the content from start to end, and tags the content as they view. Everyone who is tagging content receives appropriate training so they know what to look out for.

Scott noted that the BBFC explanation rather makes for a self-proving mistruth, as there was obviously at least one step in the process that didn't have a human in the driving seat. He tweeted:

Yeah, the BBFC and the OFLC in Aus now use an automated programme for Netflix content - nobody actually sits and watches it. I get that there's lots of material to go through, but this obviously isn't the best idea. Age ratings you trust is the BBFC's tagline - the irony.

Neil adds:

This film needs reviewing with your new guidance about domestic abuse & triggers in mind. Over 20 uses of f***, some very aggressive and directed. Descriptions of violent domestic abuse (titanium plates, etc) and dead sex workers, sustained threatening tone. Certainly not a 12.

At this point it looks as if the BBFC hasn't quite grasped that their system has clearly spewed bollox, and tried to justify the system as infallible even when it is clearly badly wrong:

These tags are then processed by an algorithm that sets out the same high standards as our classification guidelines. Then, this automatically produces a BBFC age rating for the UK, which is consistent with other BBFC rated content.

Scott adds:

Ah, I stand corrected - didn't realise there was a middle man who watches the content. Nevertheless, there's still nobody at the BBFC watching it, which I think is an oversight - this film in particular is a perfect example.

The next thing spotted was that the erroneous 12 rating had been deleted and replaced by a human-crafted 15 rating.

And one has to revisit the BBFC statement: processed by an algorithm that sets out the same high standards as our classification guidelines. Perhaps we should read the BBFC statement at face value and conclude that the BBFC's high standards are the same standard as the bollox 12 rating awarded to Lost Girls.

 

 

A censorship struggle...

Amazon UK bans Hitler's book, Mein Kampf


Link Here17th March 2020
Full story: Mein Kampf...Censorship issues with Hitler's book
  Amazon UK has banned the sale of most editions of Hitler's Mein Kampf and other Nazi propaganda books from its store following campaigning by Jewish groups.

Booksellers were informed in recent days that they would no longer be allowed to sell a number of Nazi-authored books on the website.

In one email seen by the Guardian individuals selling secondhand copies of Mein Kampf on the service have been told by Amazon that they can no longer offer this book as it breaks the website's code of conduct. The ban impacts the main editions of Mein Kampf produced by mainstream publishers such as London-based Random House and India's Jaico, for whom it has become an unlikely bestseller.

Other Nazi publications have also been banned, including the children's book The Poisonous Mushroom, written by Nazi publisher Julius Streicher, who was later executed for crimes against humanity.

Amazon would not comment on what had prompted it to change its mind on the issue but a recent intervention to remove the books by the London-based Holocaust Educational Trust received the backing of leading British politicians.

 

 

Fined for a failed forgottening...

The Swedish government internet censor fines Google for not taking down links and for warning the targeted website about the censorship


Link Here17th March 2020
Full story: The Right to be Forgotten...Bureaucratic censorship in the EU

The Swedish data protection censor, Datainspektionen has fined Google 75 million Swedish kronor (7 million euro) for failure to comply with the censorship instructions.

According to the internet censor, which is affiliated with Sweden's Ministry of Justice, Google violated the terms of the right-to-be-forgotten rule, an EU-mandated regulation introduced in 2014 allowing individuals to request the removal of potentially harmful private information from popping up in internet searches and directories.

Datainspektionen says an internal audit has shown that Google has failed to properly remove two search results which were ordered to be delisted back in 2017, making either too narrow an interpretation of what content needed to be removed, or failing to remove a link to content without undue delay.

The watchdog has also slapped Google with a cease-and-desist order for its practice of notifying website owners of a delisting request, claiming that this practice defeats the purpose of link removal in the first place.

Google has promised to appeal the fine, with a spokesperson for the company saying that it disagrees with this decision on principle.

 

 

Offsite Article: Brussels considers pan-EU police searches of ID photos...


Link Here 16th March 2020
Law enforcement would seek matches with surveillance photos using facial recognition technology.

See article from politico.eu

 

 

Offsite Article: 20 Diabolically Clever Ways Creators Fooled The Censors...


Link Here 15th March 2020
Entertaining selection of censorship anecdotes

See article from cracked.com

 

 

Offsite Article: Data mining: Minecraft inspires crafty way around government censorship...


Link Here 15th March 2020
Within the hugely popular open-world game, press-freedom nonprofit Reporters Without Borders creates a library to house censored journalism.

See article from cnet.com

 

 

Offsite Article: Ten woke ways to shut down debate...


Link Here15th March 2020
Some people would prefer you didn't talk back -- here's how they'll try to cancel you. By Peter Franklin

See article from unherd.com

 

 

Worthy but blinkered...

Independent report on child abuse material recommends strict age/identity verification for social media


Link Here14th March 2020
  The Independent Inquiry into Child Sexual Abuse, chaired by Professor Alexis Jay, was set up because of serious concerns that some organisations had failed and were continuing to fail to protect children from sexual abuse. It describes its remit as:

Our remit is huge, but as a statutory inquiry we have unique authority to address issues that have persisted despite previous inquiries and attempts at reform.

The inquiry has just published its report with the grandiose title: The Internet.

It has considered many aspects of child abuse and come up with the following short list of recommendations:

  1. Pre-screening of images before uploading

    The government should require industry to pre-screen material before it is uploaded to the internet to prevent access to known indecent images of children.
     
  2. Removal of images

    The government should press the WeProtect Global Alliance to take more action internationally to ensure that those countries hosting indecent images of children implement legislation and procedures to prevent access to such imagery.
     
  3. Age verification

    The government should introduce legislation requiring providers of online services and social media platforms to implement more stringent age verification techniques on all relevant devices.
     
  4. Draft child sexual abuse and exploitation code of practice

    The government should publish, without further delay, the interim code of practice in respect of child sexual abuse and exploitation as proposed by the Online Harms White Paper (published April 2019).
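Recommendation 1 essentially describes hash-matching uploads against a database of known images. A minimal sketch of the idea in Python (the names and blocklist here are purely illustrative; real systems such as PhotoDNA use robust perceptual hashes that survive resizing and re-encoding, not plain SHA-256):

```python
import hashlib

# Hypothetical blocklist of hashes of known prohibited images (illustrative only;
# the entry below is simply the SHA-256 of the string "test").
KNOWN_IMAGE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def prescreen(upload: bytes) -> bool:
    """Return True if the upload may proceed, False if it matches a known image."""
    digest = hashlib.sha256(upload).hexdigest()
    return digest not in KNOWN_IMAGE_HASHES

assert prescreen(b"holiday photo") is True
assert prescreen(b"test") is False  # matches the blocklisted hash above
```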

But it should be noted that the inquiry gave not even a passing mention to some of the privacy issues that would have far reaching consequences should age verification be required for children's social media access.

Perhaps the authorities should recall that age verification for porn failed because the lawmakers were only thinking of the children, and didn't give even a moment of passing consideration to the privacy of the porn users. The lawmakers' blinkeredness resulted in the failure of their beloved law.

Has anyone even considered what will happen if they ban kids from social media? An epidemic of tantrums? The collapse of social media companies? Kids going back to hanging around on street corners? Kids finding more underground websites to frequent? Kids playing violent computer games all day instead?

 

 

The Invisible Man...

The film was cut for a 15 rating at cinemas but has been passed 18 uncut for VoD


Link Here13th March 2020
The Invisible Man is a 2020 Australia / USA Sci-Fi horror thriller by Leigh Whannell.
Starring Elisabeth Moss, Aldis Hodge and Oliver Jackson-Cohen. BBFC link IMDb

The film follows Cecilia, who receives the news of her abusive ex-boyfriend's suicide. She begins to re-build her life for the better. However, her sense of reality is put into question when she begins to suspect her deceased lover is not actually dead.

BBFC advised category cuts were required for a 15 rated cinema release in 2020. The Irish cinema release looks to be uncut but 16 rated.

Now the film has been passed 18 uncut by the BBFC for VoD with the consumer advice: strong injury detail, bloody violence, domestic abuse.

The DVD and Blu-ray ratings have not yet been published but a region 0 Blu-ray release suggests that this could also be 18 uncut.

The cinema release was previously passed 15 for strong bloody violence, threat, language, domestic abuse after BBFC advised pre-cuts:

The BBFC commented:

  • This film was originally seen for advice. The company was advised it was likely to be classified 18 uncut but that their preferred 15 classification could be obtained by making small changes to one scene to remove bloody injury detail during an attempted suicide. When the film was submitted for formal classification, the shots in question had been removed and the film was classified 15.

 

 

Updated: Rated 18 for moderate violence and a strong bloody shower scene...

Death Ship age rating increased from 15 to 18


Link Here 13th March 2020
Death Ship is a 1980 UK / Canada / USA horror mystery adventure by Alvin Rakoff.
Starring George Kennedy, Richard Crenna and Nick Mancuso. BBFC link IMDb

Survivors of a tragic shipping collision are rescued by a mysterious black ship which appears out of the fog. Little do they realize that the ship is actually a Nazi torture ship which has sailed the seas for years, luring unsuspecting sailors aboard and killing them off one by one.

The 1980 cinema release was X rated followed by 18 rated VHS in 1987. But the film was reduced to 15 for 2007 DVD with the consumer advice:

Contains infrequent strong nudity, moderate bloody violence and horror

The film has just been resubmitted for video release late in the year but the age rating has been raised back up to 18 for:

strong nudity, bloody images

There are variant versions of the film but I don't think the differences are relevant to the age rating. The age defining scene seems to be where a naked and busty woman is showering, only for the water to turn to blood (not her blood). The woman gets stuck in the shower by a jammed door and she is eventually killed off screen by the ghostly ship's captain.

The 15 rating surely fits the bill, and the 2007 consumer advice seems accurate. So why has it been bumped up to 18, and why has the BBFC changed the consumer advice so as to no longer mention the 'moderate' violence? It seems that the consumer advice has been phrased to justify the exaggerated age rating rather than to provide informative advice to viewers.

Update: BBFC Response

The BBFC explained the rating increase in a tweet

We reclassified the latest version of Death Ship 18 due to extended material that included much stronger nudity and bloody images:

An interesting comment as I have never heard before that the theatrical version was cut.

Update: An uncut version

Thanks to Andy, Tim, Mark, Rob and Bendy.

The film was originally cut in the US to obtain an MPAA R rating. In the UK this Theatrical Version was passed X for 1980 cinema release, 18 rated for VHS, and 15 rated for 2007 DVD. This 2007 DVD package was released with an 18 rating due to DVD extras, presumably meaning the DVD extra titled Uncensored bloody shower scene.

In 2018 Scorpion Releasing in the US issued a Blu-ray with a restored and extended version it refers to as the Original Longer Cut. It seems that this version included the Uncensored bloody shower scene and is now set for UK release on Nucleus Blu-ray with an increased BBFC 18 rating.

 

 

Offsite Article: Tim Berners-Lee calls for urgent action to make cyberspace safer for women and girls...


Link Here12th March 2020
Why can't we have policies that protect everyone equally. Identitarian one sided rules have achieved little beyond unfairness, injustice and society wide aggrievement

See article from theguardian.com

 

 

Offensive to viruses...

Advert censor bans bed advert referring to flu bugs as nasty imports


Link Here11th March 2020

A regional press ad for Vic Smith Beds, seen in the Enfield and Haringey Independent newspaper on 12 February 2020. The ad included a cartoon image of an upright mattress, which had a Union Jack on the front, and which was wearing a green surgical mask. Text stated, BRITISH BUILD [sic] BEDS PROUDLY MADE IN THE UK. NO NASTY IMPORTS.

Two complainants challenged whether the ad was likely to cause serious and widespread offence by linking concern about the ongoing coronavirus health emergency to nationality and/or race.

ASA decision: Complaints upheld

The ad was seen in the context of widespread news coverage of a developing major outbreak of novel coronavirus 2019-nCov, or COVID-19 (coronavirus), in China, with a small but growing number of cases having been confirmed in the UK. News outlets had also reported some groups being physically and verbally targeted because of their nationality and/or race in relation to fears about coronavirus. The ASA understood that, in particular, a number of Asian people had reported receiving abuse as a result of wearing face masks.

The CAP Code required marketers to ensure that ads did not contain anything that was likely to cause serious or widespread offence, with particular care to be taken to avoid causing offence on various grounds of protected characteristics, including race. We noted the reference to BRITISH BUILD [sic] beds, and the image of the Union Jack, and we understood that the advertiser's intention was to draw attention to the fact that their beds were made in the UK. However, we also considered that the phrase NO NASTY IMPORTS, in combination with the image of the surgical mask, was likely to be taken as a reference to the coronavirus outbreak. We considered that in combination with the image, the reference to nasty imports was likely to be read as a negative reference to immigration or race, and in particular as associating immigrants with disease.

We therefore concluded that the ad was likely to cause serious and widespread offence. The ad breached CAP Code (Edition 12) rule 4.1: Marketing communications must not contain anything that is likely to cause serious or widespread offence. Particular care must be taken to avoid causing offence on the grounds of race, religion, gender, sexual orientation, disability or age. Compliance will be judged on the context, medium, audience, product and prevailing standards. Marketing communications may be distasteful without necessarily breaching this rule. Marketers are urged to consider public sensitivities before using potentially offensive material. The fact that a product is offensive to some people is not grounds for finding a marketing communication in breach of the Code. (Harm and offence).

The ad must not appear again. We told Vic Smith Bedding Ltd t/a Vic Smith Beds to ensure they avoided causing serious and/or widespread offence on the grounds of nationality or race.

 

 

Offsite Article: Now they're censoring free-speech societies...


Link Here11th March 2020
Full story: University Censorship...Universities vs Free Speech
If you thought students' unions had more sense than to go after pro-free speech student groups, think again.

See article from spiked-online.com

 

 

The outrage mob rants about dated Doctor Who episodes on BritBox...

But are we so sure we're on the moral high ground? The 1970's didn't have an unfair pecking order defining which identitarian groups are allowed and encouraged to attack and bully other groups, resulting in widespread societal aggrievement


Link Here 10th March 2020
An outrage mob has accused BritBox of racism after it put up an episode of Doctor Who where Chinese people are played by western actors.

The pay streaming service run by the BBC and ITV was accused of failing to put a trigger warning on the 1977 six-part series titled The Talons of Weng-Chiang where white actors are shown wearing make-up and putting on accents as they play Asian characters.

Britbox has since added a content warning to the episode, and said its compliance team is still working to review all programmes. A warning to the series now says that it contains some offensive language of the time and upsetting scenes.

A spokeswoman for the British East Asians in Theatre & on Screen told The Times the episode is really hard to watch because yellowface is so unacceptable now.

The episode stars Tom Baker playing the Doctor as he battles a Chinese stage magician villain called Li H'sen Chang, played by white British actor John Bennett.

 

 

Onward christian whingers...

Disney cartoon Onward banned in Kuwait, Oman, Qatar, Saudi Arabia and cut in Russia


Link Here9th March 2020
Onward is a 2020 USA children's cartoon comedy by Dan Scanlon.
Starring Tom Holland, Chris Pratt and Julia Louis-Dreyfus. IMDb

Set in a suburban fantasy world, two teenage elf brothers, Ian and Barley Lightfoot, go on a journey to discover if there is still a little magic left out there in order to spend one last day with their father, who died when they were too young to remember him.

Disney's latest Pixar cartoon Onward has been banned by several Middle Eastern countries because of a reference to lesbian parents. The film will not be shown in Kuwait, Oman, Qatar and Saudi Arabia.

Police officer Specter, voiced by Lena Waithe, has been heralded as Disney-Pixar's first openly gay character. Her lines include: It's not easy being a parent... my girlfriend's daughter got me pulling my hair out, OK?

Other Middle East countries, Bahrain, Lebanon and Egypt are showing the film.

And according to Deadline, Russia censored the scene in question by changing the word girlfriend to partner and avoiding mentioning the gender of Specter, who is a supporting character.

Meanwhile in the US the christian website LifeSiteNews has launched a petition calling for a boycott of the movie. Gualberto Garcia Jones, Director of Advocacy for LifeSite, whinged:

It's a relentless onslaught against our children's innocence. And, we parents have got to be just as relentless in rejecting Disney's attempt to sexualize our children.

The petition has been signed by about 55,000 people and states:

By forcing the LGBT agenda on us, you are seriously disrespecting our values. The days are now over where we would give you our hard-earned dollars just so you can turn around and offend us and our children's innocence. Please do not pursue this agenda again in the future.

 

 

Nothing to read here...

Woody Allen's autobiography is censored by his publisher Hachette


Link Here9th March 2020
 Woody Allen's memoir, Apropos of Nothing, was acquired last week by the publisher Hachette in the US.

The move was quickly condemned by the author's daughter Dylan Farrow, who has alleged that Allen sexually abused her as a child, allegations that Allen has denied. These allegations have twice been investigated by the authorities but have not led to arrest, charge or prosecution.

Allen's son Ronan Farrow, whose book Catch and Kill -- also published by Hachette -- details his investigations into institutional sexual abuse in the media and Hollywood, also blasted the decision and announced he would no longer work with Hachette.

The Hachette censorship was initiated by Hachette staff in the US who staged a walkout at its New York offices over the memoir. The publisher then pulled the book, claiming that the decision was a difficult one.

Woody Allen's memoir will still be published in France despite its US publisher dropping it, with his French publisher saying that the film director is not Roman Polanski and that the American situation is not ours.

 

Offsite Comment: This is the behaviour of censors, not publishers

9th March 2020. See article from theguardian.com   by Jo Glanville of English Pen

I do not want to read books that are good for me or that are written by people whose views I always agree with or admire. I am always afraid when a mob, however small and well read, exercises power without any accountability, process or redress. That frightens me much more than the prospect of Woody Allen's autobiography hitting the bookstores.

...Read the full article from theguardian.com

 

 

Offsite Article: How to stop your smart home spying on you...


Link Here9th March 2020
A few practical tips but I would prefer something a little more absolute

See article from theguardian.com

 

 

UK and US play silly games with backdoors for encrypted messaging...

The Chinese will be probing your backdoors as soon as they are introduced


Link Here 8th March 2020
   
 Haha he thought he was protected by a level 5 lock spell,
but every bobby on the street has a level 6 unlock spell,
and the bad guys have level 10.

The Government is playing silly games trying to suggest ways that snooping backdoors on people's encrypted messaging could be unlocked by the authorities whilst being magically safe from bad guys, especially those armed with banks of Chinese supercomputers.

The government wants to make backdoors mandatory for messaging and offers a worthless 'promise' that authority figures would need to agree before the police are allowed to use their key to unlock messages.

Andersen Cheng, chief executive of Post-Quantum, a specialist encryption firm working with Nato and Government agencies, said a virtual key split into five parts -- or more -- could unlock messages when all five parties agreed and the five key fragments were joined together.

Those five parties could include a tech firm like Facebook, the police, the security service or GCHQ, an independent privacy advocate or specialist similar to the independent reviewer of terror legislation, and the judge authorising the warrant.
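For the curious, the five-part key idea is essentially n-of-n secret splitting, where every fragment is required to recover the key. A minimal sketch of that concept in Python, using simple XOR splitting (this is an illustration of the general technique, not Post-Quantum's actual scheme):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, parties: int = 5) -> list[bytes]:
    """Split key into `parties` fragments; ALL of them are needed to recover it."""
    # parties-1 fragments are pure randomness...
    shares = [secrets.token_bytes(len(key)) for _ in range(parties - 1)]
    # ...and the final fragment is the key XORed with all of them
    shares.append(reduce(xor_bytes, shares, key))
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """XOR every fragment together to recover the original key."""
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)
fragments = split_key(key, 5)
assert recombine(fragments) == key        # all five together recover the key
assert recombine(fragments[:4]) != key    # any four fragments reveal nothing useful
```

No single fragment on its own carries any information about the key, which is the scheme's selling point. It does nothing, of course, to address the risk of all five key holders being compromised or coerced.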

Cheng's first company TRL helped set up the secure communications system used by 10 Downing Street to talk with GCHQ, embassies and world leaders, but I bet that system did not include backdoor keys.

The government claims that official access would only be granted where, for example, the police or security service were seeking to investigate communications between suspect parties at a specific time, and where a court ruled it was in the public or nation's interest.

However the government does not address the obvious issue of bad guys getting hold of the keys and letting anyone unlock the messages for a suitable fee. And sometimes those bad guys are armed with the best brute force code cracking powers in the world.

 

 

Protecting the age of innocence...

Whilst endangering everyone else. Australian parliamentary committee recommends age verification for porn


Link Here8th March 2020
Full story: Age Verification for Porn...Endangering porn users for the sake of the children

Protecting the age of innocence

Report of the inquiry into age verification for online wagering and online pornography

House of Representatives Standing Committee on Social Policy and Legal Affairs

Executive Summary

The Committee’s inquiry considered the potential role for online age verification in protecting children and young people in Australia from exposure to online wagering and online pornography.

Evidence to the inquiry revealed widespread and genuine concern among the community about the serious impacts on the welfare of children and young people associated with exposure to certain online content, particularly pornography.

The Committee heard that young people are increasingly accessing or being exposed to pornography on the internet, and that this is associated with a range of harms to young people’s health, education, relationships, and wellbeing. Similarly, the Committee heard about the potential for exposure to online wagering at a young age to lead to problem gambling later in life.

Online age verification is not a new concept. However, the Committee heard that as governments have sought to strengthen age restrictions on online content, the technology for online age verification has become more sophisticated, and there are now a range of age-verification services available which seek to balance effectiveness and ease-of-use with privacy, safety, and security.

In considering these issues, the Committee was concerned to see that, in so much as possible, age restrictions that apply in the physical world are also applied in the online world.

The Committee recognised that age verification is not a silver bullet, and that protecting children and young people from online harms requires government, industry, and the community to work together across a range of fronts. However, the Committee also concluded that age verification can create a significant barrier to prevent young people—and particularly young children—from exposure to harmful online content.

The Committee’s recommendations therefore seek to support the implementation of online age verification in Australia.

The Committee recommended that the Digital Transformation Agency lead the development of standards for online age verification. These standards will help to ensure that online age verification is accurate and effective, and that the process for legitimate consumers is easy, safe, and secure.

The Committee also recommended that the Digital Transformation Agency develop an age-verification exchange to support a competitive ecosystem for third-party age verification in Australia.

In relation to pornography, the Committee recommended that the eSafety Commissioner lead the development of a roadmap for the implementation of a regime of mandatory age verification for online pornographic material, and that this be part of a broader, holistic approach to address the risks and harms associated with online pornography.

In relation to wagering, the Committee recommended that the Australian Government implement a regime of mandatory age verification, alongside the existing identity verification requirements. The Committee also recommended the development of educational resources for parents, and consideration of options for restricting access to loot boxes in video games, including through the use of age verification.

The Committee hopes that together these recommendations will contribute to a safer online environment for children and young people.

Lastly, the Committee acknowledges the strong public interest in the inquiry and expresses its appreciation to the individuals and organisations that shared their views with the Committee.

 

 

Oxford University feminists no-platform Amber Rudd...

Once academics spoke from their seat of learning, now they speak from their arsehole of dogma


Link Here8th March 2020
The UNWomen Oxford UK Society had invited former Home Secretary Amber Rudd to speak on the totally uncontroversial topic of women's equality. She was due to be interviewed on her former roles as minister for women and equalities and chair of the All-Party Parliamentary Group for Sex Equality.

But less than an hour before the event, the UNWomen society had a change of heart and decided to no-platform their invited speaker.

It seems that the issue that offended the students was that the Windrush scandal happened under Rudd's watch at the Home Office and negated all her other achievements.

Amber Rudd described the decision to cancel the event as 'badly judged and rude'.

The students have now tried to justify their censorship in an article from theguardian.com

 

 

Getting the censor's goat...

Drink censors of the Portman Group ban Lawless Lager over its rule banning association with illegal behaviour


Link Here 8th March 2020

The Portman Group, a trade group for the drinks industry, has banned Lawless Unfiltered Lager from Purity Brewing. The company markets the beer with the description: 'Lawless is a maverick beer' and 'is a law unto itself'.

The Portman Group panel considered its rule 3.2(b):

A drink, its packaging and any promotional material or activity should not in any direct or indirect way suggest any association with bravado, or with violent, aggressive, dangerous, anti-social or illegal behaviour.

The Panel expressed concern that the name Lawless suggested an association with illegal behaviour.

The Panel noted the text on the back of the can, which said Lawless is our maverick beer named after our farmyard fiend, Bruno the goat. Just like our lager he is law unto himself, full of character with a sharp kick!. The Panel acknowledged that the producers were a small company challenging the established order in their industry and understood that they intended to convey that spirit. The Panel considered, however, that the text on the back of the can was not enough to prevent the name Lawless from being seen as a reference to illegal behaviour.

The Panel noted the producer's argument that the name of the product was Lawless Unfiltered Lager and that Lawless never appeared in isolation. The Panel also noted the wording on the back of the can, however, which stated Lawless is our maverick beer and Lawless is brewed with a big dose of El Dorado hop. The Panel rejected the argument that Lawless was only ever used as part of the phrase Lawless Unfiltered Lager.

The Panel discussed the image of the goat and the explanation around the name. Whilst the line drawing of the goat on the front of the can was not a problem in itself, the connection between the animal and the name as a motivation for the Lawless branding was not sufficient. The Panel concluded that the name would still be problematic, even with more extensive text linking the artwork to the name.

The Panel agreed that to be a maverick or breaking the mould was not the same as breaking the law, and considered that positioning a beer as maverick or highlighting quirky or mould-breaking qualities could be acceptable under the Code, whereas a reference to breaking the law was unacceptable.

After considering all the arguments put forward, the Panel maintained their view that Lawless was fundamentally incompatible with the rule that alcohol products should not suggest any association with illegal behaviour. They emphasised that their concern was about the product name alone and they believed the Code breach to be unintentional. The Panel considered that the name Lawless directly implied breaking the law, which was by definition illegal behaviour. Therefore, it could not be justifiable through content given the nature of the Code. The complaint was therefore upheld under Code rule 3.2(b).

 

 

But it's probably still OK to hate Trump supporters?...

Twitter updates its censorship rules about hateful content


Link Here 8th March 2020
Full story: Twitter Censorship...Twitter offers country by country take downs

Twitter updated its rules about hateful content on 5th March 2020. The changes are in the area of dehumanizing remarks, which are remarks that treat others as less than human, on the basis of age, disability, or disease. The rules now read:

Hateful conduct policy

Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.

Hateful imagery and display names: You may not use hateful images or symbols in your profile image or profile header. You also may not use your username, display name, or profile bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.

Violent threats

We prohibit content that makes violent threats against an identifiable target. Violent threats are declarative statements of intent to inflict injuries that would result in serious and lasting bodily harm, where an individual could die or be significantly injured, e.g., "I will kill you".

Wishing, hoping or calling for serious harm on a person or group of people

We prohibit content that wishes, hopes, promotes, or expresses a desire for death, serious and lasting bodily harm, or serious disease against an entire protected category and/or individuals who may be members of that category. This includes, but is not limited to:

  • Hoping that someone dies as a result of a serious disease, e.g., "I hope you get cancer and die."

  • Wishing for someone to fall victim to a serious accident, e.g., "I wish that you would get run over by a car next time you run your mouth."

  • Saying that a group of individuals deserve serious physical injury, e.g., "If this group of protesters don't shut up, they deserve to be shot."

References to mass murder, violent events, or specific means of violence where protected groups have been the primary targets or victims

We prohibit targeting individuals with content that references forms of violence or violent events where a protected category was the primary target or victims, where the intent is to harass. This includes, but is not limited to sending someone:

  • media that depicts victims of the Holocaust;

  • media that depicts lynchings.

Inciting fear about a protected category

We prohibit targeting individuals with content intended to incite fear or spread fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities, e.g., "all [religious group] are terrorists".

Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone

We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.

We also prohibit the dehumanization of a group of people based on their religion, age, disability, or serious disease.

Hateful imagery

We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, gender identity or ethnicity/national origin. Some examples of hateful imagery include, but are not limited to:

  • symbols historically associated with hate groups, e.g., the Nazi swastika;

  • images depicting others as less than human, or altered to include hateful symbols, e.g., altering images of individuals to include animalistic features; or

  • images altered to include hateful symbols or references to a mass murder that targeted a protected category, e.g., manipulating images of individuals to include yellow Star of David badges, in reference to the Holocaust.

 

 

Offsite Article: Get that location data turned off...


Link Here 8th March 2020
Google tracked his bike ride past a burglarized home. That made him a suspect.

See article from nbcnews.com

 

 

Dolled Up...

The BBC responds to whinges about a Pussycat Dolls dance routine on The One Show


Link Here 7th March 2020

  The One Show
26 February 2020

Summary of complaint

We received complaints about the dance routine of The Pussycat Dolls.

Our response

The Pussycat Dolls are well known for their dance routines and outfits and we announced at the start of the show that they would be appearing. Their performance then came towards the end of the programme, just before 8pm.

As with all performers, we worked with the band to ensure their performance was suitable for the programme. We felt it was appropriate for the time slot and wouldn't fall outside the expectations of most viewers. However, we appreciate that some viewers didn't agree.

The programme also included a film which looked at cosmetic procedures which are being purchased by children, without the need for parental consent or appropriate checks. We believe this film highlighted an important issue. We have noted that some viewers felt that these two items shouldn't have been included in the same programme.

 

 

One Million Moms recommend...

Eternals, for its openly gay superhero


Link Here 7th March 2020
Full story: One Million Moms...Moralist whingers against anything sexy
Eternals is a 2020 USA action Sci-Fi fantasy by Chloé Zhao.
Starring Angelina Jolie, Richard Madden and Salma Hayek. IMDb

Following the events of Avengers: Endgame (2019), an unexpected tragedy forces the Eternals, ancient aliens who have been living on Earth in secret for thousands of years, out of the shadows to reunite against mankind's most ancient enemy, the Deviants.

One Million Moms is a US morality campaign group. It has called for the boycott of an upcoming Marvel Studios movie because it features an openly gay superhero. The Eternals is said to include a kiss between superhero Phastos and his husband.

The movie's release is still eight months away, but the campaigners are getting in early. The group wrote:

Warning! An upcoming Marvel Studios movie will include a homosexual superhero and a same-sex kiss in the film The Eternals, set to hit theaters on November 6. One Million Moms needs your help to make sure as many people as possible are aware of Marvel pushing the LGBTQ agenda on families in the upcoming superhero movie The Eternals, which will be distributed by Walt Disney Studios.

Marvel has decided to be politically correct instead of providing family friendly programming. Marvel should stick to entertaining, not pushing an agenda. As moms, we all want to know when Marvel is attempting to desensitize our family by normalizing the LGBTQ lifestyle.

 

 

Yet another example demonstrating the dangers of identifying yourself as a porn user...

Virgin Media details customer porn access data that it irresponsibly made openly available on the internet


Link Here 6th March 2020
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
A customer database left unsecured online by Virgin Media contained details linking some customers to pornography and explicit websites.

The researchers who first discovered the database told the BBC that it contained more information than Virgin Media suggested. Such details could be used by cyber-criminals to extort victims.

Virgin revealed on Thursday that one of its marketing databases containing details of 900,000 people was open to the internet and had been accessed on at least one occasion by an unknown user.

On Friday, it confirmed that the database contained details of about 1,100 customers who had used an online form to ask for a particular website to be blocked or unblocked. It said it was in the process of contacting customers again about specific data that may have been stolen.

When it first confirmed the data breach on Thursday, Virgin Media warned the public that the database contained phone numbers, home addresses and emails, however the company did not disclose that the database contained more intimate details.

A representative of TurgenSec, the research company, said Virgin Media's security had been far from adequate. The information was in plain text and unencrypted, which meant anyone browsing the internet could clearly view and potentially download all of this data without needing any specialised equipment, tools, or hacking techniques.

A spokeswoman for the ICO said it was investigating, and added:

People have the right to expect that organisations will handle their personal information securely and responsibly. When that doesn't happen, we advise people who may have been affected by data breaches to be vigilant when checking their financial records.

Virgin Media said it would be emailing those affected, in order to warn them about the risks of phishing, nuisance calls and identity theft. The message will include a reminder not to click on unknown links in emails, and not to provide personal details to unverified callers.

 

 

Censor Speak. The BBFC changes terminology for domestic violence...

The trouble with using PC terminology is that the primary message conveyed is that the speaker is virtue signalling PC credentials. Then the intended message is of secondary interest and needs scaling down to counter the inherent PC exaggeration


Link Here 5th March 2020

The British Board of Film Classification (BBFC) is changing the way it highlights domestic abuse in ratings info for films and episodic content, after working with Women's Aid and Respect on new research.

The research - which focused on both female and male survivors of domestic abuse, experts and the general public - showed that the BBFC is getting it right when it comes to classification decisions in both films and episodic content featuring domestic abuse. The regulator already takes domestic abuse portrayals seriously, and the respondents agreed that the BBFC rightly classifies these issues at a higher category.

The research showed that less is more, and going into too much detail in the ratings info is a minefield as people's sensitivities and triggers are complex - this is already taken into account in the classification decision. It was highlighted that the widely understood catch-all term of domestic abuse was much better placed to describe such scenes, as it is considered broad enough to include psychological and economic abuse, gaslighting and non sexual abuse of children.

Therefore, the BBFC will now use domestic abuse instead of domestic violence in the ratings info it issues to accompany its ratings. The BBFC will also stop using the term themes of, which the research showed people felt trivialised the issue.

The research flagged that survivors can be triggered by scenes of domestic abuse, especially if it is unexpected. This can be traumatising, and can lead to people avoiding certain types of content. Responding to these findings, the BBFC will now flag domestic abuse in every case, even if the scenes are not category defining.

David Austin, Chief Executive of the BBFC, said:

This timely and important research is shining a light on people's attitudes towards domestic abuse, and it's important that our classifications reflect what people think. It's very encouraging to see that we're getting our classification decisions right when it comes to domestic abuse, which already can be category defining. But what it has shown is that we should bring our ratings info more in line with what people expect and understand, which is exactly what we're going to be doing. These changes will give people the information they need to choose content well. Most particularly in this case, the ratings info will highlight the issues to those that have been personally affected by domestic abuse, so they are forewarned of content which could trigger distress.

While there were few factors that would reduce the impact of watching a scene of domestic abuse, a series of aggravating factors among survivors were flagged, including: the sound of a key turning in a lock; the silence before an attack; the sound of a slap or a punch; and seeing fear in someone's face or eyes.

Adina Claire, Acting co-Chief Executive of Women's Aid, said:

This research has given an important insight into what survivors, experts and the general public think about depictions of domestic abuse in films and episodic content. We're pleased that the BBFC have responded to the report, and have reflected the attitudes in their classification policies - meaning that anyone affected by domestic abuse will now have the clear and consistent information they need about what triggering content may contain.

The research also found that the term child abuse was widely associated with sexual abuse, rather than domestic abuse, and having a child present in a scene depicting domestic abuse often meant that the scene was more triggering for audiences. Therefore, the BBFC will limit the use of child abuse to scenes where child sexual abuse is depicted only, with non sexual child abuse also described as domestic abuse.

People agreed it's very important to educate audiences about the issue and to encourage awareness and discussion. As such, the research strongly underpins the BBFC's policy of being less restrictive on public information campaigns than on commercial trailers and ads, rating them at the lowest reasonable classification.

 

 

Meaningless Words...

Ludicrous campaigners call for a dictionary to be a list of recommended words not a definition of how words are used and what they mean


Link Here 5th March 2020

Did you know that if you are a woman, the dictionary will refer to you as a bitch or a maid? And that a man is a person with the qualities associated with males, such as bravery, spirit, or toughness or a man of honour and the man of the house?

These are, according to the dictionary, the synonyms for woman alongside a wealth of derogatory and equally sexist examples: 'I told you to be home when I get home, little woman' or 'Don't be daft, woman!'

Synonyms and examples such as these, when offered without context, reinforce negative stereotypes about women and centre men. That's dangerous because language has real world implications, it shapes perceptions and influences the way women are treated.

Dictionaries are essential reference tools, and the Oxford Dictionary of English is an essential learning tool, used in libraries and schools around the world. It is also the source licensed by Apple and Google, making it the most read online dictionary in the world.

Its inclusion of derogatory terms used to describe women should aim at exposing everyday sexism, not perpetuating it.

Bitch is not a synonym for woman. It is dehumanising to call a woman a bitch. It is but one sad, albeit extremely damaging, example of everyday sexism. And that should be explained clearly in the dictionary entry used to describe us.

We are calling on Oxford University Press, which publishes the Oxford Dictionary of English, as well as the online Oxford Dictionaries (www.lexico.com) to change their entry for the word woman. It might not end everyday sexism or the patriarchy but it's a good start.

Maria Beatrice Giovanardi and the campaign team
Mandu Reid, leader of Women's Equality Party
Deborah Cameron, professor of language and communication, Oxford University
Nicki Norman, acting CEO of Women's Aid Federation of England
Fiona Dwyer, CEO at Solace Women's Aid
Estelle du Boulay, Director of Rights of Women
Laura Coryton, tampon tax petition starter, Period Poverty Task Force Member at the Government Equalities Office, alumni of University of Oxford (MSt in Women's Studies)
Gabby Edlin, CEO and Founder of Bloody Good Period
The Representation Project
Zoe Dronfield, trustee at Paladin National Stalking Advocacy Service
Gwen Rhys, founder and CEO of Women in the City
David Adger, professor of linguistics, Queen Mary University of London
Dr Christine Cheng, author and lecturer in war studies at King's College
Dr Christina Scharff, author and reader in gender, media and culture at King's College
Judith Large, senior research fellow

 

 

Misguided Censorship...

ASA bans sexy fashion poster from Missguided


Link Here 4th March 2020
Full story: PC censorship in the UK...ASA introduce politically correct censorship rules for adverts

Two posters for Missguided, a clothing company:

  • a. The first poster, seen on the London Underground on 14 November 2019, featured a model wearing a pink wrap mini-dress, which showed her legs and cleavage.

  • b. The second poster, seen on 24 November on a train station platform, featured the same model leaning against a side table wearing an unbuttoned jacket with nothing underneath, sheer tights and high heels.

Issue The complainants, who believed the images were overly sexualised and objectified women, challenged whether:

  1. ad (a); and

  2. ad (b) were offensive.

  3. One of the complainants also challenged whether ad (a) was appropriate for display where it could be seen by children.

ASA Decision

1. Not upheld

The ASA considered that the pose adopted by the model in ad (a) was no more than mildly sexual. The wrap style of the dress and her pose, with one arm slightly behind her, meant that it fell open just by her breast, which we considered was likely to be in keeping with how the dress would ordinarily be worn, but featured no explicit nudity. We also considered the focus of the ad was on the model in general and on the featured dress, rather than on a specific part of her body. While we acknowledged that some people might find the ad distasteful and the clothing revealing, we considered that the ad was unlikely to be seen as overtly sexual or as objectifying either the model in the ad or women in general and we therefore concluded the ad was unlikely to cause serious or widespread offence.

2. Upheld The model in ad (b) was wearing a blazer with nothing underneath, which exposed the side of her breast, and which was coupled with sheer tights, sheer gloves and underwear. We considered she would be seen as being in a state of undress and that the focus was on her chest area and lower abdomen rather than the clothing being advertised. We also noted that her head was tilted back, with her mouth slightly open, and her leg was bent and raised, which we considered was likely to be seen as a sexually suggestive pose. We considered that the sexually suggestive styling and pose would be seen as presenting women as sexual objects. Because the ad objectified women, we concluded that ad (b) was likely to cause serious offence.

3. Not upheld Ad (a) was seen on the London Underground and we accepted that children were likely to have seen the ad. However, for the reasons stated in point 1 above, we considered the image was not overtly sexual, and therefore concluded that it had not been placed inappropriately.

Ad (b) must not appear again in its current form. We told Missguided Ltd not to use advertising that objectified women and which was likely to cause serious offence.

 

 

Irreconcilable differences...

EU Copyright Filters Are On a Collision Course With EU Data Privacy Rules


Link Here 4th March 2020
Full story: Copyright in the EU...Copyright law for Europe

The European Union's controversial new copyright rules are on a collision course with EU data privacy rules. The GDPR guards data protection, privacy, and other fundamental rights in the handling of personal data. Such rights are likely to be affected by an automated decision-making system that's guaranteed to be used, and abused, under Article 17 to find and filter out unauthorized copyrighted material. Here we take a deep dive examining how the EU got here and why Member States should act now to embrace enforcement policies for the Copyright Directive that steer clear of automated filters that violate the GDPR by censoring and discriminating against users.

Platforms Become the New Copyright Police

Article 17 of the EU's Copyright Directive (formerly Article 13) makes online services liable for user-uploaded content that infringes someone's copyright. To escape liability, online service operators have to show that they made best efforts to obtain rightsholders' authorization and ensure infringing content is not available on their platforms. Further, they must show they acted expeditiously to remove content and prevent its re-upload after being notified by rightsholders.

Prior to passage of the Copyright Directive, user rights advocates alerted lawmakers that operators would have to employ upload filters to keep infringing content off their platforms. They warned that then Article 13 will turn online services into copyright police with special license to scan and filter billions of users' social media posts and videos, audio clips, and photos for potential infringements.

While not everyone agreed about the features of the controversial overhaul of outdated copyright rules, there was little doubt that any automated system for catching and blocking copyright infringement would impact users, who would sometimes find their legitimate posts erroneously removed or blocked. Instead of unreservedly safeguarding user freedoms, the compromise worked out focuses on procedural safeguards to counter over-blocking. Although complaint and redress mechanisms are supposed to offer a quick fix, chances are that censored Europeans will have to join a long queue of fellow victims of algorithmic decision-making and await the chance to plead their case.

Can't See the Wood For the Trees: the GDPR

There's something awfully familiar about the idea of an automated black-box judgment system that weighs user-generated content and has a significant effect on the position of individuals. At recent EU copyright dialogue debates on technical and legal limits of copyright filters, EU data protection rules--which restrict the use of automated decision-making processes involving personal data--were not put on the agenda by the EU officials. Nor were academic experts on the GDPR who have raised this issue in the past (read this analysis by Sophie Stalla-Bourdillon or have a look at this year's CPDP panel on copyright filters).

Under Article 22 of the GDPR, users have a right "not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." Save for exceptions, which will be discussed below, this provision protects users from detrimental decisions made by algorithms, such as being turned down for an online loan by a service that uses software, not humans, to accept or reject applicants. In the language of the regulation, the word "solely" means a decision-making process that is totally automated and excludes any real human influence on the outcome.

The Copyright-Filter Test

Personal Data

The GDPR generally applies if a provider is processing personal data, which is defined as any information relating to an identified or identifiable natural person ("data subject," Article 4(1) GDPR). Virtually every post that Article 17 filters analyze will have come from a user who had to create an account with an online service before making their post. The required account registration data make it inevitable that Copyright Directive filters must respect the GDPR. Even anonymous posts will have metadata, such as IP addresses (C-582/14, Breyer v Germany), which can be used to identify the poster. Anonymization is technically fraught, but even purported anonymization will not satisfy the GDPR if the content is connected with a user profile, such as a social media profile on Facebook or YouTube.

Defenders of copyright filters might counter that these filters do not evaluate metadata. Instead, they'll say that filters merely compare uploaded content with information provided by rightsholders. However, the Copyright Directive's algorithmic decision-making is about much more than content-matching. It is the decision whether a specific user is entitled to post a specific work. Whether the user's upload matches the information provided by rightsholders is just a step along the way. Filters might not always use personal data to determine whether to remove content, but the decision is always about what a specific individual can do. In other words: how can monitoring and removing people's uploads, which express views they seek to share, not involve a decision about that individual?

Moreover, the concept of "personal data" is very broad. The EU Court of Justice (Case C-434/16 Nowak v Data Protection Commissioner) held that "personal data" covers any information "provided that it 'relates' to the data subject," whether through the content (a selfie uploaded on Facebook), through the purpose (a video is processed to evaluate a person's preferences), or through the effect (a person is treated differently due to the monitoring of their uploads). A copyright filter works by removing any content that matches materials from anyone claiming to be a rightsholder. The purpose of filtering is to decide whether a work will or won't be made public. The consequence of using filtering as a preventive measure is that users' works will be blocked in error, while other (luckier) users' works will not be blocked, meaning the filter creates a significant effect or even discriminates against some users.
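The decision flow described above can be sketched in a few lines. This is a hypothetical, heavily simplified illustration, not any real platform's implementation: real filters use perceptual fingerprinting rather than exact hashes, and all names here are invented for the example. The point it demonstrates is that the publish-or-block decision about a specific user's upload is made entirely by software, with no human in the loop.

```python
import hashlib

# Fingerprints supplied by parties claiming to be rightsholders
# (illustrative assumption; real systems use perceptual hashes).
rightsholder_db = {
    hashlib.sha256(b"protected work bytes").hexdigest(): "Example Rights Ltd",
}

def filter_upload(user_id: str, content: bytes) -> dict:
    """Decide, fully automatically, whether this user's upload is published."""
    fingerprint = hashlib.sha256(content).hexdigest()
    claimant = rightsholder_db.get(fingerprint)
    if claimant is not None:
        # This branch is a decision about what a specific individual may
        # publish -- the point at which GDPR Article 22 becomes relevant.
        return {"user": user_id, "action": "blocked", "claimed_by": claimant}
    return {"user": user_id, "action": "published", "claimed_by": None}

print(filter_upload("alice", b"protected work bytes"))
print(filter_upload("bob", b"an original essay"))
```

Note that the matching step compares only content, yet the output is always attached to a user account, which is why the result is an automated decision about an identifiable person rather than mere content-matching.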

Even more importantly, the Guidelines on automated decision-making developed by the WP29, an official European data protection advisory body (now EDPB), provide a user-focused interpretation of the requirements for automated individual decision-making. Article 22 applies to decisions based on any type of data. That means that Article 22 of the GDPR applies to algorithms that evaluate user-generated content that is uploaded to a platform.

Adverse Effects

Do copyright filters result in "legal" or "significant" effects as envisioned in the GDPR? The GDPR doesn't define these terms, but the guidelines endorsed by the European Data Protection Board enumerate some "legal effects," including denial of benefits and the cancellation of a contract.

The guidelines explain that even where a filter's judgment does not have legal impact, it still falls within the scope of Article 22 of the GDPR if the decision-making process has the potential to significantly affect the behaviour of the individual concerned, has a prolonged impact on the user, or leads to discrimination against the user. For example, having your work erroneously blocked could lead to adverse financial circumstances or denial of economic opportunities. The more intrusive a decision is and the more reasonable expectations are frustrated, the higher the likelihood for adverse effects.

Consider a takedown or block of an artistic video by a creator whose audience is waiting to see it (they may have backed the creator's crowdfunding campaign). This could result in harming the creator's freedom to conduct business, leading to financial loss. Now imagine a critical essay about political developments. Blocking this work is censorship that impairs the author's right of free expression. There are many more examples that show that adverse effects will often be unavoidable.

Legitimate Grounds for Automated Individual Decision-Making

There are three grounds under which automated decision-making may be allowed under the GDPR's Article 22(2). Users may be subjected to automated decision-making if one of three exceptions apply:

  • it's necessary for entering into or performance of a contract,

  • authorized by the EU or member state law, or

  • based on the user's explicit consent.

Necessity

Copyright filters cannot justly be considered "necessary" under this rule. "Necessity" is narrowly construed in the data protection framework, and can't merely be something that is required under terms of service. Rather, a "necessity" defence for automated decision-making must be in line with the objectives of data protection law, and can't be used if there are more fair or less intrusive measures available. The mere participation in an online service does not give rise to this "necessity," and thus provides no serious justification for automated decision-making.

Authorization

Perhaps proponents of upload filters will argue that they are authorized by the EU member state laws that implement the Copyright Directive. Whether this is what the directive requires has been ambiguous from the very beginning.

Copyright Directive rapporteur MEP Axel Voss insisted that the Copyright Directive would not require upload filters and dismissed claims to the contrary as mere scare-mongering by digital rights groups. Indeed, after months of negotiation between EU institutions, the final language version of the directive conspicuously avoided any explicit reference to filter technologies. Instead, Article 17 requires "preventive measures" to ensure the non-availability of copyright-protected content and makes clear that its application should not lead to any identification of individual users, nor to the processing of personal data, except where provided under the GDPR.

Even if the Copyright Directive does "authorize" the use of filters, Article 22(2)(b) of the GDPR says that regulatory authorization alone is not sufficient to justify automated decision-making. The authorizing law--the law that each EU Member State will make to implement the Copyright Directive--must include "suitable" measures to safeguard users' rights, freedoms, and legitimate interests. It is unclear whether Article 17 provides enough leeway for member states to meet these standards.

Consent

Without "necessity" or "authorization," the only remaining path for justifying copyright filters under the GDPR is explicit consent by users. For data processing based on automated decision-making, a high level of individual control is required. The GDPR demands that consent be freely given, specific, informed, and unambiguous. As take-it-or-leave-it situations are against the rationale of true consent, it must be assessed whether the decision-making is necessary for the offered service. And consent must be explicit, which means that the user must give an obvious express statement of consent. It seems likely that few users will be interested in consenting to onerous filtering processes.

Article 22 says that even if automated decision-making is justified by user consent or by contractual necessity, platforms must safeguard user rights and freedoms. Users always have the right to obtain "human intervention" from platforms, to express their opinion about the content removal, and to challenge the decision. The GDPR therefore requires platforms to be fully transparent about why and how users' work was taken down or blocked.

Conclusion: Copyright Filters Must Respect Users' Privacy Rights

The significant negative effects on users subjected to automated decision-making, and the legal uncertainties about the situations in which copyright filters are permitted, would best be addressed by a policy of legislative self-restraint. Whatever decision national lawmakers take, they should ensure safeguards for users' privacy, freedom of speech and other fundamental rights before any uploads are judged, blocked or removed.

If Member States adopt this line of reasoning and fulfill their legal obligations in the spirit of EU privacy rules, it could choke off any future for EU-mandated, fully-automated upload filters. This will set the groundwork for discussions about general monitoring and filtering obligations in the upcoming Digital Services Act.

(Many thanks to Rossana Ducato for the exchange of legal arguments, which inspired this article).

 

 

Random Rating Generator...

Australian Freedom of Information request reveals a humiliating 2016 report about the inaccuracy of the Classification Board's automated game and app rating tool


Link Here 4th March 2020
Full story: Game Censorship in Australia...Classification board, video game, cuts
Several times last year Australian game ratings were reported as arbitrary, having been assigned by the Australian Classification Board's IARC automated game and app rating tool.

Variants of the same game on different platforms appeared in the classification database with wildly different outcomes. One game managed to be 15 rated, 18 rated and banned. Inevitably, when the shit hit the fan and the incompetent ratings attracted publicity, human censors stepped in, sorted out the rating (down to 15), and expunged all the embarrassing misfires from the database.

Well it seems that the shoddy system has been discussed for a while and a damning report from 2016 has just been published as a result of a Freedom of Information request.

The report reveals that a selection of ratings from the tool were audited by comparing them with assessments from human censors.

Results were particularly atrocious for the higher ratings. A table on page 13 reveals that:

  • 56% of M (PG-15) ratings assigned by the tool were wrong
  • 72% of MA 15+ ratings were wrong
  • 100% of R 18+ ratings were wrong
  • 99% of RC (banned) ratings were wrong

In all of these categories the automated ratings were nearly always lowered by the audit.

The failure of the system was attributed to inaccurate data input, but surely this is a systemic failure to define tightly enough the data required.

 

 

The Invisible Man...

The latest cinema release to be cut for a lower rating


Link Here 2nd March 2020

The Invisible Man is a 2020 Australia / USA Sci-Fi horror thriller by Leigh Whannell.
Starring Elisabeth Moss, Aldis Hodge and Oliver Jackson-Cohen. BBFC link IMDb

The film follows Cecilia, who receives the news of her abusive ex-boyfriend's suicide. She begins to re-build her life for the better. However, her sense of reality is put into question when she begins to suspect her deceased lover is not actually dead.

BBFC advised category cuts were required for a 15 rated cinema release in 2020. The Irish cinema release looks to be uncut but 16 rated.

Versions

BBFC uncut
uncut
run: 124:21s
pal: 119:23s
Ireland 16
Ireland
Ireland: Passed 16 uncut for "strong psychological threat and violence. Scene of self-harm"
  • 2020 cinema release

The Irish cinema version is 3s longer and references the cut scene in the consumer advice, so it seems likely that this is the uncut version.

BBFC cut
advised category
cuts 
cut: 3s
run: 124:18s
pal: 119:20s
15 UK: Passed 15 for strong bloody violence, threat, language, domestic abuse after BBFC advised pre-cuts:
  • 2020 cinema release
The BBFC commented:
  • This film was originally seen for advice. The company was advised it was likely to be classified 18 uncut but that their preferred 15 classification could be obtained by making small changes to one scene to remove bloody injury detail during an attempted suicide. When the film was submitted for formal classification, the shots in question had been removed and the film was classified 15.

 

 

Gap years...

Australian film distributors call for a PG-13 rating


Link Here 2nd March 2020
An Australian film industry coalition is calling for a new classification between PG and M (which is a PG-15 rating).

Major and independent film distributors and exhibitors are urging the federal government to adopt a new PG13 classification which they say would benefit family-friendly Australian and international films that get M ratings.

Echoing calls by Screen Producers Australia and the Australian Children's Television Foundation, the Film Industry Associations (FIA) also advocates a uniform classification system across all delivery platforms, with self-classification by the industry, overseen by a government regulator.

The current review system is no longer fit for purpose, the FIA says in its submission to the government classification review. It is expensive and unfeasibly time-consuming in an environment where digital distribution has minimised the time between the delivery of a film and its release date.

 

 

Old text...

Rajan Zed complains about Veda India Pale Ale from Three Hills Brewing


Link Here 2nd March 2020
Full story: Rajan Zed...Taking easy offence at hindu imagery
The perennial hindu whinger Rajan Zed is urging Three Hills Brewing in Northamptonshire to apologize and withdraw its Veda India Pale Ale, calling it highly inappropriate.

He said that inappropriate usage of Hindu scriptures, deities, concepts, symbols or icons for commercial or other agendas was not okay as it hurt the devotees. The Vedas were revealed Sanskrit texts considered an eternal, uncreated, divine and direct transmission from the Absolute. The Vedas were the foundation of Hinduism and included the Rig-Veda, the world's oldest extant scripture. Zed claimed:

Using Vedas to sell beer was highly insensitive and trivializing of the immensely revered body of sacred and serious knowledge.

Shavasana Ale

Rajan Zed is also urging Newport, Oregon based Rogue Ales & Spirits brewery to apologize and rename its "Shavasana" (Imperial Granola Blonde Ale) beer, calling it highly inappropriate.

Zed stated that Shavasana, a highly important posture in yoga, was the ultimate act of conscious surrender and was also used in Yoganidra meditation. Yogis slipped into blissful neutrality in Shavasana.

 

 

Endemic censorship...

Plague Inc game banned in China


Link Here 1st March 2020
Full story: Games censorship in China...A wide range of censorship restrictions
A game which challenges players to spread a deadly virus around the world has been banned in China, its makers have said.

Plague Inc. has been pulled from the Chinese app store for including illegal content, British-based developer Ndemic Creations said. In a statement, Ndemic Creations said:

We have some very sad news to share with our China-based players. We've just been informed that Plague Inc. 'includes content that is illegal in China as determined by the Cyberspace Administration of China' and has been removed from the China App Store. This situation is completely out of our control.

Plague Inc. has become a huge hit since it was launched eight years ago. It now has 130 million players worldwide and soared in popularity in China amid the coronavirus outbreak, becoming the bestselling app in the country in January. Some players suggested they were downloading the game as a way to cope with fears surrounding the virus.

 

 

Offsite Article: Doctor Sleep...


Link Here 1st March 2020
MovieCensorship.com compares the Theatrical Version with the Director's Cut

See article from movie-censorship.com

