Melon Farmers Original Version

Censor Watch


 2008   2009   2010   2011   2012   2013   2014   2015   2016   2017   2018   2019   2020   2021   2022   2023   2024 
Feb   Jan   Mar   April   May   June   July   Aug   Sept   Oct   Nov   Dec   Latest  


Best Short Docmentary...

Hong Kong bans broadcast of the Oscars Award Ceremony as China is sensitive about a nominated film about the Hong Kong democracy protests

Link Here29th March 2021
Full story: Film Censorship in China...All Chinese films censored to be suitable for kids
Do Not Split is a 2020 Norway / USA documentary short film by Anders Hammer
Starring Lucie Fouble and Colette Marin-Catherine IMDb

In 2019 Hong Kong was rocked by the largest protests since Britain handed back the area to China in 1997. This is the story of the protests, told through a series of demonstrations by local protesters that escalate into conflict when highly armed police appear on the scene.

For the first time in more than half a century, Hong Kong movie-lovers won't be able to watch the Oscars. The city's largest TV network TVB won't broadcast next month's ceremony after China asked media to play down the awards, following the nomination of a documentary on Hong Kong's pro-democracy protests and concern over the political views of Best Director contender Chloe Zhao .

Bloomberg reported earlier this month that the Communist Party's propaganda department told all local media outlets to scrap live broadcasts of the Oscars and focus coverage on awards that aren't seen as controversial.

China is wound up by Do Not Split, which chronicles the 2019 demonstrations against China's tightening grip over the former British territory and was nominated for best short documentary.



Caveman attitudes...

Captain Underpants author withdraws graphic novel after claims of racism

Link Here29th March 2021
Captain Underpants author Dav Pilkey has apologised for suuposed harmful racial stereotypes and passively racist imagery in one of his graphic novels for children, which has now been withdrawn by his publisher.

The Adventures of Ook and Gluk: Kung-Fu Cavemen from the Future , first published in 2010, follows two cavemen who travel to the year 2222 and meet Master Wong, a martial arts instructor.

The book's publisher Scholastic announced that it would stop distributing the book and remove all mention of it from its website. Scholastic said:

Together, we recognise that this book perpetuates passive racism. We are deeply sorry for this serious mistake.

In a letter, Pilkey apologised:

But this week it was brought to my attention that this book also contains harmful racial stereotypes and passively racist imagery. I wanted to take this opportunity to publicly apologise for this. It was and is wrong and harmful to my Asian readers, friends, and family, and to all Asian people.

I hope that you, my readers, will forgive me, and learn from my mistake that even unintentional and passive stereotypes and racism are harmful to everyone. I apologise, and I pledge to do better.



Updated: A censorship mystery...

The Australian Censorship Board has banned the video game Disco Elysium: The Final Cut

Link Here29th March 2021
Full story: Games Censorship in Australia...Censorship rules for games
Disco Elysium: The Final Cut is the latest video game in a long line of censorship casualties in Australia.

The game launches on March 30th 2021 for PlayStation and Stadia owners but the Australian government has banned it from sale in the country.

The Australian Censorship Board hasn't specified exactly why Disco Elysium's been banned and developer ZA/UM has yet to publicly respond on this. However the core gameplay mechanics prominently include drugs and alcohol and which is a bit of a no-no for the country's censors.


Update: Criticising Australia's archaic games censorship

29th March 2021. Thanks to Daniel. See article from

The banning of video game Disco Elysium from sale in Australia has renewed calls for the Australian government to overhaul the classification system to move away from the moral panic associated with video games.

The chief executive of the Interactive Games & Entertainment Association, Ron Curry, told Guardian Australia:

Games are treated differently and the classification guidelines do not hide it. In spite of the government's own research to the contrary, when an R18+ classification was introduced for games they still insisted on making interactivity a determinant in classifying games, unlike film and publications.

There are also other restrictions levelled at games around violence, sex, drug use and incentives that aren't applied to film.

The sad reality is that the national classification system applies a stricter set of rules for video games than it does for pretty much every other kind of content, reflecting the early 1990s era in which those rules were written, when video games were associated with a moral panic and certainly not treated as the mainstream medium and artistic discipline that they are.

The Australian Lawyers Alliance said in a submission to a public consultation on the government's upcoming internet censorship bill named the Online Safety Bill:

The online classification system needed review, which should be done before the online safety bill passes. This bill should not be reliant on such an outdated classification system. The ALA therefore submits that this legislation should not proceed until such a review into the [classification scheme], incorporating community consultation, has been undertaken.



No comments...

Government notes that porn websites without user comments or uploads will not be within the censorship regime of the upcoming Online Safety Bill

Link Here27th March 2021
Written Question, answered on 24 March 2021

Baroness Grender Liberal Democrat Life peer Lords

To ask Her Majesty's Government which commercial pornography companies will be in scope of the Online Safety Bill; and whether commercial pornography websites which

  1. do not host user-generated content, or

  2. allow private user communication, will also be in scope.

Baroness Barran Conservative

The government is committed to ensuring children are protected from accessing online pornography through the new online safety framework. Where pornography sites host user-generated content or facilitate online user interaction such as video and image sharing, commenting and live streaming, they will be subject to the new duty of care. Commercial pornography sites which allow private user to user communication will be in scope. Where commercial pornography sites do not have user-generated functionality they will not be in scope. The online safety regime will capture both the most visited pornography sites and pornography on social media, therefore covering the majority of sites where children are most likely to be exposed to pornography.

We expect companies to use age assurance or age verification technologies to prevent children from accessing services which pose the highest risk of harm to children, such as online pornography. We are working closely with stakeholders across industry to establish the right conditions for the market to deliver age assurance and age verification technical solutions ahead of the legislative requirements coming into force.



Dangerous religious studies...

UK blasphemy rules enforced by implicit intimidation results in a teacher being suspended for teaching a factual lesson about the Mohammed cartoons

Link Here27th March 2021
Full story: Mohammed Cartoons...Cartoons outrage the muslim world
One of the most illogical, unjust and unreasonable of the rules of PC culture is that muslims are granted the privilege of the authorities turning a blind eye to violence and threats of violence. Credible fear of violence is very much a trump card in governing people's behaviour, and so an informal modern day blasphemy prohibition has been allowed to trump historic rights to free speech.

The latest example from Batley in York is described by campaigners petitioning in support of a well meaning teacher who was caught up in a supposed transgression of the UK's de facto blasphemy law. The petitioners explain:

Keep the Religious Studies Teacher at Batley Grammar School.

The teacher was trying to educate students about racism and blasphemy. He warned the students before showing the images and he had the intent to educate them. He does not deserve such large repercussions. He is not racist and did not support the Islamiphobic cartoons in any manner.

This has got out of hand and due to this, students have missed out on lessons because of peaceful protestors . Them blocking off entrances did not allow teachers to work or enter the school. Think of those who would be affected due to this lesson spiralled out of hand? Teachers, The School, The Community, Children, the RS Teacher's family and his own financial stability since he will no longer be able to land a job due to the fact that his reputation has been tarnished.

See petition from



Chairman of Australia...

Three artworks taken down in Canberra gallery due to Chinese complaints

Link Here27th March 2021
Full story: China International Censors...China pressures other countries into censorship
A Canberra art gallery has removed three artworks relating to Chinese leaders after receiving complaints and hundreds of angry messages in what appears to be an attack coordinated by China.

The Ambush Gallery at the Australian National University (ANU) in Canberra removed three pieces -- including one depicting Communist China's founding leader Mao Zedong as Batman and another depicting him as Winnie the Pooh.

The works were part of a 25-piece exhibition exploring the pressures people are facing during the COVID-19 pandemic. The whole show is a comment on the abuse of power, artist Luke Cornish said.

In response to criticism of supposed racism, Cornish previously apologised for the work depicting Chairman Mao as Batman, which he said was an attempt to mock conspiracy theories around coronavirus origins. He said the other two works were taking the piss out of an authoritarian regime.



Offsite Article: Free speech friendly video sharing platforms...

Link Here27th March 2021
Full story: YouTube Censorship...YouTube censor videos by restricting their reach
A few suggestions that are not controlled by a Big Tech giant and that support free expression.

See article from



Ofcom thinks it can 'regulate' cancel culture, PC lynch mobs and the kangaroo courts of wokeness...

The new internet censor sets outs its stall for the censorship of video sharing platforms

Link Here24th March 2021
Full story: Ofcom Video Sharing Censors...Video on Demand and video sharing
Ofcom has published its upcoming censorship rules for video sharing platforms and invites public responses up until 2nd June 2021. For a bit of self justification for its censorship, Ofcom has commissioned a survey to find that YouTube users and the likes are calling out for Ofcom censorship. Ofcom writes:

A third of people who use online video-sharing services have come across hateful content in the last three months, according to a new study by Ofcom.

The news comes as Ofcom proposes new guidance for sites and apps known as 'video-sharing platforms' (VSPs), setting out practical steps to protect users from harmful material.

VSPs are a type of online video service where users can upload and share videos with other members of the public. They allow people to engage with a wide range of content and social features.

Under laws introduced by Parliament last year, VSPs established in the UK must take measures to protect under-18s from potentially harmful video content; and all users from videos likely to incite violence or hatred, as well as certain types of criminal content. Ofcom's job is to enforce these rules and hold VSPs to account.

The  draft guidance is designed to help these companies understand what is expected of them under the new rules, and to explain how they might meet their obligations in relation to protecting users from harm.

Harmful experiences uncovered

To inform our approach, Ofcom has researched how people in the UK use VSPs, and their claimed exposure to potentially harmful content. Our major findings are: 

  • Hate speech. A third of users (32%) say they have witnessed or experienced hateful content. Hateful content was most often directed towards a racial group (59%), followed by religious groups (28%), transgender people (25%) and those of a particular sexual orientation (23%).

  • Bullying, abuse and violence. A quarter (26%) of users claim to have been exposed to bullying, abusive behaviour and threats, and the same proportion came across violent or disturbing content.

  • Racist content. One in five users (21%) say they witnessed or experienced racist content, with levels of exposure higher among users from minority ethnic backgrounds (40%), compared to users from a white background (19%). 

  • Most users encounter potentially harmful videos of some sort. Most VSP users (70%) say they have been exposed to a potentially harmful experience in the last three months, rising to 79% among 13-17 year-olds.

  • Low awareness of safety measures. Six in 10 VSP users are unaware of platforms' safety and protection measures, while only a quarter have ever flagged or reported harmful content.

Guidance for protecting users

As Ofcom begins its new role regulating video-sharing platforms, we recognise that the online world is different to other regulated sectors. Reflecting the nature of video-sharing platforms, the new laws in this area focus on measures providers must consider taking to protect their users -- and they afford companies flexibility in how they do that.

The massive volume of online content means it is impossible to prevent every instance of harm. Instead, we expect VSPs to take active measures against harmful material on their platforms. Ofcom's new guidance is designed to assist them in making judgements about how best to protect their users. In line with the legislation, our guidance proposes that all video-sharing platforms should provide:

  • Clear rules around uploading content. VSPs should have clear, visible terms and conditions which prohibit users from uploading the types of harmful content set out in law. These should be enforced effectively.

  • Easy flagging and complaints for users. Companies should implement tools that allow users to quickly and effectively report or flag harmful videos, signpost how quickly they will respond, and be open about any action taken. Providers should offer a route for users to formally raise issues or concerns with the platform, and to challenge decisions through dispute resolution. This is vital to protect the rights and interests of users who upload and share content.

  • Restricting access to adult sites. VSPs with a high prevalence of pornographic material should put in place effective age-verification systems to restrict under-18s' access to these sites and apps.

Enforcing the rules

Ofcom's approach to enforcing the new rules will build on our track record of protecting audiences from harm, while upholding freedom of expression. We will consider the unique characteristics of user-generated video content, alongside the rights and interests of users and service providers, and the general public interest.

If we find a VSP provider has breached its obligations to take appropriate measures to protect users, we have the power to investigate and take action against a platform. This could include fines, requiring the provider to take specific action, or -- in the most serious cases -- suspending or restricting the service.Consistent with our general approach to enforcement, we may, where appropriate, seek to resolve or investigate issues informally first, before taking any formal enforcement action.

Next steps

We are inviting all interested parties to comment on our proposed draft guidance, particularly services which may fall within scope of the regulation, the wider industry and third-sector bodies. The deadline for responses is 2 June 2021. Subject to feedback, we plan to issue our final guidance later this year. We will also report annually on the steps taken by VSPs to comply with their duties to protect users.


Ofcom has been given new powers to regulate UK-established VSPs. VSP regulation sets out to protect users of VSP services from specific types of harmful material in videos. Harmful material falls into two broad categories under the VSP Framework, which are defined as:

  • Restricted Material , which refers to videos which have or would be likely to be given an R18 certificate, or which have been or would likely be refused a certificate. It also includes other material that might impair the physical, mental or moral development of under-18s.

  • Relevant Harmful Material , which refers to any material likely to incite violence or hatred against a group of persons or a member of a group of persons based on particular grounds. It also refers to material the inclusion of which would be a criminal offence under laws relating to terrorism; child sexual abuse material; and racism and xenophobia.

The Communications Act sets out the criteria for determining jurisdiction of VSPs, which are closely modelled on the provisions of the Audiovisual Media Services Directive. A VSP will be within UK jurisdiction if it has the required connection with the UK. It is for service providers to assess whether a service meets the criteria and notify to Ofcom that they fall within scope of the regulation. We recently published guidance about the criteria to assist them in making this assessment. In December 2020, the Government confirmed its intention to appoint Ofcom as the regulator of the future online harms regime . It re-stated its intention for the VSP Framework to be superseded by the regulatory framework in new Online Safety legislation.



Bamby H2O raps: 'Drugged up at the function slurring all my words!'...

And the advert censor gets all 'concerned' about the rapper's 'apparent disregard' for its censorship code!

Link Here 24th March 2021

A pre-roll ad on YouTube for rapper Bamby H2O's single, titled Over It , seen on 16 December 2020, featured an instrumental played over scenes of a powdery substance being cut with a razor, a rolled up cigarette being lit and a rolled up bank note being used to consume a powdery substance. The opening lyrics stated Fucked up over you and Drugged up at the function slurring all my words!. One scene included the outline of a woman pulling down her top to reveal her breasts. The ad included lyrics such as Shawty wanna fuck with me she gotta feed me first, Double D's I'm drowning in disbelief and You chase a bitch and repeated references to the consumption of drugs throughout.

The ad was seen before a synth wave music playlist.

A complainant challenged whether the ad was offensive and irresponsible because it featured references to drug use and paraphernalia, nudity and explicit language.

Bamby H2O did not respond to the ASA's enquiries.

ASA Assessment: Complaint upheld

The ASA was concerned by Bamby H2O's lack of response and apparent disregard for the Code, which was a breach of CAP Code. We reminded them of their responsibility to provide a substantive response to our enquiries and told them to do so in future.

The ad was for a full-length rap music video and was seen before an unrelated music playlist on YouTube. It featured a number of scenes depicting the consumption of illegal drugs, including a bottle of amphetamine tablets and a powder consumed through a rolled up bank note. We considered that the ad, which featured illegal drugs and drug-use was irresponsible for depicting the use of drugs in this context. The lyrics also referenced being drugged up and featured expletives such as the word fucking which was likely to seriously offend many people. The ad also featured the outline of a woman exposing her breasts which we considered was gratuitous and objectified women.

While the video was shot using visual effects, its content was graphic and explicit. Furthermore, we considered that viewers of an unrelated music playlist would not expect to be served an ad that featured drug references, nudity and strong language. We concluded that the ad was irresponsible and likely to cause serious and widespread offence and therefore breached the Code.

The ad must not appear again in its current form. We told Bamby H2O to ensure that their ads did not cause serious or widespread offence, and to ensure their ads were not socially irresponsible.



MPs identified as totally uncaring for the safety of internet users...

MPs who don't like being insulted on Twitter line up to call for all users to hand over identifying personal details to the likes of Google, Facebook and the Chinese government

Link Here 24th March 2021
Online Anonymity was debated in the House of Commons debated on Wednesday 13 January 2021.

The long debate was little more than a list of complaints from MPs aggrieved at aggressive comments on social media, often against themselves.

As always seems to be the case with parliamentary debate, it turned into a a long calls of 'something must be done', and hardly comment thinking about the likely and harmful consequences of what they are calling for.

As an example here is part of the complaint from debate opener, Siobhan Bailiie:

The new legislative framework for tech companies will create a duty of care to their users. The legislation will require companies to prevent the proliferation of illegal content and activity online, and ensure that children who use their services are not exposed to harmful content. As it stands, the tech companies do not know who millions of their users are, so they do not know who their harmful operators are, either. By failing to deal with anonymity properly, any regulator or police force, or even the tech companies themselves, will still need to take extensive steps to uncover the person behind the account first, before they can tackle the issue or protect a user.

The Law Commission acknowledged that anonymity often facilitates and encourages abusive behaviours. It said that combined with an online disinhibition effect, abusive behaviours, such as pile-on harassment, are much easier to engage in on a practical level. The Online Harms White Paper focuses on regulation of platforms and the Law Commission's work addresses the criminal law provisions that apply for individuals. It is imperative, in my view, that the Law Commission's report and proposals are fully debated prior to the online harms Bill passing through Parliament. They should go hand in hand.

Standing in Parliament, I must mention that online abuse is putting people off going into public service and speaking up generally. One reason I became interested in this subject was the awful abuse I received for daring to have a baby and be an MP. Attacking somebody for being a mum or suggesting that a mum cannot do this job is misogynistic and, quite frankly, ridiculous, but I would be lying if I said that I did not find some of the comments stressful and upsetting, particularly given I had just had a baby.

Is there a greater impediment to freedom of expression than a woman being called a whore online or being told that she will be raped for expressing a view? It happens. It happens frequently and the authors are often anonymous. Fantastic groups like 50:50 Parliament, the Centenary Action Group, More United and Compassion in Politics are tackling this head on to avoid men and women being put off running for office. One of the six online harm asks from Compassion in Politics is to significantly reduce the prevalence and influence of anonymous accounts online.

The Open Rights Group said more about consequences in a short email than the MPs said in a hour of debate:

Mandatory ID verification would open a Pandora's Box of unintended consequences. A huge burden would be placed on site administrators big and small to own privatised databases of personally identifiable data. Large social media platforms would gain ever more advantage over small businesses, open source projects and startups that lack the resources to comply.

Requirements for formal documentation, such as a bank account, to verify ID would disenfranchise those on low incomes, the unbanked, the homeless, and people experiencing other forms of social exclusion. Meanwhile, the fate of countless accounts and astronomical amounts of legal content would be thrown into jeopardy overnight.



Virtue signalling in law...

Utah Governor signs law requiring internet devices sold locally to be pre-loaded with Net Nanny like porn blocking software

Link Here24th March 2021
The Republican governor of Utah has signed silly legislation requiring all cellphones and tablets sold in the conservative state to be sold with software that automatically blocks pornography.

Governor Spencer Cox claims the measure would send an important message about preventing children from accessing explicit online content.

In fact the legislation is mere virtue signalling and makes no meaningful proposals how its requirements can be implemented in practice. So there is a get out clause that says no immediate steps toward implementation will be made unless five other states enact similar laws, a provision introduced to address concerns that it would be difficult to implement.

The American Civil Liberties Union of Utah said the constitutionality of the bill was not adequately considered and that it will likely be argued in court.



Fossil fuels outrage...

Lee Hurst briefly suspended from Twitter over a joke about Greta Thornberg

Link Here24th March 2021
Full story: Twitter Censorship...Twitter offers country by country take downs
Lee Hurst was briefly suspended from Twitter over a tweeted joke about Greta Thunberg.

The comedian wound up the easily after posting his joke about the 18-year-old environmental activist. He tweeted:

As soon as Greta discovers cock, she'll stop complaining about the single use plastic it's wrapped in.

But his account - which he headlines desperately trying to be relevant is now back up and running again.



Offsite Article: The Daily Mail starts a conversation with the BBFC...

Link Here24th March 2021
Woke trans propaganda from the BBFC draws an inevitably critical response from the Daily Mail

See article from



More cunning linguistics...

Mrs. Doubtfire director speaks of unreleased R rated jokes

Link Here22nd March 2021
Mrs. Doubtfire is a 1993 US comedy by Chris Columbus
With Robin Williams & Sally Field. Melon Farmers linkYouTube icon BBFC link 2020 IMDb

The film was originally released in cinemas with an uncut 12 rating in 1984. However the rating did not go down well with viewers. Family groups were turned away and some local councils overruled the BBFC rating to allow for family viewing. So distributors had a rethink and BBFC category cuts were then made for a PG rated 1994 cinema re-release, and for the subsequent VHS release.

The film was later passed PG uncut for 2000 DVD but this wasn't released and the distributors continued with the cut PG version until 2003 when the film was released uncut on DVD with a PG rating. In 2013 the classification was uprated to 12 uncut for Blu-ray.

Director Chris Columbus says he's open to making a documentary about the creation of the film, showcasing some of Robin Williams' hilariously funny R-rated material. Speaking with Entertainment Weekly, Columbus said:

There are three different versions of it, including an R-rated cut. The reality is that there was a deal between Robin and myself, which was, he'll do one or two, three scripted takes. And then he would say, 'Then let me play.' And we would basically go on anywhere between 15 to 22 takes, I think 22 being the most I remember, the helmer recalls.

As a result, Williams came up with new versions and new lines in every take. He would sometimes go into territory that wouldn't be appropriate for a PG-13 movie, but certainly appropriate and hilariously funny for an R-rated film.



An ailing BBC seeks legal protection...

The BBC seeks a law change to force all UK TV platforms to carry the BBC

Link Here19th March 2021
There was a time a few years ago when the BBC was considered an essential and integral component of any TV platform that sought to get established in the UK.

More recently the British Wokecasting Corporation decided to alienate much of its viewership with biased news and programming that was more about woke morality preaching than entertainment.

So now in the face of increasing unpopularity, the BBC is having to seek the help of government in forcing TV platforms to carry the BBC by law.

The BBC is calling for legislation to ban the sale of plug-in devices, such as Amazon's Fire TV Stick, that don't carry the BBC prominently on its platform. Clare Sumner, BBC director of policy, called for for urgent legislation to update the 2003 Communications Act to modernise the regulatory framework to ensure public service broadcasters (PSBs) are prominent and available on all major TV platforms.

The new law would prevent providers of TV user interfaces (for example, smart TV manufacturers or global tech providers) from releasing products in the UK without complying with these rules. TV censor Ofcom is said to support such a law change to give services like the BBC iPlayer and ITV Hub guaranteed prominence on Smart TVs.




Russian speaks tough about Twitter refusing to play ball with local censorship requirements

Link Here19th March 2021
Full story: Internet Censorship in Russia 2020s...Russia and its repressive state control of media
This week Russian authorities warned that if Twitter doesn't fall into line of responding to Russian censorship demands then it could find itself blocked in the country in a month's time. Anticipating the possible fallout, including Russian users attempting to bypass the ban, a government minister has warned that blocking VPNs will be the next step.

For some time, local telecoms censor Roscomnadzor has criticized Twitter for not responding to its calls for prohibited content to be taken down. Roscomnadzor says that more than 3,100 takedown demands have gone unheeded so far.

In what appeared to be a retaliatory move, last week authorities attempted to slow down Twitter access in Russia, but this seems to have caused widespread disruption to many other websites, perhaps those that hang through waiting for linked Twitter content.



Age Appropriate Instagram?...

Facebook is creating an Instagram for kids

Link Here19th March 2021
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
Facebook is planning to build a version of the popular photo-sharing app Instagram that can be used by children under the age of 13, according to an internal company post obtained by BuzzFeed News.

Vishal Shah, Instagram's vice president of product, wrote on an employee message board:

I'm excited to announce that going forward, we have identified youth work as a priority for Instagram and have added it to our H1 priority list. We will be building a new youth pillar within the Community Product Group to focus on two things:
  • (a) accelerating our integrity and privacy work to ensure the safest possible experience for teens and
  • (b) building a version of Instagram that allows people under the age of 13 to safely use Instagram for the first time.

Instagram currently 'forbids' children under the age of 13 from using the service, but it is widely used by children anyway.

Maybe this announcement ties in with the UK's requirement for age appropriate data sharing that comes into force in September 2021.



Age of censorship...

An internet porn age verification bill progresses in Canada

Link Here19th March 2021
A bill has passed 2nd reading in the Canadian Senate that would require porn websites to implement age verification for users.

Bill S-203, An Act to restrict young persons' online access to sexually explicit material, will now be referred to the Standing Senate Committee on Legal and Constitutional Affairs.



Updated: Dangerous legislation...

A diverse group of organisations criticise Australia's hastily drafted and wide ranging internet censorship bill

Link Here19th March 2021
Full story: Internet Censorship in Australia...Wide ranging state internet censorship
A number of legal, civil and digital rights groups, tech companies and adult industry organisations have raised significant concerns about Australia's proposed internet censorship legislation: its potential to impact those working in adult industries, to lead to online censorship, and the vast powers it hands to a handful of individuals.

Despite this, the legislation was introduced to Parliament just 10 days after the government received nearly 400 submissions on the draft bill, and the senate committee is expected to deliver its report nine days after submissions closed. Stakeholders were also given only three working days to make a submission to the inquiry.

In a submission to the inquiry, Australian Lawyers Alliance (ALA) president Graham Droppert said the government should not proceed with the legislation because it invests excessive discretionary power in the eSafety Commissioner and also the Minister with respect to the consideration of community expectations and values in relation to online content. Droppert said:

The ALA considers that the bill does not strike the appropriate balance between protection against abhorrent material and due process for determining whether content comes within that classification.

Digital Rights Watch has been leading the charge against the legislation. Digital Rights Watch programme director Lucie Krahulcova said:

The powers to be handed to the eSafety Commissioner, which was established in 2015 to focus on keeping children safe online, are a continuation of its broadly expanding remit, and should be cause for concern.

The new powers in the bill are discretionary and open-ended, giving all the power and none of the accountability to the eSafety Office. They are not liable for any damage their decisions may cause and not required to report thoroughly on how and why they make removal decisions. This is a dramatic departure from democratic standards globally.

Jarryd Bartle is a lecturer in criminal law and adult industry consultant, and is policy and campaigns advisor at the Eros Association. He said:

The bill as drafted is blatant censorship, with the eSafety commissioner empowered to strip porn, kink and sexually explicit art from the internet following a complaint, with nothing in the scheme capable of distinguishing moral panic from genuine harm.

Twitter and live streaming service Twitch have joined the mounting list of service providers, researchers, and civil liberties groups that take issue with Australia's pending Online Safety Bill.

Of concern to both Twitter and Twitch is the absence of due regard to different types of business models and content types, specifically around the power given to the relevant minister to determine basic online safety expectations for social media services, relevant electronic services, and designated internet services. Twitter said:

In order to continue to foster digital growth and innovation in the Australian economy, and to ensure reasonable and fair competition, it is critically important to avoid placing requirements across the digital ecosystem that only large, mature companies can reasonably comply with.

Likewise, Twitch believes it is important to consider a sufficiently flexible approach that gives due regard to different types of business models and content types.

Update: Fast tracked

19th March 2021. See article from

The Online Safety Bill 2021 will likely get an easy ride into law after a senate environment and communications committee gave it the nod of approval last week.

Under the government's proposed laws, the eSafety Commissioner will be given expanded censorship powers to direct social media platforms and other internet services to take down material and remove links to content it deems offensive or abusive.



Offsite Article: #SaveAnonymity: Together we can defend anonymity...

Link Here19th March 2021
Open Rights Group responds to a petition calling for identity verification for social media users

See article from



Group think...

Facebook announces new censorship measures for Facebook groups

Link Here17th March 2021
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'

It's important to us that people can discover and engage safely with Facebook groups so that they can connect with others around shared interests and life experiences. That's why we've taken action to curb the spread of harmful content, like hate speech and misinformation, and made it harder for certain groups to operate or be discovered, whether they're Public or Private. When a group repeatedly breaks our rules, we take it down entirely.

We're sharing the latest in our ongoing work to keep Groups safe, which includes our thinking on how to keep recommendations safe as well as reducing privileges for those who break our rules. These changes will roll out globally over the coming months.

We are adding more nuance to our enforcement. When a group starts to violate our rules, we will now start showing them lower in recommendations, which means it's less likely that people will discover them. This is similar to our approach in News Feed, where we show lower quality posts further down, so fewer people see them.

We believe that groups and members that violate our rules should have reduced privileges and reach, with restrictions getting more severe as they accrue more violations, until we remove them completely. And when necessary in cases of severe harm, we will outright remove groups and people without these steps in between.

We'll start to let people know when they're about to join a group that has Community Standards violations, so they can make a more informed decision before joining. We'll limit invite notifications for these groups, so people are less likely to join. For existing members, we'll reduce the distribution of that group's content so that it's shown lower in News Feed. We think these measures as a whole, along with demoting groups in recommendations, will make it harder to discover and engage with groups that break our rules.

We will also start requiring admins and moderators to temporarily approve all posts when that group has a substantial number of members who have violated our policies or were part of other groups that were removed for breaking our rules. This means that content won't be shown to the wider group until an admin or moderator reviews and approves it. If an admin or moderator repeatedly approves content that breaks our rules, we'll take the entire group down.

When someone has repeated violations in groups, we will block them from being able to post or comment for a period of time in any group. They also won't be able to invite others to any groups, and won't be able to create new groups. These measures are intended to help slow down the reach of those looking to use our platform for harmful purposes and build on existing restrictions we've put in place over the last year.



Updated: All men are rapists...

So peer Floella Benjamin attempts to revive porn age verification censorship because porn viewing is just one step away from park murder

Link Here17th March 2021
The pro-censorship member of the House of Lords has tabled the following amendment to the Domestic Abuse Bill to reintroduce the internet porn censorship and age verification requirements previously dropped by the government in October 2019.

Amendment 87a introduces a new clause:

Impact of online pornography on domestic abuse

  1. Within three months of the day on which this Act is passed, the Secretary of State must commission a person appointed by the Secretary of State to investigate the impact of access to online pornography by children on domestic abuse.

  2. Within three months of their appointment, the appointed person must publish a report on the investigation which may include recommendations for the Secretary of State.

  3. As part of the investigation, the appointed person must consider the extent to which the implementation of Part 3 of the Digital Economy Act 2017 (online pornography) would prevent domestic abuse, and may make recommendations to the Secretary of State accordingly.

  4. Within three months of receiving the report, the Secretary of State must publish a response to the recommendations of the appointed person.

  5. If the appointed person recommends that Part 3 of the Digital Economy Act 2017 should be commenced, the Secretary of State must appoint a day for the coming into force of that Part under section 118(6) of the Act within the timeframe recommended by the appointed person."

Member's explanatory statement

This amendment would require an investigation into any link between online pornography and domestic abuse with a view to implementing recommendations to bring into effect the age verification regime in the Digital Economy Act 2017 as a means of preventing domestic abuse.

Update: Defeated

17th March 2021. See article from

The amendment designed to resurrect the Age Verification clauses of the Digital Economy Act 2017 was defeated by 242 votes to 125 in the House of Lords.

The government minister concluding the debate noted that the new censorship measures included in the Online Harms Bill are more comprehensive than those under the Digital Economy Act 2017. He also noted that although the upcoming censorship measures will take significant time to implement, reviving the old censorship measures would also take time.

In passing the minister also explained that one of the main failings of the act was that site blocking would not prove effective, due to porn viewers being easily able to evade ISP blocks by switching to encrypted DNS servers via DNS over HTTPS (DoH). Presumably government internet snooping agencies don't fancy losing the ability to snoop on the browsing habits of all those wanting to continue viewing a blocked porn site such as Pornhub.



Surely someone has an idea somewhere...

Government seeks ideas on how to impose or implement age verification in retail

Link Here17th March 2021

Both on- and off-licence retailers, bars and restaurants have been invited to put forward proposals to trial new technology when carrying out age verification checks.

The call for proposals has been launched by the Home Office and the Office for Product Safety and Standards, and retailers who are successful will be able to pilot new technology to improve the process of ID check during the sale of alcohol and other age restricted items.

The pilots will explore how technology can strengthen current measures in place to prevent those under 18 from buying alcohol, reduce violence or abuse towards shop workers and ensure there are robust age checks on the delivery, click and collect or dispatch of alcohol.

It will be up to applicants to suggest products to trial within their proposals, but technologies that may be tested include a holographic or ultraviolet identification feature on a mobile phone.

Retailers will be able to submit applications online and will be required to provide detail on how the technology works and how they plan to test it.

The pilots will allow a wide range of digital age verification technology to be tested, and the findings will be used to understand the impact of this technology and inform future policy, as part of the government's ambition to create an innovative digital economy.

Retailers will still be required to carry out physical age verification checks alongside any digital technology in line with the current law, which requires a physical identification card with a holographic mark or ultraviolet feature upon request in the sale of alcohol.

Trials by successful applicants will begin in the summer and must be completed by April 2022.

Retailers can submit their proposals to trial digital age verification technology online. Submissions close on 31 May and successful applicants will be notified by 2 July.



Searching for a false sense of security...

Google to be sued for misleadingly naming Chrome's 'incognito mode' when in fact Google continues to snoop on browsing history and hands over the data to ne'er do wells such as advertisers and police

Link Here17th March 2021
A US federal judge has decided that an attempt to launch a class action lawsuit against Google can proceed. The filing concerns the incognito (or private) mode in Google's Chrome that five plaintiffs say is misleading users into expecting that their personal data would be protected while using the browser in this way.

Chrome informs incognito users that they'd gone incognito: now you can browse privately. This might lead many to believe they are free from Google's own invasive and omnipresent tracking and data collection, but in reality it only means other people who use the device won't see your browsing activity. But Google does not inform users that it will continue to collect data for targeted ads.

The lawsuit also alleges that Google unlawfully intercepted data under the US Wiretap Act while users were in incognito.



Offsite Article: The extreme metal musician fighting Poland's blasphemy laws...

Link Here 17th March 2021
Full story: Blasphemy in Poland...Under duress for minor comments about religion
The acclaimed and controversial Behemoth frontman Nergal faces possible prison time for a photo of his foot on a religious icon. He argues that Poland must become secular to evolve

See article from



Devotion to sales...

Video game under censorship duress from China opens its own international sales website

Link Here14th March 2021
The Taiwanese games developer Red Candle fell foul of China's censorship reach over the inclusion of a Winnie the Pooh meme poking fun at the Chinese president Xi Jinping.

That got the horror themed game Devotion banned from games distributors, most notably Steam. The game has been mostly blocked ever since.

After running into nothing but trouble on other people's platforms, the game's developers have decided to just sell the game themselves, opening up an online store for international customers that is selling digital, DRM-free copies of Devotion, and also their previous game Detention.

Developers Red Candle say that all future releases will also be on their own shop, and will also be free of DRM.



Believers in extreme state control...

Victorian MPs recommend a state ban on Nazi insignia

Link Here14th March 2021

Displaying the Nazi swastika would be a criminal offence in Victoria if the government implements a recommendation of a cross-party parliamentary inquiry into anti-vilification laws.

After a year-long investigation the final report from the legal and social issues committee urges the government to legislate tougher laws against hate speech and racist insignia.

Premier Daniel Andrews said that the government was open to criminalising Nazi symbols. It would be the first jurisdiction in Australia to impose such a ban.

The committee said it believed limitations on freedom of speech were justified when speech impinged on the human rights of others, citing recent events, including the storming of the US Capitol by far-right groups and social media giants banning former US president Donald Trump's accounts.



Don't mention the gay scene...

Russian film distributors cut gay scene from the Colin Firth movie Supernova

Link Here12th March 2021
Full story: Film Censorship in Russia...Censorship in the guise of banning strong language
Supernova is a 2020 UK gay drama by Harry Macqueen.
Starring Stanley Tucci and Colin Firth and Pippa Haywood. IMDb

Sam and Tusker are partners of 20 years who are traveling across England in their old RV, visiting friends, family and places from their past. Since Tusker was diagnosed with early-onset dementia two years ago, their time together is the most important thing they have.

A gay sex scene was cut from Supernova in Russian cinemas. The film was self-censored by film distributors there. At least one scene where the characters try to have sex after a dramatic dialogue has disappeared from the story.

World Pictures, the film's Russian distributor, cut the scene due to concerns that theaters would not screen Supernova and it may spark controversy due to excesses, according to critic Konstantin Kropotkin. These fears are rooted in Russia's gay propaganda law, which prohibits LGBTQ+ visibility in venues accessible to minors. This law has been used to penalize people and productions for a broad and often vague range of violations.

In addition to cutting a scene, World Pictures reportedly asked critics to remove any mention of gay from reviews. That intent backfired, the Times noted, as critics stressed how the censorship only further enhanced the film's love story and the heartfelt performances of its actors.




Ofcom escalates censorship of China's propaganda channel CGTN by adding 225k fine to the previously announced ban

Link Here12th March 2021

Ofcom has fined China's propaganda channel CGTN £225,000 for biased news reports about the Hong Kong democracy protests. Two fines were levied, with one being explained as follows:

Ofcom has imposed a financial penalty of £125,000 on Star China Media Limited in relation to its service CGTN for failing to comply with our broadcasting rules.

Between 11 August 2019 and 21 November 2019, CGTN broadcast the following five programmes:

  • The World Today, 11 August 2019, 17:00

  • The World Today, 26 August 2019, 08:00

  • The World Today, 31 August 2019, 07:00

  • The World Today, 2 September 2019, 16:00

  • China 24, 21 November 2019, 12:15

Each programme was concerned with the protests which were ongoing in Hong Kong during this period. These protests were initially in response to the Hong Kong Government's Extradition Law Amendment Bill that would have allowed criminal suspects in Hong Kong to be sent to mainland China for trial.

In Ofcom's Decisions published on 26 May 2020, in Issue 403 of the Broadcast and On Demand Bulletin (PDF, 706.0 KB), Ofcom found that each of the five programmes had failed to maintain due impartiality and had breached Rules 5.1, 5.11 and 5.12 of the Broadcasting Code.



Scrabbled minds vs scrambled minds...

Oxford English Dictionary continues to include factual usage definitions such as 'bitch' and 'bint' that offend woke sensibilities

Link Here12th March 2021
One of the recent targets of the cancel culture lynch mobs is to ban derogatory terms for women from dictionaries.

However Oxford University Press [OUP], publishers of the Oxford English Dictionary, have said that terms such as bint and bitch will remain in the Oxford Dictionary of English because to remove them would amount to censorship. Speaking at an event to mark International Women's Day, Katherine Martin, head of product for Oxford Languages, said:

'Bitch' is quite a common word in English. Part of what we do as lexicographers is to show the full range of meanings that it has.

To not show any aspect of a word's use would be akin to censorship. Lexicographers want to make facts available to the public, and the more synonyms and information [a word] has, the better.

Eleanor Maier, OUP's executive editor, said context was all-important:

As dictionary makers, we have a responsibility to accurately describe how language is used and that means we should include sexist and racist terms. But it's really important for us to contextualise them. So if a term is derogatory or highly offensive, we should say it.




Twitch seems to be rating streamers as suitable or not for brand advertising

Link Here12th March 2021
Live-streaming platform Twitch seems to be testing a new program to match streamers with 'appropriate' brands. It's understood that the tool, named the Brand Safety Score, automatically assigns streamers a rating based on analysis of a number of factors (including age, partnership status, and suspension history). This rating is then used to pair content creators with relevant advertising opportunities.

Presumably this phraseology means that anyone doing anything a bit adult will be banned from being able to monetise their content.

The new tool is yet to be officially confirmed by Twitch. A spokesperson for the platform has revealed that they are making efforts to better match the appropriate ads to the right communities, but asserted that the company will 'keep our community informed' of any updates.



Offsite Article: Authoritarianism...

Link Here12th March 2021
Full story: Internet Censorship in India...India considers blanket ban on internet porn
India's New Internet Rules Are a Step Toward Digital Authoritarianism. Here's What They Will Mean

See article from



Advantaging foreign companies...

If anyone is stupid enough to base a video sharing internet service in the UK, then you will have to sign up for censorship by Ofcom before 6th May 2021. After a year you will have to pay for the privilege too

Link Here10th March 2021
Full story: Ofcom Video Sharing Censors...Video on Demand and video sharing

Ofcom has published guidance to help providers self-assess whether they need to notify to Ofcom as UK-established video-sharing platforms.

Video-sharing platforms (VSPs) are a type of online video service which allow users to upload and share videos with the public.

Under the new VSP regulations , there are specific legal criteria which determine whether a service meets the definition of a VSP, and whether it falls within UK jurisdiction. Platforms must self-assess whether they meet these criteria, and those that do will be formally required to notify to Ofcom between 6 April and 6 May 2021. Following consultation, we have today published our final guidance to help service providers to make this assessment.



An Orwellian Society...

The Non-Crime Hate Incident is a recent and chilling restriction to free speech.

Link Here10th March 2021

The Non-Crime Hate Incident (NCHI) is a recent and chilling restriction to free speech. An NCHI means any non-crime incident which is perceived by the victim or any other person to be motivated by hostility or prejudice. NCHIs can be recorded against someone's name without their knowledge. NCHIs were never created by government, but by a limited company called the College of Policing (CoP), a quango which provides guidance to police forces. Parliament has never passed legislation calling for NCHIs. This is speech-control and surveillance created away from democratic oversight.

See An Orwellian Society. report [pdf] from



Google's FLoC Is a Terrible Idea...

Explaining Google's idea to match individuals to groups for targeting advertising. By Bennett Cyphers

Link Here10th March 2021

The third-party cookie is dying, and Google is trying to create its replacement.

No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet.

Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn't learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious--and potentially the most harmful.

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting.

Google's pitch to privacy advocates is that a world with FLoC (and other elements of the "privacy sandbox") will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between "old tracking" and "new tracking." It's not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.

We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web's biggest mistake. Ahead of us are two possible futures.

In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them--or leveraged to manipulate them--when they next open a tab.

In the other, each user's behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is "democratized" and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here's what I've been up to this week, please treat me accordingly.

Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.

What is FLoC?

In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT... the list goes on. Seriously. Each of the "bird" proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.

FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user's browsing habits, then use that information to assign its user to a "cohort" or group. Users with similar browsing habits--for some definition of "similar"--would be grouped into the same cohort. Each user's browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that's not a guarantee).

If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.

Google's proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user's machine, so there's no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
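The local grouping step can be illustrated with a toy version of SimHash. This is only a sketch of the general technique, not Google's implementation: the 8-bit output width, the MD5 hashing and the example domains are all illustrative assumptions.

```python
import hashlib

def simhash(domains, bits=8):
    # Toy SimHash over a set of visited domains (illustrative only).
    # Each domain's hash bits vote +1 or -1 on each output bit; the
    # sign of each tally becomes one bit of the final cohort-style hash.
    counts = [0] * bits
    for d in domains:
        h = int(hashlib.md5(d.encode()).hexdigest(), 16)
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if counts[i] > 0)

# Two similar browsing histories (differing by one domain) tend to land
# on nearby hashes, which is what makes cohort grouping possible.
a = simhash({"news.example", "shop.example", "mail.example"})
b = simhash({"news.example", "shop.example", "blog.example"})
```

Because the computation needs only the locally stored list of visited domains, it can run entirely inside the browser, matching the no-central-server property described above.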

According to the proposal, most of the specifics are still up in the air. The draft specification states that a user's cohort ID will be available via JavaScript, but it's unclear whether there will be any restrictions on who can access it, or whether the ID will be shared in any other ways. FLoC could perform clustering based on URLs or page content instead of domains; it could also use a federated learning-based system (as the name FLoC implies) to generate the groups instead of SimHash. It's also unclear exactly how many possible cohorts there will be. Google's experiment used 8-bit cohort identifiers, meaning that there were only 256 possible cohorts. In practice that number could be much higher; the documentation suggests a 16-bit cohort ID comprising 4 hexadecimal characters. The more cohorts there are, the more specific they will be; longer cohort IDs will mean that advertisers learn more about each user's interests and have an easier time fingerprinting them.

One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week's browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.

New privacy problems

FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks.


The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user's browser to create a unique, stable identifier for that browser. EFF's Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others', the easier it is to fingerprint.

Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn't distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy--up to 8 bits, in Google's proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
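The size of that head start can be shown with back-of-the-envelope arithmetic. This is a sketch under stated assumptions: the population figure is illustrative, and cohorts are assumed to be evenly sized.

```python
import math

population = 300_000_000   # illustrative number of browsers, not a real figure
floc_bits = 8              # cohort ID size in Google's proof of concept

# Without FLoC, a fingerprinter must single you out of the whole population.
bits_needed_before = math.log2(population)

# An evenly-sized k-bit cohort leaves only population / 2**k candidates,
# so the cohort ID hands the tracker k bits of the work for free.
anonymity_set = population // 2 ** floc_bits
bits_needed_after = math.log2(anonymity_set)
```

With these numbers the anonymity set shrinks from 300 million to roughly 1.2 million; the tracker needs about 8 fewer bits of other entropy to assemble a unique fingerprint, which is exactly the head start described above.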

Google has acknowledged this as a challenge, but has pledged to solve it as part of the broader "Privacy Budget" plan it has to deal with fingerprinting long-term. Solving fingerprinting is an admirable goal, and its proposal is a promising avenue to pursue. But according to the FAQ, that plan is "an early stage proposal and does not yet have a browser implementation." Meanwhile, Google is set to begin testing FLoC as early as this month.

Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy--which is what FLoC is. Google should not create new fingerprinting risks until it's figured out how to deal with existing ones.

Cross-context exposure

The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user's cohort will necessarily reveal information about their behavior.

The project's Github page addresses this up front:

This API democratizes access to some information about an individual's general browsing history (and thus, general interests) to any site that opts into it. ... Sites that know a person's PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual's interests may eventually become public.

As described above, FLoC cohorts shouldn't work as identifiers by themselves. However, any company able to identify a user in other ways--say, by offering "log in with Google" services to sites around the Internet--will be able to tie the information it learns from FLoC to the user's profile.

Two categories of information may be exposed in this way:

  • Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites.

  • General information about demographics or interests. Observers may learn that, in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.

This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.

You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there's no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn't need to know whether you've recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.

Beyond privacy

FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we've shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC's core objective is at odds with other civil liberties.

The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes.

Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history -- or characteristics systematically associated with it -- enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.

Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers' ability to target people in "sensitive interest categories." However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.

Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics -- demographics like gender, ethnicity, age, and income; "big 5" personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.

Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users' browsers to group themselves again.

This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users' race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other "sensitive categories" are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.
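Google has not published the mechanics of such an audit, but a toy version of the correlation check it implies is easy to sketch. In this hypothetical Python fragment (the function name, the threshold, and the data shapes are all assumptions, not part of Google's proposal), a cohort is flagged when its rate of membership in one sensitive category is far above the population baseline:

```python
from collections import defaultdict

def flag_correlated_cohorts(assignments, sensitive_users,
                            baseline_rate, factor=2.0):
    """assignments: {user_id: cohort_id}; sensitive_users: set of user_ids
    in one sensitive category. Returns the cohorts whose rate of sensitive
    membership exceeds `factor` times the population baseline."""
    size = defaultdict(int)
    hits = defaultdict(int)
    for user, cohort in assignments.items():
        size[cohort] += 1
        if user in sensitive_users:
            hits[cohort] += 1
    return {c for c in size if hits[c] / size[c] > factor * baseline_rate}

# Synthetic example: cohort "A" heavily over-represents the sensitive group
assignments = {f"u{i}": ("A" if i < 10 else "B") for i in range(20)}
sensitive = {"u0", "u1", "u2", "u3", "u4", "u5", "u6", "u7", "u15"}
print(flag_correlated_cohorts(assignments, sensitive, baseline_rate=0.2))
# prints {'A'}  (cohort A is 8/10 sensitive vs a 0.2 baseline; B is 1/10)
```

The Sisyphean part follows directly: a check like this must be re-run for every sensitive axis, against every re-clustering, using exactly the sensitive data the system is supposed to avoid handling.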

In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won't be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings "mean"--what kinds of people they contain--through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability--after all, they aren't directly targeting protected categories, they're just reaching people based on behavior. And the whole system will be more opaque to users and regulators.

Google, please don't do this

We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC "the opposite of privacy-preserving technology." We hoped that the standards process would shed light on FLoC's fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a "95% effective" replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it's deploying the technology for a trial run. A small portion of Chrome users -- still likely millions of people -- will be (or have been) assigned to test the new technology.

Make no mistake, if Google does follow through on its plan to implement FLoC in Chrome, it will likely give everyone involved "options." The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for "transparency and user control," knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie -- the technology that Google helped extend well past its shelf life, making billions of dollars in the process.

It doesn't have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.

We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.



Offsite Article: Twitter vs Texas...

Link Here 10th March 2021
Full story: Internet Censorship in USA...Domain name seizures and SOPA
Twitter sues Texas Attorney General to avoid investigation into its censorship practices in silencing right wing speech

See article from



Offsite Article: US copyright law...

Link Here 10th March 2021
The Digital Copyright Act Will Chill Innovation and Harm The Internet

See article from



Age of nightmares...

ICO warns internet companies of the impending, impossible-to-comply-with Age Appropriate Design Code

Link Here 7th March 2021
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
A survey by the Information Commissioner's Office (ICO) shows that three quarters of businesses surveyed are aware of the impending Children's Code. The full findings will be published in May but initial analysis shows businesses are still in the preparation stages.

And with just six months to go until the code comes into force, the ICO is urging organisations and businesses to make the necessary but onerous changes to their online services and products.

The Children's Code sets out 15 standards organisations must meet to ensure that children's data is protected online. The code will apply to all the major online services used by children in the UK and includes measures such as providing default settings which ensure that children have access to online services whilst minimising data collection and use.

Details of the code were first published in June 2018 and UK Parliament approved it last year. Since then, the ICO has been providing support and advice to help organisations adapt their online services and products in line with data protection law.



Uncancelling classic movies...

US cable channel assembles a collection of movies that have offended the politically correct

Link Here 7th March 2021
The cable TV channel Turner Classic Movies has kicked off Reframed: Classic Films in the Rearview Mirror , a series where it says it examines the troubling and problematic aspects of the classics, which were released from the 1920s to the 1960s.

The 18 selected movies include Breakfast at Tiffany's, Psycho, Gone With the Wind, My Fair Lady, Stagecoach, The Jazz Singer and Seven Brides for Seven Brothers.

Several hosts will take turns holding roundtable introductions before the start of each movie where they will discuss the history and cultural context of the movie. They will also provide trigger warnings about depictions of racism, sexism, and LGBTQ issues.

Among the 'issues' will be white actors portraying non-white roles or blackface. This includes Mickey Rooney's performance as Mr Yunioshi in Breakfast at Tiffany's, Sam Jaffe playing the title role of Gunga Din, and Al Jolson donning blackface for The Jazz Singer.

For Psycho, the hosts talk about transgender identity in the film and the implications of equating gender fluidity and dressing in women's clothes with mental illness and violence.

During the My Fair Lady conversation, they talk about why the film adaptation has a less feminist ending than the stage play, and Henry Higgins' physical and psychological abuse of Eliza Doolittle.

For the discussion of Guess Who's Coming to Dinner, the hosts will look at aspects of black actor Sidney Poitier's films that are oriented primarily to white audiences.

The movie list is:

  • Gone With the Wind - Romanticized portrayal of antebellum life before the Civil War and portrayal of slaves as happy and content
  • Seven Brides for Seven Brothers - Sexism controversy over plot of the film of kidnapping women and forcibly confining them to marry
  • Rope - Portrayal of two queer characters who have just committed a murder
  • The Four Feathers - Racist views including the term Fuzzy Wuzzies to denote Arabs and take on British imperialism in Arabia
  • Woman of the Year - Sexism and the idea that a woman can only be successful in the workplace if she lacks femininity
  • Guess Who's Coming to Dinner - Aspects of black actor Sidney Poitier's films that are oriented primarily to white audiences
  • Gunga Din - White actor Sam Jaffe playing the title role of Gunga Din, who is an Indian character
  • Sinbad, the Sailor - White actor Douglas Fairbanks Jr playing the Arab role of Sinbad and portrayal of Arabs
  • The Jazz Singer - Al Jolson's blackface routine
  • The Searchers - White actor Henry Brandon playing a Native American character and the abuse of a Native American woman by a white character
  • Breakfast at Tiffany's - White actor Mickey Rooney's portrayal of Japanese character Mr Yunioshi
  • Swing Time - Fred Astaire's blackface routine
  • Stagecoach - Portrayal of Native Americans and their being seen as a threat
  • Tarzan, the Ape Man - Portrayal of Africans including an attack by 'a tribe of aggressive dwarfs'
  • My Fair Lady - Sexism and Henry Higgins' physical and psychological abuse of Eliza Doolittle
  • The Children's Hour - Portrayal of LGBTQ issues when two female teachers are accused of 'sinful, sexual knowledge of each other'
  • Psycho - Transgender identity and the implications of equating gender fluidity and dressing in women's clothes with mental illness and violence



Constitutionally challenged...

US politicians queue up to censor the internet

Link Here 7th March 2021
Full story: Internet Censorship in USA...Domain name seizures and SOPA
US Republican state lawmakers are pushing for social media giants to face costly lawsuits for policing content on their websites, taking aim at a federal law that prevents internet companies from being sued for removing posts.

GOP (Grand Old Party) politicians in roughly two dozen states have introduced bills that would allow for civil lawsuits against platforms for the censorship of right-leaning posts.

Democrats, who have also called for greater scrutiny of big tech, are sponsoring the same measures in at least two states.

Experts argue the legislative proposals are doomed to fail while the federal law, Section 230 of the Communications Decency Act, is in place. They said state lawmakers are wading into unconstitutional territory by trying to interfere with the editorial policies of private companies.

Len Niehoff, a professor at the University of Michigan Law School, described the idea as a constitutional non-starter. He said:

If an online platform wants to have a policy that it will delete certain kinds of tweets, delete certain kinds of users, forbid certain kinds of content, that is in the exercise of their right as an information distributor. And the idea that you would create a cause of action that would allow people to sue when that happens is deeply problematic under the First Amendment.



'Democrats' reach for the Fox News off switch...

Congress Democrats are attempting to bully US networks into dropping Fox News and the like

Link Here 5th March 2021
Last week, two 'Democrat' members of Congress -- Anna Eshoo and Jerry McNerney -- sent letters to AT&T, Verizon, Roku, Amazon, Apple, Comcast, Charter, Dish, Cox, Altice, Alphabet and Disney-owned Hulu -- all the major cable carriers. They effectively demanded the carriers explain why they host conservative news networks such as Fox News, One America News Network (OAN) and Newsmax on their platforms. They wrote:

Experts have noted that the right-wing media ecosystem is much more likely to spread disinformation, lies, and half-truths. Right-wing media outlets, like Newsmax, One America News Network (OANN), and Fox News all aired misinformation about the November 2020 elections. For example, both Newsmax and OANN ran incendiary reports of false information following the elections and continue to support an angry and dangerous subculture [that] will continue to operate semi-openly. As a violent mob was breaching the doors of the Capitol, Newsmax's coverage called the scene a sort of a romantic idea. Fox News, meanwhile, has spent years spewing misinformation about American politics.


It is for these reasons we ask that you provide us with responses to the following questions about AT&T's policies toward content carried on U-verse, DirecTV, and AT&T TV by March 8, 2021:

  1. What moral or ethical principles (including those related to journalistic integrity, violence, medical information, and public health) do you apply in deciding which channels to carry or when to take adverse actions against a channel?

  2. Do you require, through contracts or otherwise, that the channels you carry abide by any content guidelines? If so, please provide a copy of the guidelines.

  3. How many of your subscribers tuned in to Fox News, Newsmax, and OANN on U-verse, DirecTV, and AT&T TV for each of the four weeks preceding the November 3, 2020 elections and the January 6, 2021 attacks on the Capitol? Please specify the number of subscribers that tuned in to each channel.

  4. What steps did you take prior to, on, and following the November 3, 2020 elections and the January 6, 2021 attacks to monitor, respond to, and reduce the spread of disinformation, including encouragement or incitement of violence by channels your company disseminates to millions of Americans? Please describe each step that you took and when it was taken.

  5. Have you taken any adverse actions against a channel, including Fox News, Newsmax, and OANN, for using your platform to disseminate disinformation related directly or indirectly to the November 3, 2020 elections, the January 6, 2021 Capitol insurrection, or COVID-19 misinformation? If yes, please describe each action, when it was taken, and the parties involved.

  6. Have you ever taken any actions against a channel for using your platform to disseminate any disinformation? If yes, please describe each action and when it was taken.

  7. Are you planning to continue carrying Fox News, Newsmax, and OANN on U-verse, DirecTV, and AT&T TV both now and beyond any contract renewal date? If so, why?



Woke nonsense...

The estate of Dr Seuss cancels six books over ludicrous logic that would make the Cat in the Hat proud

Link Here 5th March 2021
Books by Dr. Seuss have flooded Amazon's U.S. bestseller list after it was announced that six of the author's publications were being cancelled over ludicrous woke claims of racist imagery.

The Cat in the Hat is currently the bestselling book on Amazon's U.S. store, closely followed by One Fish Two Fish Red Fish Blue Fish and Green Eggs and Ham , along with several other titles by the late Theodor Seuss Geisel. In total, 15 Dr. Seuss publications were in Amazon's top 20 list on Friday morning.

This comes after Dr. Seuss Enterprises, the business running the late author's estate, said Tuesday it has cancelled publication and licensing of six of his books:
  • And to Think That I Saw It on Mulberry Street
  • If I Ran the Zoo
  • McElligot's Pool
  • On Beyond Zebra!
  • Scrambled Eggs Super!
  • The Cat's Quizzer.

Dr. Seuss Enterprises claimed in a statement that these books portray people in ways that are hurtful and wrong.

Chief of Woke Staff Joe Biden notably left any mention of Dr. Seuss out of his Read Across America Day proclamation on Monday. Former Presidents Donald Trump and Barack Obama both mentioned Dr. Seuss in their previous speeches.


Comment: Why the cancellation of Dr Seuss matters

5th March 2021. See article from

An example of the ludicrous claims about racism in Dr Seuss:

As it happens, until recently Dr Seuss books have had a reputation for encouraging tolerance. The Sneetches and Other Stories , in particular, has been praised for its messages about overcoming differences. But today's anti-racist campaigners find even the Sneetches problematic. According to Learning for Justice , an 'education' advisory group:

As a result, they accept one another. This message of "acceptance" does not acknowledge structural power imbalances. It doesn't address the idea that historical narratives impact present-day power structures. And instead of encouraging young readers to recognise and take action against injustice, the story promotes a race-neutral approach.

You read that right. The Sneetches are apparently racist because they look at the world in a race-neutral way, and because they learn to accept one another.



Offsite Article: Banning online junk food ads helps no one...

Link Here 5th March 2021
The health benefits are tiny, but the economic damage will be huge. By Jason Reed

See article from



Censorship warning...

Twitter introduces a five-strike rule for censoring covid tweets that Twitter does not like

Link Here 3rd March 2021
Full story: Twitter Censorship...Twitter offers country by country take downs
Twitter has announced a new strikes system where accounts that repeatedly post what it deems to be COVID-19 "misinformation" will be permanently banned from the site.

Under the new system , users that tweet what Twitter deems to be a "high-severity" violation of its COVID-19 misleading information policy will be temporarily locked out of their accounts, forced to delete the tweets, and given two strikes.

Users that tweet rule-breaking content that isn't deemed to be a high severity violation of the policy will have their tweets labeled and will be given a strike if the labeled tweet is "determined to be harmful." Labeled tweets may also be shadow banned by having their visibility reduced, have their engagement metrics disabled, display a warning when people attempt to share or like the tweet, and contain a link to a curated landing page or Twitter's policies.

Accounts that have multiple strikes will be subject to the following sanctions:

  • Two or three strikes: 12-hour account lock
  • Four strikes: 7-day account lock
  • Five or more strikes: Permanent suspension
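The escalation ladder above is simple enough to express as code. A minimal Python sketch follows; the function name is made up for illustration, and the zero/one-strike behaviour is an assumption, since the article only lists sanctions from two strikes upward:

```python
def sanction(strikes: int) -> str:
    """Map a strike count to the sanction in Twitter's published ladder."""
    if strikes >= 5:
        return "permanent suspension"
    if strikes == 4:
        return "7-day account lock"
    if strikes >= 2:
        return "12-hour account lock"
    return "no account-level lock"  # assumed for 0-1 strikes

print(sanction(2), "|", sanction(4), "|", sanction(7))
# prints: 12-hour account lock | 7-day account lock | permanent suspension
```

Note that, per the policy described above, a single "high-severity" tweet already carries two strikes, so one such violation is enough to trigger the first lock.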



Paranoia Agent...

MVD announces that its forthcoming Blu-ray will be uncut after previous BBFC cuts have been waived

Link Here 3rd March 2021

Paranoia Agent is a 2004 Japan TV animation horror
Starring Shôzô Îzuka, Toshihiko Seki and Michael McConnohie IMDb

Seemingly unconnected citizens of Tokyo are targeted for bludgeoning by a boy with a golden baseball bat. As detectives try to link the victims, they discover that following the assaults, the victims' lives have improved in some way.

The previous DVD release of Paranoia Agent was notably censored by the BBFC when, in 2006, they demanded cuts to an adults-only rated episode over a scene depicting a young girl's unsuccessful attempt to hang herself.

Now the distributor MVD has just tweeted that an April 19th 2021 Blu-ray will be uncut:

We're happy to announce our upcoming Collector's Edition Blu-Ray release of Satoshi Kon's Paranoia Agent will be fully uncut!

Previous BBFC cuts from 2006

See article from

The BBFC requested the compulsory cut to the eighth part of Paranoia Agent, called Happy Family Planning. This is a mostly self-contained story, a macabre black comedy. The story involves three people who meet online, intending to kill themselves together. Two of the trio are adult men, but they're shocked to realise the third person is a preteen girl, who seems to regard suicide as a game.

In one scene, the three characters try to hang themselves in the mountains. In the unedited episode, the girl is shown bouncing happily up and down with the rope around her neck, chanting "Swing! Swing!" (finally, she breaks the tree branch, sending them all tumbling). However, this scene, lasting 80 seconds, was cut entirely from the UK DVD of the episode, following the ruling by the BBFC.

Responding to inquiries about the cut in the past, the BBFC has made its reasoning clear. Although the relevant DVD was rated 18, the BBFC still judged that the scene of a child enjoying being hanged was irresponsible and harmful, and that underage children could be influenced by the scene.



Flowery Twats...

The BBC continues to censor Fawlty Towers

Link Here 3rd March 2021
Fawlty Towers is set to be re-aired on the BBC, but racial slurs made by characters such as Major Gowen will be omitted from the show.

The offending scene is when Major Gowen discusses the difference between 'niggers' and 'wogs'.

Last year writer and star John Cleese, who plays Basil Fawlty, branded Beeb bosses gutless for temporarily removing the episode with the Major's remarks from the UKTV streaming platform. He argued that the remarks were fine in the context of the major being shown as an old fossil:

We were not supporting his views, we were making fun of them.

A separate episode, The Anniversary , starts with the Fawlty Towers sign re-arranged into the anagram 'Flowery Twats', which is also cut.

The series is available on iPlayer from Monday.



Ethical snooping...

Google promises not to replace cookie based web browsing snooping with another privacy invasive method of snooping

Link Here 3rd March 2021
David Temkin, Google's Director of Product Management, Ads Privacy and Trust has been commenting on Google's progress in reducing personalised advertising based on snooping of people's browsing history. Temkin commented:

72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or other companies, and 81% say that the potential risks they face because of data collection outweigh the benefits, according to a study by Pew Research Center. If digital advertising doesn't evolve to address the growing concerns people have about their privacy and how their personal identity is being used, we risk the future of the free and open web.

That's why last year Chrome announced its intent to remove support for third-party cookies, and why we've been working with the broader industry on the Privacy Sandbox to build innovations that protect anonymity while still delivering results for advertisers and publishers. Even so, we continue to get questions about whether Google will join others in the ad tech industry who plan to replace third-party cookies with alternative user-level identifiers. Today, we're making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products.

We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not -- like PII [Personally Identifying Information] graphs based on people's email addresses. We don't believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions, and therefore aren't a sustainable long term investment. Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers.

People shouldn't have to accept being tracked across the web in order to get the benefits of relevant advertising. And advertisers don't need to track individual consumers across the web to get the performance benefits of digital advertising.

Advances in aggregation, anonymization, on-device processing and other privacy-preserving technologies offer a clear path to replacing individual identifiers. In fact, our latest tests of FLoC [Federated Learning of Cohorts] show one way to effectively take third-party cookies out of the advertising equation and instead hide individuals within large crowds of people with common interests. Chrome intends to make FLoC-based cohorts available for public testing through origin trials with its next release this month, and we expect to begin testing FLoC-based cohorts with advertisers in Google Ads in Q2. Chrome also will offer the first iteration of new user controls in April and will expand on these controls in future releases, as more proposals reach the origin trial stage, and they receive more feedback from end users and the industry.

This points to a future where there is no need to sacrifice relevant advertising and monetization in order to deliver a private and secure experience.



Ethical snooping...

GCHQ discusses the ethics of using AI and mass snooping to analyse people's internet use to detect both serious crime and no doubt political incorrectness

Link Here 1st March 2021
The UK snooping agency GCHQ has published a paper discussing the ethics of using AI for analysing internet posts. GCHQ note that the technology will be put at the heart of its operations.

The paper, Ethics of AI: Pioneering a New National Security, comments on the technology as used to assist its analysts in spotting patterns hidden inside large -- and fast-growing -- amounts of data, including:

  • trying to spot fake online messages used by other states spreading disinformation
  • mapping international networks engaged in human or drug trafficking
  • finding child sex abusers hiding their identities online

But it says it cannot predict human behaviour such as moving towards executing a terrorist attack.

GCHQ is now detailing how it will ensure it uses AI fairly and transparently, including:

  • an AI ethical code of practice
  • recruiting more diverse talent to help develop and govern its use

The BBC comments that this may be a sign that the agency wants to avoid a repeat of the criticism that people were unaware of how it used data, following whistleblower Edward Snowden's revelations.

GCHQ reports that a growing number of states are using AI to automate the production of false content to affect public debate, including "deepfake" video and audio. The technology can individually target and personalise this content or spread it through chatbots or by interfering with social-media algorithms. But it could also help GCHQ detect and fact-check it and identify "troll farms" and botnet accounts.

GCHQ speaks of capabilities in terms of detecting child abuse, where functionalities include:

  • help analyse evidence of grooming in chat rooms
  • track the disguised identities of offenders across multiple accounts
  • discover hidden people and illegal services on the dark web
  • help police officers infiltrate rings of offenders
  • filter content to prevent analysts from being unnecessarily exposed to disturbing imagery

and on trafficking:

  • mapping the international networks that enable trafficking - identifying individuals, accounts and transactions
  • "following the money" - analysing complex transactions, possibly revealing state sponsors or links to terrorist groups
  • bringing together different types of data - such as imagery and messaging - to track and predict where illegal cargos are being delivered

No doubt these functionalities will also be used for more mundane reasons.



User control...

Twitter is set to introduce a voluntary censored 'Safety Mode' that blocks accounts that Twitter thinks you won't like

Link Here 1st March 2021
Full story: Twitter Censorship...Twitter offers country by country take downs
Twitter is planning to roll-out a new feature that would allow users to auto-block or mute certain accounts that Twitter deems as abusive.

The new safety mode will further the company's efforts in censoring under the guise of protecting users from supposedly offensive content.

Twitter announced the new feature during its Analyst Day presentation . Documents from the presentation suggest that the feature will be available through Safety Mode, a setting the company is yet to roll out.

Twitter explained that once a user turns on the feature, it automatically blocks accounts that appear to break the Twitter Rules, and mute accounts that might be using insults, name-calling, strong language, or hateful remarks.

Twitter already allows users to block or mute accounts that the user selects but the latest censorship extension allows Twitter itself to do the selecting.



Offsite Article: A dangerous rap...

Link Here 1st March 2021
Like Pablo Hasél, Spain wants me jailed for rap lyrics. By Valtònyc

See article from
