
Facebook Censorship


Facebook quick to censor


 

Censorship for right reasons...

Italian court finds in favour of far-right party which claimed that being banned by Facebook is interfering in politics


Link Here 15th December 2019
Full story: Facebook Censorship...Facebook quick to censor
A civil court in Rome has ruled that Facebook must immediately reactivate the account of the Italian neo-fascist party CasaPound Italia and pay the group 800 euro for each day the account has been closed.

Facebook shut the party's account, which had 240,000 followers, along with its Instagram page in early September. A Facebook spokesperson told the Ansa news agency at the time: Persons or organisations that spread hatred or attack others on the basis of who they are will not have a place on Facebook and Instagram.

Facebook must also pay 15,000 euro in legal costs. The judge reportedly ruled that without Facebook, the party was excluded (or extremely limited) from the Italian political debate.

A Facebook spokesperson said the company was aware of the court's decision and was reviewing it carefully.

 

 

Facebook excuses politicians from telling the truth in advertising...

If disinformation were to be banned there would be no politicians, no religion, no Christmas and no railway timetables


Link Here 7th October 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has quietly rescinded a policy banning false claims in advertising, creating a specific exemption that leaves political adverts unconstrained in how they can mislead or deceive.

Facebook had previously banned adverts containing deceptive, false or misleading content, a much stronger restriction than its general rules around Facebook posts. But, as reported by the journalist Judd Legum, in the last week the rules have narrowed considerably, only banning adverts that include claims debunked by third-party fact-checkers, or, in certain circumstances, claims debunked by organisations with particular expertise.

A separate policy introduced by the social network recently declared opinion pieces and satire ineligible for verification, including any website or page with the primary purpose of expressing the opinion or agenda of a political figure. The end result is that any direct statement from a candidate or campaign cannot be fact-checked and so is automatically exempted from policies designed to prevent misinformation. (After the publication of this story, Facebook clarified that only politicians currently in office or running for office, and political parties, are exempt: other political adverts still need to be true.)

 

 

Words speak louder than facts...

Facebook justifiably decides that fact checking politicians isn't the way to go and that politicians at least will be given the right to free speech


Link Here 25th September 2019
Full story: Facebook Censorship...Facebook quick to censor
Nick Clegg, the Facebook VP of Global Affairs and Communications, writes in a blog post:

Fact-Checking Political Speech

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don't believe, however, that it's an appropriate role for us to referee political debates and prevent a politician's speech from reaching its audience and being subject to public debate and scrutiny. That's why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.

Newsworthiness Exemption

Facebook has had a newsworthiness exemption since 2016. This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads -- if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards.

 

 

Offsite Article: Combating Hate and Extremism...


Link Here 18th September 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook reports on how it is developing capabilities to combat terrorism and hateful content

See article from newsroom.fb.com

 

 

Facebook's Independent Oversight Board...

Facebook sets out plans for a top level body to decide upon censorship policy and to arbitrate on cases brought by Facebook, and later, Facebook users


Link Here 18th September 2019
Full story: Facebook Censorship...Facebook quick to censor

Mark Zuckerberg has previously described plans to create a high level oversight board to decide upon censorship issues with a wider consideration than just Facebook interests. He suggested that national government interests should be considered at this top level of policy making. Zuckerberg wrote:

We are responsible for enforcing our policies every day and we make millions of content decisions every week. But ultimately I don't believe private companies like ours should be making so many important decisions about speech on our own. That's why I've called for governments to set clearer standards around harmful content. It's also why we're now giving people a way to appeal our content decisions by establishing the independent Oversight Board.

If someone disagrees with a decision we've made, they can appeal to us first, and soon they will be able to further appeal to this independent board. The board's decision will be binding, even if I or anyone at Facebook disagrees with it. The board will use our values to inform its decisions and explain its reasoning openly and in a way that protects people's privacy.

The board will be an advocate for our community -- supporting people's right to free expression, and making sure we fulfill our responsibility to keep people safe. As an independent organization, we hope it gives people confidence that their views will be heard, and that Facebook doesn't have the ultimate power over their expression. Just as our Board of Directors keeps Facebook accountable to our shareholders, we believe the Oversight Board can do the same for our community.

As well as a detailed charter, Facebook provided a summary of the design of the board.

Along with the charter, we are providing a summary which breaks down the elements from the draft charter, the feedback we've received, and the rationale behind our decisions in relation to both. Many issues have spurred healthy and constructive debate. Four areas in particular were:

  • Governance: The majority of people we consulted supported our decision to establish an independent trust. They felt that this could help ensure the board's independence, while also providing a means to provide additional accountability checks. The trust will provide the infrastructure to support and compensate the Board.

  • Membership: We are committed to selecting a diverse and qualified group of 40 board members, who will serve three-year terms. We agreed with feedback that Facebook alone should not name the entire board. Therefore, Facebook will select a small group of initial members, who will help with the selection of additional members. Thereafter, the board itself will take the lead in selecting all future members, as explained in this post. The trust will formally appoint members.

  • Precedent: Regarding the board, the charter confirms that panels will be expected, in general, to defer to past decisions. This reflects the feedback received during the public consultation period. The board can also request that its decision be applied to other instances or reproductions of the same content on Facebook. In such cases, Facebook will do so, to the extent technically and operationally feasible.

  • Implementation: Facebook will promptly implement the board's content decisions, which are binding. In addition, the board may issue policy recommendations to Facebook, as part of its overall judgment on each individual case. This is how it was envisioned that the board's decisions will have lasting influence over Facebook's policies, procedures and practices.

Process

Both Facebook and its users will be able to refer cases to the board for review. For now, the board will begin its operations by hearing Facebook-initiated cases. The system for users to initiate appeals to the board will be made available over the first half of 2020.

Over the next few months, we will continue testing our assumptions and ensuring the board's operational readiness. In addition, we will focus on sourcing and selecting of board members, finalizing the bylaws that will complement the charter, and working toward having the board deliberate on its first cases early in 2020.

 

 

Offsite Article: Government interference in the commercial arrangements between large companies and Facebook...


Link Here 16th September 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has some strong words for an Australian government inquiry looking into ideas to censor the internet

See article from businessinsider.com.au

 

 

Concerned friends...

Facebook reports on its policies and resources to prevent suicide and self-harm


Link Here 12th September 2019
Full story: Facebook Censorship...Facebook quick to censor
Today, on World Suicide Prevention Day, we're sharing an update on what we've learned and some of the steps we've taken in the past year, as well as additional actions we're going to take, to keep people safe on our apps, especially those who are most vulnerable.

Earlier this year, we began hosting regular consultations with experts from around the world to discuss some of the more difficult topics associated with suicide and self-injury. These include how we deal with suicide notes, the risks of sad content online and newsworthy depictions of suicide. Further details of these meetings are available on Facebook's new Suicide Prevention page in our Safety Center.

As a result of these consultations, we've made several changes to improve how we handle this content. We tightened our policy around self-harm to no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm, even when someone is seeking support or expressing themselves to aid their recovery. On Instagram, we've also made it harder to search for this type of content and kept it from being recommended in Explore. We've also taken steps to address the complex issue of eating disorder content on our apps by tightening our policy to prohibit additional content that may promote eating disorders. And with these stricter policies, we'll continue to send resources to people who post content promoting eating disorders or self-harm, even if we take the content down. Lastly, we chose to display a sensitivity screen over healed self-harm cuts to help avoid unintentionally promoting self-harm.

And for the first time, we're also exploring ways to share public data from our platform on how people talk about suicide, beginning with providing academic researchers with access to the social media monitoring tool, CrowdTangle. To date, CrowdTangle has been available primarily to help newsrooms and media publishers understand what is happening on Facebook. But we are eager to make it available to two select researchers who focus on suicide prevention to explore how information shared on Facebook and Instagram can be used to further advancements in suicide prevention and support.

In addition to all we are doing to find more opportunities and places to surface resources, we're continuing to build new technology to help us find and take action on potentially harmful content, including removing it or adding sensitivity screens. From April to June of 2019, we took action on more than 1.5 million pieces of suicide and self-injury content on Facebook and found more than 95% of it before it was reported by a user. During that same time period, we took action on more than 800 thousand pieces of this content on Instagram and found more than 77% of it before it was reported by a user.

To help young people safely discuss topics like suicide, we're enhancing our online resources by including Orygen's #chatsafe guidelines in Facebook's Safety Center and in resources on Instagram when someone searches for suicide or self-injury content.

The #chatsafe guidelines were developed together with young people to provide support to those who might be responding to suicide-related content posted by others or for those who might want to share their own feelings and experiences with suicidal thoughts, feelings or behaviors.

 

 

Quality control...

Facebook introduces new censorship for private groups and labels it as 'Group Quality'


Link Here 16th August 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has introduced a new censorship tool known as Group Quality to evaluate private groups and scrutinize them for any 'problematic content'.

For a long time now, Facebook has been facing heat from the media over claims that the private groups feature harbors extremists and spreads 'fake news'. In response, the company published an article on newsroom.fb.com introducing a new feature known as Group Quality:

Being in a private group doesn't mean that your actions should go unchecked. We have a responsibility to keep Facebook safe, which is why our Community Standards apply across Facebook, including in private groups. To enforce these policies, we use a combination of people and technology -- content reviewers and proactive detection. Over the last few years, we've invested heavily in both, including hiring more than 30,000 people across our safety and security teams.

Within this, a specialized team has been working on the Safe Communities Initiative: an effort that started two years ago with the goal of protecting people using Facebook Groups from harm. Made up of product managers, engineers, machine learning experts and content reviewers, this team works to anticipate the potential ways people can do harm in groups and develops solutions to minimize and prevent it. As the head of Facebook Groups, I want to explain how we're making private groups safer by focusing on three key areas: proactive detection, tools for admins, and transparency and control for members.

On the plus side, Facebook has updated the settings used to define the access and visibility of groups, which are much clearer than previous incarnations.

Critics say that Facebook's move will not curb misinformation and fake news but may, on the contrary, push it deeper underground, making it harder for censors to filter or remove such content from the site.

 

 

No right to know...

Facebook taken to court in Poland after it censored information about a nationalist rally in Warsaw


Link Here 7th June 2019
Full story: Facebook Censorship...Facebook quick to censor
A Polish court has held a first hearing in a case brought against Facebook by a historian who says that Facebook engaged in censorship by suspending accounts that had posted about a nationalist rally in Warsaw.

Historian Maciej Swirski has complained that Facebook in 2016 suspended a couple of accounts that provided information on an independence day march organised by far-right groups. Swirski told AFP:

I'm not a member of the National Movement, but as a citizen I wanted to inform myself on the event in question and I was blocked from doing so,

This censorship doesn't concern my own posts, but rather content that I had wanted to see.

Facebook's lawyers argued that censorship can only be exercised by the state and that a private media firm is not obligated to publish any particular content.

The next court hearing will take place on October 30.

 

 

Updated: But isn't this the gender equivalent of 'blackface'?...

Artist Spencer Tunick and the National Coalition Against Censorship organise a Facebook-challenging array of male nipples in New York


Link Here 6th June 2019
Full story: Facebook Censorship...Facebook quick to censor
Photographer Spencer Tunick and the National Coalition Against Censorship organised a nude art action outside Facebook's New York headquarters on June 2, when some 125 people posed naked in front of Facebook's building as Tunick photographed them as part of the NCAC's #WeTheNipple campaign.

In response Facebook agreed to convene a group--including artists, art educators, museum curators, activists, and employees--to consider new nudity guidelines for images posted to its social-media platforms.

The NCAC said it will collaborate with Facebook in selecting participants for a discussion to look into issues related to nude photographic art, ways that censorship impacts artists, and possible solutions going forward.

However before artists get their expectations up, they should know that it is standard policy that whenever Facebook get caught out censoring something, they always throw their arms up in feigned horror, apologise profusely and say they will do better next time.

They never do!

 

 

Just to recap, free speech once made America great...

Facebook censors artist who herself rather hatefully reworks 'Make America Great Again' hats into symbols of hate


Link Here 28th May 2019
Full story: Facebook Censorship...Facebook quick to censor
An artist who redesigns President Trump's Make America Great Again (MAGA) hats into recognizable symbols of hate speech says she has been banned from Facebook.

Kate Kretz rips apart the iconic red campaign hat and resews it to look like other symbols, such as a Nazi armband or a Ku Klux Klan hood.

It all seems a bit hateful and inflammatory though, sneering at the people who choose to wear the caps. Hopefully the cap wearers will recall that free speech is part of what once made America great.

The artist said Facebook took down an image of the reimagined Nazi paraphernalia for violating community standards.

She appealed the decision and labeled another image with text clarifying that the photo was of a piece of art, but her entire account was later disabled. Kretz said:

I understand doing things for the greater good. However, I think artists are a big part of Facebook's content providers, and they owe us a fair hearing.

 

 

#WeTheNipple...

National Coalition Against Censorship organises nude nipples event with photographer Spencer Tunick


Link Here 16th May 2019
Full story: Facebook Censorship...Facebook quick to censor

To challenge online censorship of art featuring naked bodies or body parts, photographer Spencer Tunick, in collaboration with the National Coalition Against Censorship, will stage a nude art action in New York on June 2. The event will bring together 100 undressed participants at an as-yet-undisclosed location, and Tunick will photograph the scene and create an installation using donated images of male nipples.

Artists Andres Serrano, Paul Mpagi Sepuya, and Tunick have given photos of their own nipples to the cause, as have Bravo TV personality Andy Cohen, Red Hot Chili Peppers drummer Chad Smith, and actor/photographer Adam Goldberg.

In addition, the National Coalition Against Censorship has launched a #WeTheNipple campaign through which Instagram and Facebook users can share their experiences with censorship and advocate for changes to the social media platforms' guidelines related to nudity.

 

 

Updated: Right on censorship...

Trump to monitor the political censorship of the right by social media


Link Here 13th May 2019
Full story: Facebook Censorship...Facebook quick to censor

President Trump has threatened to monitor social-media sites for their censorship of American citizens. He was responding to Facebook permanently banning figures and organizations from the political right. Trump tweeted:

I am continuing to monitor the censorship of AMERICAN CITIZENS on social media platforms. This is the United States of America -- and we have what's known as FREEDOM OF SPEECH! We are monitoring and watching, closely!!

On Thursday, Facebook announced it had permanently banned users including Louis Farrakhan, the founder of the Nation of Islam, along with far-right figures Milo Yiannopoulos, Laura Loomer and Alex Jones, the founder of Infowars. The tech giant removed their accounts, fan pages and affiliated groups on Facebook as well as its photo-sharing service Instagram, claiming that their presence on the social networking sites had become dangerous.

For his part, President Trump repeatedly has accused popular social-networking sites of exhibiting political bias, and threatened to regulate Silicon Valley in response. In a private meeting with Twitter CEO Jack Dorsey last month, Trump repeatedly raised his concerns that the company has removed some of his followers.

On Friday, Trump specifically tweeted he was surprised about Facebook's decision to ban Paul Joseph Watson, a YouTube personality who has served as editor-at-large of Infowars.

Update: Texas bill would allow state to sue social media companies like Facebook and Twitter that censor free speech

13th May 2019. See article from texastribune.org

A bill before the Texas Senate seeks to prevent social media platforms like Facebook and Twitter from censoring users based on their viewpoints. Supporters say it would protect the free exchange of ideas, but critics say the bill contradicts a federal law that allows social media platforms to regulate their own content.

The measure -- Senate Bill 2373 by state Sen. Bryan Hughes -- would hold social media platforms accountable for restricting users' speech based on personal opinions. Hughes said the bill applies to social media platforms that advertise themselves as unbiased but still censor users. The Senate State Affairs Committee unanimously approved the bill last week. The Texas Senate approved the bill on April 25 in an 18-12 vote. It now heads to the House.

 

 

The right to be silent...

Facebook bans several UK far right groups


Link Here 19th April 2019
Full story: Facebook Censorship...Facebook quick to censor

Facebook has banned far-right groups including the British National Party (BNP) and the English Defence League (EDL) from having any presence on the social network. The banned groups, which also include Knights Templar International, Britain First and the National Front as well as key members of their leadership, have been removed from both Facebook and Instagram.

Facebook said it uses an extensive process to determine which people or groups it designates as dangerous, using signals such as whether they have used hate speech, and called for or directly carried out acts of violence against others based on factors such as race, ethnicity or national origin.

Offsite comment: How to fight the new fascism

19th April 2019. See article from spiked-online.com by Andrew Doyle

This week we have seen David Lammy doubling down on his ludicrous comparison of the European Research Group with the Nazi party, and Chris Key in the Independent calling for UKIP and the newly formed Brexit Party to be banned from television debates. It is clear that neither Key nor Lammy have a secure understanding of what far right actually means and, quite apart from the distasteful nature of such political opportunism, their strategy only serves to generate the kind of resentment upon which the far right depends.

Offsite comment: Facebook is calling for Centralized Censorship. That Should Scare You

19th April 2019. See article from wired.com by Emma Llansó

If we're going to have coherent discussions about the future of our information environment, we--the public, policymakers, the media, website operators--need to understand the technical realities and policy dynamics that shaped the response to the Christchurch massacre. But some of these responses have also included ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet.

 

 

Updated: Right wronged...

Facebook censors Tommy Robinson's page


Link Here 27th February 2019
Full story: Facebook Censorship...Facebook quick to censor
Tommy Robinson has been permanently banned from Facebook and sister website Instagram. In a blogpost, Facebook said:

When ideas and opinions cross the line and amount to hate speech that may create an environment of intimidation and exclusion for certain groups in society -- in some cases with potentially dangerous offline implications -- we take action. Tommy Robinson's Facebook page has repeatedly broken these standards, posting material that uses dehumanizing language and calls for violence targeted at Muslims. He has also behaved in ways that violate our policies around organized hate.

Robinson is already banned from Twitter and the decision to cut him off from Instagram and Facebook will leave him reliant on YouTube as the only major online platform to provide him with a presence.

The ban comes a month after Facebook issued a final written warning against Robinson, warning him that he would be removed from its platform permanently if he continued to break the company's hate speech policies.

Mainstream outlets have struggled to deal with Robinson. When he was interviewed by Sky News last year, Robinson responded by uploading an unedited video of the discussion showing that Sky News did in fact mislead viewers by mixing and matching questions to answers to make Robinson look bad. The video became an online success and was shared far more widely online than the original interview.

Robinson adopted a similar tactic with the BBC's Panorama, which is investigating the far-right activist. Two weeks ago, Robinson agreed to be interviewed by the programme, only to turn the tables on reporter John Sweeney by revealing he had sent an associate undercover to film the BBC reporter.

Several other accounts were removed from Facebook on Tuesday, including one belonging to former Breitbart London editor Raheem Kassam.

Update: BBC receives complaints about Panorama

27th February 2019.  See  article from bbc.co.uk

Complaint

We received complaints following the third party release of secretly recorded material related to a BBC Panorama investigation.

BBC Response

 BBC Panorama is investigating Tommy Robinson, whose real name is Stephen Yaxley-Lennon. The BBC strongly rejects any suggestion that our journalism is faked or biased. Any programme we broadcast will adhere to the BBC's strict editorial guidelines. BBC Panorama's investigation will continue.

John Sweeney made some offensive and inappropriate remarks whilst being secretly recorded, for which he apologises. The BBC has a strict expenses policy and the drinks bill in this video was paid for in full by John.

Offsite Comment: Why Tommy Robinson should not be banned

27th February 2019. See  article from spiked-online.com by Brendan O'Neill

Facebook and Instagram's ban confirms that corporate censorship is out of control.

 

 

Wrongs righted...

Facebook restores Russia Today page that it recently censored


Link Here 26th February 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has restored several RT-linked pages a week after it blocked them without prior notice. The pages were only freed-up after their administrators posted data about their management and funding.

The Facebook pages of InTheNow, Soapbox, Back Then and Waste-Ed -- all operated by the Germany-based company Maffick Media -- were made accessible again as of Monday evening.

Facebook said in a statement at the time of the ban that it wants the pages' administrators to reveal their ties to Russia to their audience in the name of greater transparency. Facebook's measure was taken following a CNN report, which ludicrously accused the pages of concealing their ties to the Kremlin, even though their administrators had never actually made a secret of their relations to Ruptly and RT. In fact RT is, very blatantly, a propaganda channel supporting Russia.

Maffick CEO Anissa Naouai revealed that the social media giant agreed to unblock the pages, but only after their administration updated our 'About' section, in a manner NO other page has been required to do. The accounts now indeed feature information related to their funding and management, visible under the pages' logos.

 

 

Offsite Article: Banned in 1964 and again in 2019...


Link Here 7th February 2019
Full story: Facebook Censorship...Facebook quick to censor
An article on US censorship history about the 1964 obscenity case against the avant-garde movie Flaming Creatures features stills banned by Facebook in 2019

See article from indiewire.com

 

 

Facebook's chaotic censorship of a theatre poster...

Governments set a lot of store by censorship machines, but Facebook shows that its algorithms are actually rubbish


Link Here 24th January 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has banned the Theatre Royal Plymouth from using a picture to advertise one of their upcoming shows...because it had three small pictures of people showing some flesh.

The theatre featured a collaged image of the production by Phil Porter on their social media account but they received a message to say it breached Facebook's advertising policies. The three pictures that offended Facebook were:

  • one of a male torso,
  • one of breasts covered by a bra, and
  • one of a bottom.

Ironically, the show is all about internet moderators and online censorship.

God of Chaos is an outrageously funny and provocative new play about the world of online censorship. Written by Olivier-nominated playwright Phil Porter.

 

 

Offsite Article: Russia unliked and unfriended...


Link Here 19th January 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook censors the Russian propaganda news service Sputnik

See article from polygraph.info

 

 

Facing off the state censor...

Facebook refuses to bow to Vietnam's repressive new internet censorship law


Link Here 11th January 2019
Full story: Facebook Censorship...Facebook quick to censor
A repressive new internet censorship law took effect at the beginning of 2019. It demands that data about Vietnamese users is held locally in the country so that the Government is able to lodge censorship requests to remove content that it does not like, and to demand the local account details of users that it wants to pursue.

Facebook has refused to go along with some of these provisions and has already been threatened by the government, which claims that Facebook violated the new law by not removing what it says is anti-government content.

According to a report published by state-controlled media Vietnam News, the Ministry of Information and Communications (MIC) accused Facebook of allowing personal accounts to post slanderous content, anti-government sentiment and libel and defamation of individuals, organisations and State agencies. The report noted:

Facebook had not reportedly responded to a request to remove fanpages provoking activities against the State at the request of authorities.

The MIC reported that the government had sent emails repeatedly asking Facebook to remove distorted and misleading content, but the platform delayed removal of the content, saying it didn't violate its community standards. The MIC also said that Facebook refused to hand over account data it sought for the associated accounts.

Vietnam News said that authorities are still gathering evidence of Facebook's infringements.

 

 

Commented: Stay just good friends...

Or else Facebook will censor your advances, no matter how subtle


Link Here 8th December 2018
Full story: Facebook Censorship...Facebook quick to censor
Facebook has added a new category of censorship, sexual solicitation. It added the update on 15th October but no one really noticed until recently.

The company has quietly updated its content-moderation policies to censor implicit requests for sex. The expanded policy specifically bans sexual slang, hints of sexual roles, positions or fetish scenarios, and erotic art when mentioned with a sex act. Vague but suggestive statements such as looking for a good time tonight when soliciting sex are also no longer allowed.

The new policy reads:

15. Sexual Solicitation Policy

Do not post:

Content that attempts to coordinate or recruit for adult sexual activities including but not limited to:

    • Filmed sexual activities
    • Pornographic activities, strip club shows, live sex performances, erotic dances
    • Sexual, erotic, or tantric massages

Content that engages in explicit sexual solicitation by, including but not limited to the following, offering or asking for:

    • Sex or sexual partners
    • Sex chat or conversations
    • Nude images

Content that engages in implicit sexual solicitation, which can be identified by offering or asking to engage in a sexual act and/or acts identified by other suggestive elements such as any of the following:

    • Vague suggestive statements, such as "looking for a good time tonight"
    • Sexualized slang
    • Using sexual hints such as mentioning sexual roles, sex positions, fetish scenarios, sexual preference/sexual partner preference, state of arousal, act of sexual intercourse or activity (sexual penetration or self-pleasuring), commonly sexualized areas of the body such as the breasts, groin, or buttocks, state of hygiene of genitalia or buttocks
    • Content (hand drawn, digital, or real-world art) that may depict explicit sexual activity or suggestively posed person(s).

Content that offers or asks for other adult activities such as:

    • Commercial pornography
    • Partners who share fetish or sexual interests

Sexually explicit language that adds details and goes beyond mere naming or mentioning of:

    • A state of sexual arousal (wetness or erection)
    • An act of sexual intercourse (sexual penetration, self-pleasuring or exercising fetish scenarios)

Comment: Facebook's Sexual Solicitation Policy is a Honeypot for Trolls

8th December 2018. See  article from eff.org by Elliot Harmon

Facebook just quietly adopted a policy that could push thousands of innocent people off of the platform. The new "sexual solicitation" rules forbid pornography and other explicit sexual content (which was already functionally banned under a different statute), but they don't stop there: they also ban "implicit sexual solicitation", including the use of sexual slang, the solicitation of nude images, discussion of "sexual partner preference," and even expressing interest in sex. That's not an exaggeration: the new policy bars "vague suggestive statements, such as 'looking for a good time tonight.'" It wouldn't be a stretch to think that asking "Netflix and chill?" could run afoul of this policy.

The new rules come with a baffling justification, seemingly blurring the line between sexual exploitation and plain old doing it:

[P]eople use Facebook to discuss and draw attention to sexual violence and exploitation. We recognize the importance of and want to allow for this discussion. We draw the line, however, when content facilitates, encourages or coordinates sexual encounters between adults.

In other words, discussion of sexual exploitation is allowed, but discussion of consensual, adult sex is taboo. That's a classic censorship model: speech about sexuality being permitted only when sex is presented as dangerous and shameful. It's especially concerning since healthy, non-obscene discussion about sex--even about enjoying or wanting to have sex--has been a component of online communities for as long as the Internet has existed, and has for almost as long been the target of governmental censorship efforts.

Until now, Facebook has been a particularly important place for groups who aren't well represented in mass media to discuss their sexual identities and practices. At very least, users should get the final say about whether they want to see such speech in their timelines.

Overly Restrictive Rules Attract Trolls

Is Facebook now a sex-free zone? Should we be afraid of meeting potential partners on the platform or even disclosing our sexual orientations?

Maybe not. For many users, life on Facebook might continue as it always has. But therein lies the problem: the new rules put a substantial portion of Facebook users in danger of violation. Fundamentally, that's not how platform moderation policies should work--with such broadly sweeping rules, online trolls can take advantage of reporting mechanisms to punish groups they don't like.

Combined with opaque and one-sided flagging and reporting systems, overly restrictive rules can incentivize abuse from bullies and other bad actors. It's not just individual trolls either: state actors have systematically abused Facebook's flagging process to censor political enemies. With these new rules, organizing that type of attack just became a lot easier. A few reports can drag a user into Facebook's labyrinthine enforcement regime, which can result in having a group page deactivated or even being banned from Facebook entirely. This process gives the user no meaningful opportunity to appeal a bad decision.

Given the rules' focus on sexual interests and activities, it's easy to imagine who would be the easiest targets: sex workers (including those who work lawfully), members of the LGBTQ community, and others who congregate online to discuss issues relating to sex. What makes the policy so dangerous to those communities is that it forbids the very things they gather online to discuss.

Even before the recent changes at Facebook and Tumblr, we'd seen trolls exploit similar policies to target the LGBTQ community and censor sexual health resources. Entire harassment campaigns have organized to use payment processors' reporting systems to cut off sex workers' income. When online platforms adopt moderation policies and reporting processes, it's essential that they consider how those policies and systems might be weaponized against marginalized groups.

A recent Verge article quotes a Facebook representative as saying that people sharing sensitive information in private Facebook groups will be safe, since Facebook relies on reports from users. If there are no tattle-tales in your group, the reasoning goes, then you can speak freely without fear of punishment. But that assurance rings rather hollow: in today's world of online bullying and brigading, there's no question of if your private group will be infiltrated by the trolls; it's when.

Did SESTA/FOSTA Inspire Facebook's Policy Change?

The rule change comes a few months after Congress passed the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act (SESTA/FOSTA), and it's hard not to wonder if the policy is the direct result of the new Internet censorship laws.

SESTA/FOSTA opened online platforms to new criminal and civil liability at the state and federal levels for their users' activities. While ostensibly targeted at online sex trafficking, SESTA/FOSTA also made it a crime for a platform to "promote or facilitate the prostitution of another person." The law effectively blurred the distinction between adult, consensual sex work and sex trafficking. The bill's supporters argued that forcing platforms to clamp down on all sex work was the only way to curb trafficking--nevermind the growing chorus of trafficking experts arguing the very opposite.

As SESTA/FOSTA was debated in Congress, we repeatedly pointed out that online platforms would have little choice but to over-censor: the fear of liability would force them not just to stop at sex trafficking or even sex work, but to take much more restrictive approaches to sex and sexuality in general, even in the absence of any commercial transaction. In EFF's ongoing legal challenge to SESTA/FOSTA, we argue that the law unconstitutionally silences lawful speech online.

While we don't know if the Facebook policy change came as a response to SESTA/FOSTA, it is a perfect example of what we feared would happen: platforms would decide that the only way to avoid liability is to ban a vast range of discussions of sex.

Wrongheaded as it is, the new rule should come as no surprise. After all, Facebook endorsed SESTA/FOSTA. Regardless of whether one caused the other or not, both reflect the same vision of how the Internet should work--a place where certain topics simply cannot be discussed. Like SESTA/FOSTA, Facebook's rule change might have been made to fight online sexual exploitation. But like SESTA/FOSTA, it will do nothing but push innocent people offline.

 

 

Google's conscience...

Google employees write open letter opposing Google supporting the Chinese internet censorship regime


Link Here 28th November 2018
Full story: Facebook Censorship...Facebook quick to censor

We are Google employees. Google must drop Dragonfly.

We are Google employees and we join Amnesty International in calling on Google to cancel project Dragonfly, Google's effort to create a censored search engine for the Chinese market that enables state surveillance.

We are among thousands of employees who have raised our voices for months. International human rights organizations and investigative reporters have also sounded the alarm, emphasizing serious human rights concerns and repeatedly calling on Google to cancel the project. So far, our leadership's response has been unsatisfactory.

Our opposition to Dragonfly is not about China: we object to technologies that aid the powerful in oppressing the vulnerable, wherever they may be. The Chinese government certainly isn't alone in its readiness to stifle freedom of expression, and to use surveillance to repress dissent. Dragonfly in China would establish a dangerous precedent at a volatile political moment, one that would make it harder for Google to deny other countries similar concessions.

Our company's decision comes as the Chinese government is openly expanding its surveillance powers and tools of population control. Many of these rely on advanced technologies, and combine online activity, personal records, and mass monitoring to track and profile citizens. Reports are already showing who bears the cost, including Uyghurs, women's rights advocates, and students. Providing the Chinese government with ready access to user data, as required by Chinese law, would make Google complicit in oppression and human rights abuses.

Dragonfly would also enable censorship and government-directed disinformation, and destabilize the ground truth on which popular deliberation and dissent rely. Given the Chinese government's reported suppression of dissident voices, such controls would likely be used to silence marginalized people, and favor information that promotes government interests.

Many of us accepted employment at Google with the company's values in mind, including its previous position on Chinese censorship and surveillance, and an understanding that Google was a company willing to place its values above its profits. After a year of disappointments including Project Maven, Dragonfly, and Google's support for abusers, we no longer believe this is the case. This is why we're taking a stand.

We join with Amnesty International in demanding that Google cancel Dragonfly. We also demand that leadership commit to transparency, clear communication, and real accountability. Google is too powerful not to be held accountable. We deserve to know what we're building and we deserve a say in these significant decisions.

Signed by 478 Google employees

 

 

Extracts: Friends and Censors...

A Facebook Blueprint for Content Governance and Enforcement. By Mark Zuckerberg


Link Here 16th November 2018
Full story: Facebook Censorship...Facebook quick to censor

Mark Zuckerberg has been publishing a series of articles addressing the most important issues facing Facebook. This is the second in the series. Here are a few selected extracts:

Community Standards

The team responsible for setting these policies is global -- based in more than 10 offices across six countries to reflect the different cultural norms of our community. Many of them have devoted their careers to issues like child safety, hate speech, and terrorism, including as human rights lawyers or criminal prosecutors.

Our policy process involves regularly getting input from outside experts and organizations to ensure we understand the different perspectives that exist on free expression and safety, as well as the impacts of our policies on different communities globally. Every few weeks, the team runs a meeting to discuss potential changes to our policies based on new research or data. For each change the team gets outside input -- and we've also invited academics and journalists to join this meeting to understand this process. Starting today, we will also publish minutes of these meetings to increase transparency and accountability.

The team responsible for enforcing these policies is made up of around 30,000 people, including content reviewers who speak almost every language widely used in the world. We have offices in many time zones to ensure we can respond to reports quickly. We invest heavily in training and support for every person and team. In total, they review more than two million pieces of content every day. We issue a transparency report with a more detailed breakdown of the content we take down.

For most of our history, the content review process has been very reactive and manual -- with people reporting content they have found problematic, and then our team reviewing that content. This approach has enabled us to remove a lot of harmful content, but it has major limits in that we can't remove harmful content before people see it, or that people do not report.

Accuracy is also an important issue. Our reviewers work hard to enforce our policies, but many of the judgements require nuance and exceptions. For example, our Community Standards prohibit most nudity, but we make an exception for imagery that is historically significant. We don't allow the sale of regulated goods like firearms, but it can be hard to distinguish those from images of paintball or toy guns. As you get into hate speech and bullying, linguistic nuances get even harder -- like understanding when someone is condemning a racial slur as opposed to using it to attack others. On top of these issues, while computers are consistent at highly repetitive tasks, people are not always as consistent in their judgements.

The vast majority of mistakes we make are due to errors enforcing the nuances of our policies rather than disagreements about what those policies should actually be. Today, depending on the type of content, our review teams make the wrong call in more than 1 out of every 10 cases.

Proactively Identifying Harmful Content

The single most important improvement in enforcing our policies is using artificial intelligence to proactively report potentially problematic content to our team of reviewers, and in some cases to take action on the content automatically as well.

This approach helps us identify and remove a much larger percent of the harmful content -- and we can often remove it faster, before anyone even sees it rather than waiting until it has been reported.

Moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence -- and because of the multi-billion dollar annual investments we can now fund. To be clear, the state of the art in AI is still not sufficient to handle these challenges on its own. So we use computers for what they're good at -- making basic judgements on large amounts of content quickly -- and we rely on people for making more complex and nuanced judgements that require deeper expertise.

In training our AI systems, we've generally prioritized proactively detecting content related to the most real world harm. For example, we prioritized removing terrorist content -- and now 99% of the terrorist content we remove is flagged by our systems before anyone on our services reports it to us. We currently have a team of more than 200 people working on counter-terrorism specifically.

Some categories of harmful content are easier for AI to identify, and in others it takes more time to train our systems. For example, visual problems, like identifying nudity, are often easier than nuanced linguistic challenges, like hate speech. Our systems already proactively identify 96% of the nudity we take down, up from just close to zero a few years ago. We are also making progress on hate speech, now with 52% identified proactively. This work will require further advances in technology as well as hiring more language experts to get to the levels we need.

In the past year, we have prioritized identifying people and content related to spreading hate in countries with crises like Myanmar. We were too slow to get started here, but in the third quarter of 2018, we proactively identified about 63% of the hate speech we removed in Myanmar, up from just 13% in the last quarter of 2017. This is the result of investments we've made in both technology and people. By the end of this year, we will have at least 100 Burmese language experts reviewing content.
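As an editorial aside, the triage Zuckerberg describes (automatic action on high-confidence detections, human review for the rest) reduces to a simple thresholding scheme. The sketch below is purely illustrative; the classifier, thresholds and queue names are assumptions, not Facebook's actual system.

```python
# Illustrative sketch only: thresholding triage of the kind described above,
# where high-confidence AI detections are actioned automatically and borderline
# detections are queued for human reviewers. All names and numbers are
# hypothetical assumptions, not Facebook's implementation.
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.98   # assumed: act automatically only when very confident
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: below this, leave the content alone

@dataclass
class Post:
    post_id: str
    text: str

def triage(post: Post, violation_probability: float) -> str:
    """Route a post based on a hypothetical classifier's violation probability."""
    if violation_probability >= AUTO_ACTION_THRESHOLD:
        return "remove_automatically"
    if violation_probability >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "no_action"

if __name__ == "__main__":
    print(triage(Post("1", "example"), 0.99))  # remove_automatically
    print(triage(Post("2", "example"), 0.75))  # queue_for_human_review
    print(triage(Post("3", "example"), 0.10))  # no_action
```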

Discouraging Borderline Content

One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.

Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average -- even when they tell us afterwards they don't like the content.

This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.

The category we're most focused on is click-bait and misinformation. People consistently tell us these types of content make our services worse -- even though they engage with them. As I mentioned above, the most effective way to stop the spread of misinformation is to remove the fake accounts that generate it. The next most effective strategy is reducing its distribution and virality.

Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don't come within our definition of hate speech but are still offensive.

This pattern may apply to the groups people join and pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.

One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it's important to remember that won't address the underlying incentive problem, which is often the bigger issue. This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content.
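To make the demotion idea concrete, here is a minimal editorial sketch (not Facebook's code) of a ranking penalty that grows as a hypothetical 'borderline' classifier score approaches the policy line, so that near-the-line content gets the least distribution:

```python
# Minimal sketch of the "borderline content" demotion curve described above.
# The borderline_score is assumed to come from a hypothetical classifier where
# 0.0 means clearly within policy and 1.0 means right at the policy line
# (content over the line is removed rather than ranked). Purely illustrative.

def demoted_score(base_engagement: float, borderline_score: float) -> float:
    """Scale a post's ranking score down as it approaches the policy line."""
    borderline_score = min(max(borderline_score, 0.0), 1.0)  # clamp to [0, 1]
    # Quadratic penalty: negligible for clearly-fine content, steep near the
    # line, which inverts the natural "more engagement near the line" incentive.
    penalty = 1.0 - borderline_score ** 2
    return base_engagement * penalty

if __name__ == "__main__":
    for s in (0.0, 0.5, 0.9, 1.0):
        print(s, round(demoted_score(100.0, s), 1))
```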

Building an Appeals Process

Any system that operates at scale will make errors, so how we handle those errors is important. This matters both for ensuring we're not mistakenly stifling people's voices or failing to keep people safe, and also for building a sense of legitimacy in the way we handle enforcement and community governance.

We began rolling out our content appeals process this year. We started by allowing you to appeal decisions that resulted in your content being taken down. Next we're working to expand this so you can appeal any decision on a report you filed as well. We're also working to provide more transparency into how policies were either violated or not.

...Read the full article from facebook.com

 

 

Er...it's easy, just claim it transgresses 'community guidelines'...

Facebook will train up French censors in the art of taking down content deemed harmful


Link Here 15th November 2018
Full story: Facebook Censorship...Facebook quick to censor

The French President, Emmanuel Macron has announced a plan to effectively embed French state censors with Facebook to learn more about how to better censor the platform. He announced a six-month partnership with Facebook aimed at figuring out how the European country should police hate speech on the social network.

As part of the cooperation both sides plan to meet regularly between now and May, when the European election is due to be held. They will focus on how the French government and Facebook can work together to censor content deemed 'harmful'. Facebook explained:

It's a pilot program of a more structured engagement with the French government so that both sides can better understand the other's challenges in dealing with the issue of hate speech online. The program will allow a team of regulators, chosen by the Elysee, to familiarize [itself] with the tools and processes set up by Facebook to fight against hate speech. The working group will not be based in one location but will travel to different Facebook facilities around the world, with likely visits to Dublin and California. The purpose of this program is to enable regulators to better understand Facebook's tools and policies to combat hate speech and, for Facebook, to better understand the needs of regulators.

 

 

Extract: Daily Facebook Censorship...

Tech Titans Made Serious Mistakes, and More Censorship Won't Right the Ship. By David French


Link Here 25th August 2018
Full story: Facebook Censorship...Facebook quick to censor

Yesterday, journalist and bestselling author Salena Zito reported that Facebook seemed to be censoring a story she wrote for the New York Post detailing why many Trump supporters won't be shaken by the Paul Manafort conviction or the Michael Cohen plea deal.

Some of her readers reported that it was being marked as spam. Others told her that Facebook was reporting that the article did not follow its Community Standards.

Then, suddenly, the posts reappeared. In both instances there has been no satisfactory explanation from Facebook for its censorship.

Read the full  article from nationalreview.com

 

 

Facebook: connecting naively trusting users with totally untrustworthy organisations...

And now Facebook implements daily deeds of censorship as if these are acts of contrition for its failures of trust


Link Here22nd August 2018
Full story: Facebook Censorship...Facebook quick to censor
And today's daily act of censorship is to take down 652 accounts and pages connected to Russia and Iran that published political propaganda.

Facebook said in a blog post  that the errant accounts were first uncovered by the cybersecurity firm FireEye, and have links to Russia and Iran. CEO Mark Zuckerberg said:

These were networks of accounts that were misleading people about who they were and what they were doing. We ban this kind of behavior because authenticity matters. People need to be able to trust the connections they make on Facebook.

In July, FireEye tipped Facebook off to the existence of a network of pages known as Liberty Front Press. The network included 70 accounts, three Facebook groups, and 76 Instagram accounts, which had 155,000 Facebook followers and 48,000 Instagram followers. Not exactly impressive figures though. And the paltry $6,000 spent since 2015 rather suggests that these are small fry.

Liberty Front Press was also linked to a set of pages that posed as news organizations while hacking people's accounts and spreading malware, Facebook said. That network included 12 pages and 66 accounts, plus nine Instagram accounts. They had about 15,000 Facebook followers and 1,100 Instagram followers, and did not buy advertising or events.

Iran-linked accounts and pages created in 2011 shared posts about politics in the Middle East, United Kingdom, and United States. That campaign had 168 pages and 140 Facebook accounts, as well as 31 Instagram accounts, and had 813,000 Facebook followers and 10,000 Instagram followers. Again the total advertising spend was just $6,000.

Russian accounts taken down in the Facebook action were focused on politics in Syria and Ukraine, but did not target the United States.

Facebook's reputation ratings

See  article from bbc.co.uk

Facebook has confirmed that it has started scoring some of its members on a trustworthiness scale. The Washington Post revealed that the social network had developed the system over the past year.

The tech firm says it has been developed to help handle reports of false news on its platform, but it has declined to reveal how the score is calculated or the limits of its use. Critics are concerned that users have no apparent way to obtain their rating. The BBC understands that at present only Facebook's misinformation team makes use of the measurement.

Perhaps the scheme works on a 1 to 5 scale, with the bottom rating of 1 being as trustworthy as Facebook, a lowly score of 2 for being twice as trustworthy as Facebook, whilst the top of the scale is 5 times as trustworthy as Facebook.

Facebook objected to the scale being described in the Washington Post as a 'reputation' score. Facebook said that this was just plain wrong, claiming:

What we're actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system. The reason we do this is to make sure that our fight against misinformation is as effective as possible.

No doubt armies of Indian SEO workers will now redirect their efforts towards improving websites' Facebook reputation ratings.

Seeking refuge in blaming Facebook

See  article from nytimes.com

Meanwhile Warwick University research suggests that anti-refugee troubles are worse in German towns where Facebook usage is above the national average. Facebook are taking a lot of stick lately, but it seems a little much to start blaming them for all the world's ills. If Facebook were to be banned tomorrow, would the world suddenly become a less fractious place? What do you think?

 

 

Offsite Article: PragerU...


Link Here21st August 2018
Full story: Facebook Censorship...Facebook quick to censor
The latest example of political censorship by Facebook

See article from thenewamerican.com

 

 

Offsite Article: Facebook under pressure and a new keenness for political censorship...


Link Here18th August 2018
Full story: Facebook Censorship...Facebook quick to censor
After getting ticked off everywhere around the western world, Facebook is now kowtowing to any government who asks, this time banning a Palestinian Occupy London page

See article from rt.com

 

 

Facebook nudity inspectors...

Patrolling Rubens House in Antwerp to protect social media users from nudity


Link Here26th July 2018
Full story: Facebook Censorship...Facebook quick to censor
The Flemish Tourism Board has responded to Facebook's relentless censorship of nudity in classical paintings by Peter Paul Rubens.

In the satirical video, a team of Social Media Inspectors block gallery goers from seeing paintings at the Rubens House in Antwerp. Facebook-branded security--called fbi--redirect unwitting crowds away from paintings that depict nude figures. We need to direct you away from nudity, even if artistic in nature, says one Social Media Inspector.

The Flemish video, as well as a cheeky open letter from the tourism board and a group of Belgian museums, asks Facebook to roll back its censorship standards so that they can promote Rubens. Breasts, buttocks and Peter Paul Rubens cherubs are all considered indecent. Not by us, but by you, the letter, addressed to Facebook CEO Mark Zuckerberg, says. Even though we secretly have to laugh about it, your cultural censorship is making life rather difficult for us.

The Guardian reported that Facebook is planning to have talks with the Flemish tourist board.

 

 

Offsite Article: Want Facebook to Censor Speech?...


Link Here 24th July 2018
Full story: Facebook Censorship...Facebook quick to censor
Be Careful What You Wish For. By Antonio García Martínez

See article from wired.com

 

 

For hating Britain?...

Facebook censors the US Declaration of Independence as hate speech


Link Here9th July 2018
Full story: Facebook Censorship...Facebook quick to censor

One moment Facebook's algorithms are expected to be able to automatically distinguish terrorism support from news reporting or satire; the next moment, they demonstrate exactly how crap they are by failing to distinguish hate speech from a profound, nation-establishing statement of citizens' rights.

Facebook's algorithms removed parts of the US Declaration of Independence from the social media site after determining they represented hate speech.

The issue came to light when a local paper in Texas began posting excerpts of the historic text on its Facebook page each day in the run up to the country's Independence Day celebrations on July 4.

However, when The Liberty County Vindicator attempted to post its tenth extract, which refers to merciless Indian savages, on its Facebook page, the paper received a notice saying the post went against its standards on hate speech.

Facebook later 'apologised' as it has done countless times before and allowed the posting.

 

 

Social media against conservatives...

Facebook details its censorship enforcement, no doubt conservatives bear the brunt of it


Link Here 16th May 2018
Full story: Facebook Censorship...Facebook quick to censor

We're often asked how we decide what's allowed on Facebook -- and how much bad stuff is out there. For years, we've had Community Standards that explain what stays up and what comes down. Three weeks ago, for the first time, we published the internal guidelines we use to enforce those standards. And today we're releasing numbers in a Community Standards Enforcement Report so that you can judge our performance for yourself.

Alex Schultz, our Vice President of Data Analytics, explains in more detail how exactly we measure what's happening on Facebook in both this Hard Questions post and our guide to Understanding the Community Standards Enforcement Report . But it's important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important and what works.

This report covers our enforcement efforts between October 2017 to March 2018, and it covers six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts. The numbers show you:

  • How much content people saw that violates our standards;

  • How much content we removed; and

  • How much content we detected proactively using our technology -- before people who use Facebook reported it.

Most of the action we take to remove bad content is around spam and the fake accounts they use to distribute it. For example:

  • We took down 837 million pieces of spam in Q1 2018 -- nearly 100% of which we found and flagged before anyone reported it; and

  • The key to fighting spam is taking down the fake accounts that spread it. In Q1, we disabled about 583 million fake accounts -- most of which were disabled within minutes of registration. This is in addition to the millions of fake account attempts we prevent daily from ever registering with Facebook. Overall, we estimate that around 3 to 4% of the active Facebook accounts on the site during this time period were still fake.

In terms of other types of violating content:

  • We took down 21 million pieces of adult nudity and sexual activity in Q1 2018 -- 96% of which was found and flagged by our technology before it was reported. Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, 7 to 9 views were of content that violated our adult nudity and pornography standards.

  • For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 -- 86% of which was identified by our technology before it was reported to Facebook.

  • For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 -- 38% of which was flagged by our technology.

As Mark Zuckerberg said at F8 , we have a lot of work still to do to prevent abuse. It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue. And more generally, as I explained two weeks ago, technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported. In addition, in many areas -- whether it's spam, porn or fake accounts -- we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts. It's why we're investing heavily in more people and better technology to make Facebook safer for everyone.

It's also why we are publishing this information. We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too. This is the same data we use to measure our progress internally -- and you can now see it to judge our progress for yourselves. We look forward to your feedback.

 

 

Offsite Article: Old injustices...


Link Here19th April 2018
Full story: Facebook Censorship...Facebook quick to censor
Facebook found to be issuing posting punishments for years old posts that were OK at the time but are now contrary to recently updated censorship rules

See article from avn.com

 

 

Offsite Article: AI censors are just round the next corner...


Link Here 15th April 2018
Full story: Facebook Censorship...Facebook quick to censor
A good report from Mark Zuckerberg's grilling at the US Congress

See article from lawfareblog.com

 

 

Extract: Your thoughts may be recorded for training purposes...

Mark Zuckerberg admits that private conversations via Facebook are monitored and may be censored


Link Here 5th April 2018
Full story: Facebook Censorship...Facebook quick to censor

One of the more understated but intriguing statements in Zuckerberg's Vox interview this past Monday was his public acknowledgement at long last that the company uses computer algorithms to scan all of our private communications on its platform, including Facebook Messenger. While users could always manually report threatening or illegal behavior and communications for human review, Zuckerberg acknowledged for the first time that even in private chat sessions, Facebook is not actually a neutral communications platform like the phone company that just provides you a connection and goes away -- Facebook's algorithms are there constantly monitoring your most private intimate conversations in an Orwellian telescreen that never turns off.

...

The company emphasized in an interview last year that it does not mine private conversations for advertising, but left open the possibility that it might scan them for other purposes.

In his interview this week, Zuckerberg offered that in cases where people are sending harmful and threatening private messages our systems detect that that's going on. We stop those messages from going through. His reference to our systems detect suggested this was more than just humans manually flagging threatening content. A spokesperson confirmed that in this case the first human recipients of the messages had manually flagged them as violations and, as a large number of users began flagging the same set of messages, Facebook's systems deleted future transmission of them. The company had previously noted that it uses similarity detection for its fake news and other filters, both matching exact duplicates and highly similar content. The company confirmed that its fingerprinting algorithms (which the company has previously noted include revenge porn, material from the shared terrorism database and PhotoDNA) are applied to private messages as well.

...Read the full article from forbes.com
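
The similarity detection mentioned in the extract -- matching not just exact duplicates but highly similar content -- is conceptually close to ordinary near-duplicate text detection. As a rough illustrative sketch only (the example messages and the 0.6 threshold are invented here, not taken from Facebook), one textbook approach shingles a message into overlapping word triples and compares the sets with Jaccard similarity:

    def shingles(text: str, n: int = 3) -> set:
        """Split text into overlapping n-word shingles."""
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity of two shingle sets (0.0 = disjoint, 1.0 = identical)."""
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    # Hypothetical: a message previously flagged by users, and a near-copy of it.
    flagged = "click this link to claim your free prize before midnight tonight"
    incoming = "click this link to claim your free prize before midnight today"

    similarity = jaccard(shingles(flagged), shingles(incoming))
    print(f"similarity = {similarity:.2f}")  # 0.80 for this pair
    if similarity > 0.6:  # a real system would tune this threshold carefully
        print("treat as a near-duplicate of previously flagged content")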

 

 

Facebook repent for you have sinned...

You have rejected Jesus Christ whilst accepting tainted coin from those that would corrupt the lambs that have entrusted their souls to you as their shepherd


Link Here 5th April 2018
Full story: Facebook Censorship...Facebook quick to censor
An almost theological question: what will AI make of religion? What will it make of people who proclaim peace whilst inciting violence; who preach tolerance whilst practising intolerance; and whose hypocrisy about sexuality is simply perverse?

Anyway, Facebook have excelled themselves by banning an image of Jesus Christ on the cross in a context of religious education.

A post on the Franciscan University blog explains:

We posted yesterday a series of ads to Facebook to promote our online MA Theology and MA Catechetics and Evangelization programs.

One ad was rejected, and an administrator of our Facebook page noticed this rejection today. The reason given for the rejection?

Your image, video thumbnail or video can't contain shocking, sensational, or excessively violent content.

Our ad was rejected because it contained:

  • shocking content

  • sensational content

  • excessively violent content

What was the offending image?

And indeed, the Crucifixion of Christ was all of those things. It was the most sensational action in history: man executed his God.

It was shocking, yes: God deigned to take on flesh and was obedient unto death, even death on a cross. (Philippians 2:8)

And it was certainly excessively violent: a man scourged to within an inch of his life, nailed naked to a cross and left to die, all the hate of all the sin in the world poured out its wrath upon his humanity.

Although the university owned up to the 'violent' image, Facebook then decided that of course the image wasn't violent after all, and yet again issued a grovelling apology for its shoddy censorship process. So do you think an AI censorship process will be any better?

 

 

How much more symbolic can you get?...

Facebook censors France's iconic artwork, Liberty Leading the People


Link Here 19th March 2018
Full story: Facebook Censorship...Facebook quick to censor
Facebook has admitted a ghastly mistake after it banned an advert featuring French artist Eugène Delacroix's famous work, La Liberté guidant le peuple, because it depicts a bare-breasted woman.

The 19th-century masterpiece was featured in an online campaign for a play showing in Paris when it was blocked on the social networking site this week, the play's director Jocelyn Fiorina said:

A quarter of an hour after the advert was launched, it was blocked, with the company telling us we cannot show nudity.

He then posted a new advert with the same painting with the woman's breasts covered with a banner saying censored by Facebook, which was not banned.

As always when Facebook's shoddy censorship system is found lacking, the company apologised profusely for its error.

 

 

They can't tell art from arse...

Another in the long list of shoddy Facebook censorship decisions


Link Here 1st March 2018
Full story: Facebook Censorship...Facebook quick to censor
Cases of art censorship on Facebook continue to surface. The latest work deemed pornographic is the 30,000-year-old nude statue famously known as the Venus of Willendorf, part of the Naturhistorisches Museum (NHM) collection in Vienna. An image of the work posted on Facebook by Laura Ghianda, a self-described artivist, was removed as inappropriate content despite four attempts to appeal the decision.

The NHM reacted to Ghianda's Facebook post in January, requesting that Facebook allow the Venus to remain naked. There has never been a complaint by visitors concerning the nakedness of the figurine, says Christian Koeberl, the director general of NHM. There is no reason to cover the Venus of Willendorf and hide her nudity, neither in the museum nor on social media.

 

 

Offsite Article: Facebook moderator: I had to be prepared to see anything...


Link Here 8th February 2018
Full story: Facebook Censorship...Facebook quick to censor
It's mostly pornography, says Sarah Katz, recalling her eight-month stint working as a Facebook moderator

See article from bbc.com

 

 

Facebook IS the internet for many countries...

Why Facebook's News Feed Changes Pose a Threat to Free Expression


Link Here 27th January 2018
Full story: Facebook Censorship...Facebook quick to censor
Just a bit of background from Thailand explaining how the internet is priced for mobile phones; it rather explains how Facebook and YouTube are even more dominant there than in the west:

We give our littl'un a quid a week to top up her pay as you go mobile phone. She can, and does, spend unlimited time on YouTube, Facebook, Messenger, Skype, Line and a couple of other social media sites. It's as cheap as chips, but the rub is that she has just a tiny bandwidth allowance to look at any sites apart from the core social media set.

On the other hand, wider internet access with enough bandwidth to watch a few videos costs about 15 quid a month (a recently reduced price; it was 30 quid a month until a few months ago).

Presumably the cheap service is actually paid for by Google and Facebook etc with the knowledge that people are nearly totally trapped in their walled garden. It's quite useful for kids because they haven't got the bandwidth to go looking round where they shouldn't. But the price makes it very attractive to many adults too.

Anyway, Summer Lopez from PEN America considers how this internet monopoly stitch-up makes the announced Facebook feed changes an even more sensitive issue than in the west.

Read the full article from pen.org by Summer Lopez

 

 

Offsite Article: Facebook cuts back on news to give you more time with advertisers...


Link Here17th January 2018
Full story: Facebook Censorship...Facebook quick to censor
On censorship, 'fake news' and filter bubbles

See article from theconversation.com

 

 

Offsite Article: Facebook friends with censorship...


Link Here 16th January 2018
Full story: Facebook Censorship...Facebook quick to censor
World Socialist Web Site states that Facebook has blocked shares of a video promoting an event calling for opposition to internet censorship

See article from wsws.org

 

 

Offsite Article: Cheapo Facebook censorship...


Link Here30th December 2017
Full story: Facebook Censorship...Facebook quick to censor
ProPublica asked Facebook about its handling of 49 posts that might be deemed offensive. The company acknowledged that its content reviewers had made the wrong call on 22 of them.

See article from propublica.org

 

 

It's a hard job being an algorithm...

Facebook gets in a censorship mess over whether news pictures of Poland's marches are incendiary or newsworthy


Link Here 19th November 2017
Full story: Facebook Censorship...Facebook quick to censor
On 11th November, thousands of people marched in the streets of Warsaw, Poland, to celebrate the country's Independence Day. The march attracted massive numbers of people from the nationalist or far right end of the political spectrum.

The march proved very photogenic: images showing the scale of the event and its stylised symbology proved powerful and thought-provoking.

But the images caused problems for the likes of Facebook, on what should be censored and what should not.

One could argue that the world needs to see what is going on amongst large segments of the population in Poland, and indeed across Europe. Perhaps if they see the popularity of the far right, communities and politicians can be spurred into addressing some of the fundamental societal breakdowns leading to this mass movement.

On the other hand, there will be those that consider the images to be something that could attract and inspire others to join the cause.

But from just looking at news pictures, it would be hard to know what to think. And that dilemma is exactly what caused confusion amongst censors at Facebook.

Quartz (qz.com) reports on a collection of such images, published on Facebook by a renowned photojournalist in Poland, that was taken down by the social media company's content censors. Chris Niedenthal attended the march to practise his craft, not to participate, and posted his photos on Nov. 12, the day after the march. Facebook took them down. He posted them again the next day. Facebook took them down again on Nov. 14. Niedenthal himself was also blocked from Facebook for 24 hours. The author concludes that a legitimate professional journalist or photojournalist should not be 'punished' for doing his duty.

Facebook told Quartz that the photos, because they contained hate speech symbols, were taken down for violating the platform's community standards policy barring content that shows support for hate groups. The captions on the photos were neutral, so Facebook's moderators could not tell if the person posting them supported, opposed, or was indifferent about hate groups, a spokesperson said. Content shared that condemns or merely documents events can remain up. But that which is interpreted to show support for hate groups is banned and will be removed.

Eventually Facebook allowed the photos to remain on the platform. Facebook apologized for the error, in a message, and in a personal phone call.

 

 

Offsite Article: Facebook consider robin red 'breast' Christmas card to be adult content...


Link Here12th November 2017
Full story: Facebook Censorship...Facebook quick to censor
If Facebook are going to mess around with people's livelihoods with shoddy algorithms, then they should at least set up an appeal system that they actually respond to

See article from dailymail.co.uk

 

 

Update: Facebook in the middle...

Censorship is becoming a little political in the struggle between Russia and the US


Link Here27th September 2017
Full story: Facebook Censorship...Facebook quick to censor
Russia will block access to Facebook next year if the website refuses to comply with a law requiring websites to store personal data of Russian citizens on Russian servers so as to facilitate state snooping. Russia's internet censor, Roskomnadzor, told reporters that either Facebook abides by the law or the social network will cease to work on Russian territory.

Roskomnadzor blocked Russian access to LinkedIn last November as a result of the social media company being found guilty of violating the same data storage law. Since then, foreign internet companies have been under pressure to comply or risk losing their service in the country. Twitter has told Roskomnadzor that it aims to localise the personal data of its users by mid-2018. Companies including Google and Alibaba have already complied .

Meanwhile on the other side of the iron curtain, Facebook said it will turn over to the United States Congress Russian-linked ads that may have been intended to sway the 2016 US election. The social network revealed that it identified around 500 fake accounts with ties to Russia that purchased $100,000 worth of ads during the campaign, as well as $50,000 in ad purchases from Russian accounts.

We support Congress in deciding how to best use this information to inform the public, and we expect the government to publish its findings when their investigation is complete, Facebook CEO Mark Zuckerberg said.

 

 

Update: But does 'fake' now mean opinions not approved by the mainstream media...

Facebook ramps up its censorship to pages that share supposed 'fake news'


Link Here30th August 2017
Full story: Facebook Censorship...Facebook quick to censor
Facebook has revealed new plans to censor supposed 'fake news'. It has announced that any pages which are flagged for hosting stories considered politically incorrect will be banned from buying advertising to publicise themselves.

A group of third party fact checkers will be tasked with highlighting these pages.

In a statement, Satwik Shukla and Tessa Lyons, who are both product managers, wrote:

Currently, we do not allow advertisers to run ads that link to stories that have been marked false by third-party fact-checking organizations. Now we are taking an additional step.

If Pages repeatedly share stories marked as false, these repeat offenders will no longer be allowed to advertise on Facebook.

This update will help to reduce the distribution of false news which will keep Pages that spread false news from making money.

 

 

Update: Unfriending Terrorists...

Facebook announces measures to prevent terrorist related content


Link Here23rd June 2017
Full story: Facebook Censorship...Facebook quick to censor
Facebook is launching a UK initiative to train and fund local organisations it hopes will combat extremism and hate speech. The UK Online Civil Courage Initiative's initial partners include Imams Online and the Jo Cox Foundation.

Facebook's chief operating officer, Sheryl Sandberg said:

The recent terror attacks in London and Manchester - like violence anywhere - are absolutely heartbreaking. No-one should have to live in fear of terrorism - and we all have a part to play in stopping violent extremism from spreading. We know we have more to do - but through our platform, our partners and our community we will continue to learn to keep violence and extremism off Facebook.

Last week Facebook outlined its technical measures to remove terrorist-related content from its site. The company told the BBC it was using artificial intelligence to spot images, videos and text related to terrorism as well as clusters of fake accounts.

Facebook explained that it was aiming to detect terrorist content immediately as it is posted and before other Facebook users see it. If someone tries to upload a terrorist photo or video, the systems look to see if this matches previous known extremist content to stop it going up in the first place.

A second area is experimenting with AI to understand text that might be advocating terrorism. This is analysing text previously removed for praising or supporting a group such as IS and trying to work out text-based signals that such content may be terrorist propaganda.

The company says it is also using algorithms to detect clusters of accounts or images relating to support for terrorism. This will involve looking for signals such as whether an account is friends with a high number of accounts that have been disabled for supporting terrorism. The company also says it is working on ways to keep pace with repeat offenders who create accounts just to post terrorist material and look for ways of circumventing existing systems and controls.

Facebook has previously announced it is adding 3,000 employees to review content flagged by users. But it also says that already more than half of the accounts that it removes for supporting terrorism are ones that it finds itself.  Facebook says it has also grown its team of specialists so that it now has 150 people working on counter-terrorism specifically, including academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers.

One of the major challenges in automating the process is the risk of taking down material relating to terrorism but not actually supporting it - such as news articles referring to an IS propaganda video that might feature its text or images. An image relating to terrorism - such as an IS member waving a flag - can be used to glorify an act in one context or be used as part of a counter-extremism campaign in another.
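
For those curious about the nuts and bolts, the matching of uploads against previous known extremist content described above amounts to keeping a database of fingerprints of already-removed media and checking every new upload against it. The sketch below is purely illustrative and assumes nothing about Facebook's real systems, which rely on perceptual hashes (PhotoDNA-style) that survive re-encoding and cropping; a plain cryptographic hash like the one here only catches exact byte-for-byte duplicates.

    import hashlib

    # Hypothetical database of fingerprints of previously removed media.
    KNOWN_BANNED_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example entry
    }

    def fingerprint(data: bytes) -> str:
        """Return a hex digest used as the upload's fingerprint."""
        return hashlib.sha256(data).hexdigest()

    def should_block(upload: bytes) -> bool:
        """Block the upload if it matches a known banned fingerprint."""
        return fingerprint(upload) in KNOWN_BANNED_HASHES

    print(should_block(b"test"))         # True: hashes to the example entry above
    print(should_block(b"other image"))  # False: no match, the upload proceeds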

 

 

Update: Extreme optimism...

Theresa May claims that social media censorship will be a panacea for religious violence


Link Here26th May 2017
Full story: Facebook Censorship...Facebook quick to censor
Theresa May has urged world leaders to do more to censor online extremism, saying the fight against so-called Islamic State is moving from the battlefield to the internet.

Speaking about counter-terrorism at the G7 summit in Sicily, the PM said more pressure should be put on tech companies to remove extreme material and to report such content to the authorities. She led a discussion on how to work together to prevent the plotting of terrorist attacks online and to stop the spread of hateful extremist ideology on social media.

She said that the industry has a social responsibility to do more to take down harmful content. She acknowledged that the industry has been taking action to remove extremist content, but said it has not gone far enough and needs to do more.

She called for an international forum to develop the means of intervening where danger is detected, and for companies to develop tools which automatically identify and remove harmful material based on what it contains, and who posted it.

Update: Norway too

26th May 2017 See article from telegraph.co.uk

Norway is considering introducing uniformed police profiles which would patrol Facebook looking for criminal activity.

Kripos, Norway's National Criminal Investigation Service, is reportedly examining the legal aspects of how police accounts could be given access to areas of Facebook that are not open to the public. It would mean police gaining access to closed groups and interacting with members as they search for evidence of criminal activity.

Police in Norway and elsewhere have previously used fake Facebook profiles to investigate crimes including smuggling alcohol and tobacco.

 

 

Update: The published rules are fake...

Facebook's real censorship rules are leaked


Link Here 21st May 2017
Full story: Facebook Censorship...Facebook quick to censor
Thousands of pages of internal documents from Facebook have been leaked, revealing the censorship rules used to identify user content that is to be censored.

Among the rules detailed in documents obtained by the Guardian are those covering nudity, violence and threats.

A threat to kill the US President would be deleted, but similar remarks against an ordinary person would not be viewed as credible unless further comments showed signs of a plot.

Other rules reveal that videos depicting self-harm are allowed, as long as there exists an opportunity to help the person.  Videos of suicide, however, are never allowed.

Film of child and animal abuse (as long as it is non-sexual) can remain in an effort to raise awareness and possibly help those affected.

Aside from footage of actual violence, Facebook must also decide how to respond to threats of it, what they call credible threats of violence. There is an entire rulebook for what is considered credible and what is not. Statements like someone shoot Trump will be deleted by the website, but comments like let's go beat up fat kids, or I hope someone kills you will not. The leaked documents state that violent threats are most often not credible, until specific statements make it clear that the threat is no longer simply an expression of emotion but a transition to a plot or design.

Facebook's rules regarding nudity now make allowance for newsworthy exceptions, like the famous Vietnam War photo of a naked young girl hit by napalm, and for handmade art. Digitally made art showing sexually explicit content is not allowed.

 

 

Update: Austria unfriends free speech...

Austrian court demands worldwide censorship of Facebook postings insulting a politician


Link Here 11th May 2017
Full story: Facebook Censorship...Facebook quick to censor
An Austrian appeals court has ordered Facebook to remove political criticism of an Austrian politician. The court ruled that posts calling Green Party leader Eva Glawischnig a lousy traitor of the people and a corrupt klutz are somehow hate speech.

The ruling by the Austrian court doesn't just require Facebook to delete the offending posts in Austria, but for all users around the world, including any verbatim repostings. That would be an aggressive precedent to set, since Facebook has historically enforced country-specific speech laws only for local users.

Facebook has removed the posts in Austria, which were posted by a fake account. It has yet to remove the posts globally because it is appealing the case.

American legal experts speaking to The Outline called the ruling troubling, and warned of the potential ramifications Facebook and its users could face as a result. Daphne Keller, director of intermediary liability at the Stanford Center for Internet and Society, told The Outline that the ruling sends a signal to other countries that they too can impose their laws on the rest of the world's internet. She asked:

Should Facebook comply globally with Russia's anti-gay laws, or Thailand's laws against insulting the king, or Saudi Arabia's blasphemy laws? Would Austria want those laws to dictate what speech its citizens can share online?

 

 

Update: Facebook all at sea about censorship...

Facebook again embarrasses itself by censoring an image of classic nude art.


Link Here 4th January 2017
Full story: Facebook Censorship...Facebook quick to censor
Facebook has once again drawn sharp criticism over its censorship policies after the social media giant reportedly blocked a photo of the historic naked statue of the sea god Neptune that stands in the Piazza del Nettuno in Bologna, Italy.

Local writer Elisa Barbari said she chose a photograph of the 16th century 3.2-metre high bronze Renaissance statue of the sea god holding a trident to illustrate her Facebook page titled, Stories, curiosities and views of Bologna.

However, Facebook reportedly objected to the nude image of the iconic statue. In a statement, the social media company told Barbari:

The use of the image was not approved because it violates Facebook's guidelines on advertising. It presents an image with content that is explicitly sexual and which shows to an excessive degree the body, concentrating unnecessarily on body parts.

Inevitably when sufficient bad press is generated by Facebook's ludicrous aversion to trivial nudity, the company admitted that it had again made a ghastly mistake and grovelled:

Our team processes millions of advertising images each week, and in some instances we incorrectly prohibit ads. This image does not violate our ad policies. We apologise for the error and have let the advertiser know we are approving their ad.

 

 

Commented: Fake news...

Facebook outlines how its 'fake news' detection will work.


Link Here31st December 2016
Full story: Facebook Censorship...Facebook quick to censor
Facebook has outlined its approach to 'fake news' in a blog post:

A few weeks ago we previewed some of the things we're working on to address the issue of fake news and hoaxes. We're committed to doing our part and today we'd like to share some updates we're testing and starting to roll out.

We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we're approaching this problem carefully. We've focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.

The work falls into the following four areas. These are just some of the first steps we're taking to improve the experience for people on Facebook. We'll learn from these tests, and iterate and extend them over time.

We're testing several ways to make it easier to report a hoax if you see one on Facebook, which you can do by clicking the upper right hand corner of a post. We've relied heavily on our community for help on this issue, and this can help us detect more fake news.

We believe providing more context can help people decide for themselves what to trust and what to share. We've started a program to work with third-party fact checking organizations that are signatories of Poynter's International Fact Checking Code of Principles. We'll use the reports from our community, along with other signals, to send stories to these organizations. If the fact checking organizations identify a story as fake, it will get flagged as disputed and there will be a link to the corresponding article explaining why. Stories that have been disputed may also appear lower in News Feed.

It will still be possible to share these stories, but you will see a warning that the story has been disputed as you share. Once a story is flagged, it can't be made into an ad and promoted, either.

We're always looking to improve News Feed by listening to what the community is telling us. We've found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. We're going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it.

We've found that a lot of fake news is financially motivated. Spammers make money by masquerading as well-known news organizations, and posting hoaxes that get people to visit to their sites, which are often mostly ads. So we're doing several things to reduce the financial incentives. On the buying side we've eliminated the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications. On the publisher side, we are analyzing publisher sites to detect where policy enforcement actions might be necessary.

It's important to us that the stories you see on Facebook are authentic and meaningful. We're excited about this progress, but we know there's more to be done. We're going to keep working on this problem for as long as it takes to get it right.
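
The signal described above -- people being significantly less likely to share an article after actually reading it -- boils down to comparing two rates. The sketch below is a hypothetical illustration only; the field names and numbers are invented and are not Facebook's.

    from dataclasses import dataclass

    @dataclass
    class ArticleStats:
        url: str
        shares_before_click: int   # shared straight from the headline
        clicks: int                # people who opened the article
        shares_after_click: int    # shared after actually reading it

    def misleading_signal(stats: ArticleStats) -> float:
        """Crude signal: how much less likely readers are to share after reading.

        Values near 1.0 mean reading the article strongly discouraged sharing,
        which the post above treats as a hint that the headline misled people.
        """
        if stats.clicks == 0:
            return 0.0
        headline_share_rate = stats.shares_before_click / stats.clicks
        read_share_rate = stats.shares_after_click / stats.clicks
        if headline_share_rate == 0:
            return 0.0
        return max(0.0, 1.0 - read_share_rate / headline_share_rate)

    # Hypothetical outlier: lots of headline shares, almost none after reading.
    article = ArticleStats("example.com/shock-story", shares_before_click=900,
                           clicks=1000, shares_after_click=30)
    print(f"signal = {misleading_signal(article):.2f}")  # ~0.97 -> rank it lower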

Offsite Article: Fake news detection on the cheap

The Guardian investigates how Facebook's trumpeted 'fake news' detection relies on unpaid volunteers.

17th December 2016. See  article from theguardian.com

Offsite Comment: Don't make Facebook the ministry of truth

The fake-news panic is a threat to internet freedom.

31st December 2016. See  article from spiked-online.com by Naomi Firsht

 

 

Offsite Article: From a friend of a friend...


Link Here22nd December 2016
Full story: Facebook Censorship...Facebook quick to censor
German newspaper reveals some of Facebook's secret censorship rules

See article from international.sueddeutsche.de

 

 

Update: But for the moment...

Facebook promises to relax censorship but Swedish cancer group still has to illustrate its message with stylised square breasts to get round Facebook censorship rules


Link Here22nd October 2016
Full story: Facebook Censorship...Facebook quick to censor
Facebook is notoriously terrible when it comes to censorship of the naked human body, especially when it comes to pieces of the female anatomy. So it's not surprising that a non-profit's breast cancer awareness video was taken down because it featured stylised female nipples.

So the Swedish Cancer Society countered with a replacement ad, which featured square breasts instead of round ones. The organization posted the video earlier this week, but it was removed because, as Facebook said the:

Ad can not market sex products or services nor adults products or services.

The organization wrote up an open letter to Facebook, in which it introduced the shape-based compromise:

We understand that you have to have rules about the content published on your platform. But you must also understand that one of our main tasks is to disseminate important information about cancer -- in this case breast cancer.

After trying to meet your control for several days without success, we have now come up with a solution that will hopefully make you happy: Two pink squares! This can not possibly offend you, or anyone. Now we can continue to spread our important breast school without upsetting you.

Facebook later apologized for its crap censorship rules being found out:

We apologize for the error and have let the advertiser know we are approving their ads.

 

 

Update: Where there's a buck there's a way...

Facebook sees a good future as people's news provider and so has announced that it will let up on its strict censorship policies when posts are considered newsworthy


Link Here 22nd October 2016
Full story: Facebook Censorship...Facebook quick to censor
Facebook's VPs Joel Kaplan and Justin Osofsky wrote in a blog:

In recent weeks, we have gotten continued feedback from our community and partners about our Community Standards and the kinds of images and stories permitted on Facebook. We are grateful for the input, and want to share an update on our approach.

Observing global standards for our community is complex. Whether an image is newsworthy or historically significant is highly subjective. Images of nudity or violence that are acceptable in one part of the world may be offensive -- or even illegal -- in another. Respecting local norms and upholding global practices often come into conflict. And people often disagree about what standards should be in place to ensure a community that is both safe and open to expression.

In the weeks ahead, we're going to begin allowing more items that people find newsworthy, significant, or important to the public interest -- even if they might otherwise violate our standards. We will work with our community and partners to explore exactly how to do this, both through new tools and approaches to enforcement. Our intent is to allow more images and stories without posing safety risks or showing graphic images to minors and others who do not want to see them.

As always, our goal is to channel our community's values, and to make sure our policies reflect our community's interests. We're looking forward to working closely with experts, publishers, journalists, photographers, law enforcement officials and safety advocates about how to do better when it comes to the kinds of items we allow. And we're grateful for the counsel of so many people who are helping us try to get this right.

 

 

Offsite Article: Not much to like about Facebook...


Link Here 14th September 2016
Full story: Facebook Censorship...Facebook quick to censor
Facebook is imposing prissy American censorship on the whole rest of the world. By Jane Fae

See article from telegraph.co.uk

 

 

Commented: Shoddy standards...

Cheapo Facebook censorship makes the news again for banning iconic war photo


Link Here11th September 2016
Full story: Facebook Censorship...Facebook quick to censor
Facebook's first line of censorship is handled by cheap worldwide staff armed with detailed rules prohibiting nearly all forms of nudity. If bad decisions get sufficient publicity then the censorship task is escalated to employees allowed a little more discretion. These senior censors then have to laugh off the previous crap decision by saying it was all some silly mistake and that it couldn't possibly be a reflection of Facebook censorship policy.

And so it was with Facebook's censorship of an iconic Vietnam War photo featuring a naked girl in the aftermath of a napalm attack.

Norway's largest newspaper published a front-page open letter to Mark Zuckerberg on Thursday, slamming Facebook's decision to censor the historic photograph of nine-year-old Kim Phuc running away from a napalm attack and calling on the CEO to live up to his role as the world's most powerful editor .

Facebook initially defended its decision to remove the image, saying:

While we recognize that this photo is iconic, it's difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.

On Friday, following widespread criticisms from news organizations and media experts across the globe, Facebook reversed its decision, saying in a statement to the Guardian:

After hearing from our community, we looked again at how our Community Standards were applied in this case. An image of a naked child would normally be presumed to violate our Community Standards, and in some countries might even qualify as child pornography. In this case, we recognize the history and global importance of this image in documenting a particular moment in time.

Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image on Facebook where we are aware it has been removed.

Offsite Comment: Facebook needs an editor

11th September 2016. See  article from theguardian.com

What Facebook has to do now is think very hard about what it really means to be a publisher, said Emily Bell, director of the Tow Center for Digital Journalism at Columbia University. If they don't, she warned, this is going to happen to them over and over again.

Whether Facebook and media executives like to admit it, the social media site now plays a vital role in how people consume news, carrying an influence that is difficult to overstate. Studies have repeatedly found that Facebook has become the primary news source for many people, and that publishers' revenues have been hit hard as a result.

Facebook wants to have the responsibility of a publisher but also to be seen as a neutral carrier of information that is not in the position of making news judgments, said Jim Newton, a former Los Angeles Times editor who teaches journalism ethics at the University of California, Los Angeles. I don't know how they are going to be able to navigate that in the long term.

Bell said Facebook was a spectacularly well resourced and brilliant organization from a technological perspective -- and that its editorial efforts should start to reflect that rigor and dedication. Some have called for someone responsible for tough newsroom decisions to take over: an editor in duties and title.

...Read the full article from theguardian.com

 

 

Update: Live issues...

Facebook comment on video censorship rules for its live video streaming service


Link Here9th July 2016
Full story: Facebook Censorship...Facebook quick to censor
Facebook commented about issues related to showing violence, or its aftermath, on the live video streaming service, Facebook Live:

Live video allows us to see what's happening in the world as it happens. Just as it gives us a window into the best moments in people's lives, it can also let us bear witness to the worst. Live video can be a powerful tool in a crisis -- to document events or ask for help.

We understand the unique challenges of live video. We know it's important to have a responsible approach. That's why we make it easy for people to report live videos to us as they're happening. We have a team on-call 24 hours a day, seven days a week, dedicated to responding to these reports immediately.

The rules for live video are the same for all the rest of our content. A reviewer can interrupt a live stream if there is a violation of our Community Standards. Anyone can report content to us if they think it goes against our standards, and it only takes one report for something to be reviewed.

One of the most sensitive situations involves people sharing violent or graphic images of events taking place in the real world. In those situations, context and degree are everything. For instance, if a person witnessed a shooting, and used Facebook Live to raise awareness or find the shooter, we would allow it. However, if someone shared the same video to mock the victim or celebrate the shooting, we would remove the video.

Live video on Facebook is a new and growing format. We've learned a lot over the past few months, and will continue to make improvements to this experience wherever we can.

 

 

Updated: Trending Censorship...

US Senate to investigate Facebook over claims from former Facebook workers who said they routinely suppressed right wing news sources from the 'trending news' list


Link Here13th May 2016
Full story: Facebook Censorship...Facebook quick to censor
The US Senate Commerce Committee has sent Facebook CEO Mark Zuckerberg a letter requesting that he answer questions about the recent allegations regarding the social media site's Trending Topics feature.

Gizmodo published a report May 3 alleging that Facebook's news curation team intentionally avoids selecting stories to promote from certain news outlets, including World Star Hip Hop, Breitbart and TheBlaze.

The committee request comes the same day comedian and conservative pundit Steven Crowder announced that he has filed a legal motion seeking answers from the social media giant. The motion, posted on the Louder with Crowder talkshow host's website Tuesday, alleges that Crowder's blog was among Facebook's blacklisted sites and that his accounts were unfairly targeted.

In the letter, Senate Commerce Chairman John Thune asks Zuckerberg to make Trending Topics curators available to answer questions about how the feature works. Questions include:

What steps is Facebook taking to investigate claims of politically motivated manipulation of news stories in the Trending Topics section? and If such claims are substantiated, what steps will Facebook take to hold the responsible individuals accountable?

Thune also asks that the Trending Topics team provide a list of all news stories removed from or injected into the Trending Topics section since January 2014.

Update: Trending bollox, it's just news selected by Facebook

13th May 2016. See  article from theguardian.com

Leaked documents show how Facebook , now the biggest news distributor on the planet, relies on old-fashioned news values on top of its algorithms to determine what the hottest stories will be for the 1 billion people who visit the social network every day.

The documents show that the company relies heavily on the intervention of a small editorial team to determine what makes its trending module headlines -- the list of news topics that shows up on the side of the browser window on Facebook's desktop version. The company backed away from a pure-algorithm approach in 2014 after criticism that it had not included enough coverage of unrest in Ferguson , Missouri, in users' feeds.

The guidelines show human intervention -- and therefore editorial decisions -- at almost every stage of Facebook's trending news operation, which is run by a team that at one time numbered as few as 12 people.

 

 

Offsite Article: Streams of complaints...


Link Here9th April 2016
Full story: Facebook Censorship...Facebook quick to censor
Wired considers how Facebook will censor its upcoming live video streaming app

See article from wired.com

 

 

Offsite Article: Double Trouble...


Link Here4th January 2016
Full story: Facebook Censorship...Facebook quick to censor
Copenhagen's Little Mermaid falls foul of Facebook censors and then Denmark's copyright extortionists

See article from thelocal.dk

 

 

Offsite Article: Even museum curators should have got the message by now...


Link Here 19th December 2015
Full story: Facebook Censorship...Facebook quick to censor
Facebook Content Police Censors Image from a German Museum

See article from vrworld.com

 

 

Update: Seeking Refuge in Censorship...

Germany seeks to keep a lid on Facebook comments about immigration


Link Here 1st September 2015
Full story: Facebook Censorship...Facebook quick to censor

Germany has a wide range of opinions on the subject of immigration, and no doubt accepting one million Syrian refugees will be quite a challenge. The German government has been looking to keep the lid on internet comments on the subject. Unfortunately for the censors, not all of the unwelcome comments have triggered the level of offence/threat/hatred etc set by internet companies that results in comments being removed. So the German government is currently trying to convince Facebook to be more proactive in acting against comments that the government considers racist.

Germany's Justice Minister Heiko Maas has warned that Facebook must comply with German law. In an interview with Reuters, Maas said:

If Facebook wants to do business in Germany, then it must abide by German laws. It doesn't matter that we, because of historical reasons, have a stricter interpretation of freedom of speech than the United States does.

He said that Holocaust denial and inciting racial hatred are crimes in Germany wherever they are found, and that he expects Facebook to be more vigilant in dealing with them on its service.

Maas has also made his views known in a letter to Facebook's public policy director in Dublin, Richard Allan. Maas said that he had received many complaints from German users of Facebook that their protests about racist posts on the service have been ignored. Maas 'suggested' meeting with Allan in Berlin on 14 September to discuss the matter.

Arstechinca commented:

Complying with these kind of local laws is hardly a new problem for US companies that do business in Europe.

One obvious solution--censoring the German-language service and preventing German Facebook users from accessing posts made on other parts of the system--is likely to be unacceptably extreme for users. On the other hand, solutions that only censor comments made on the German-language service, while leaving those posted elsewhere untouched, will make it easy for German users to circumvent the country's laws. Think global, act local, may be great as an Internet slogan, but it's really hard to put into practice when it comes to the law.

 

 

Update: Detailed Breast Censorship...

Facebook provides more detailed rules about banned content


Link Here16th March 2015
Full story: Facebook Censorship...Facebook quick to censor
Facebook has provided more details about what content can be posted on the site, and what updates will get users banned.

The new guidelines describe exactly what kind of nudity can and can't be shared, as well as including a whole section about dangerous organisations . Previously, the site only provided vague limitations about what couldn't be posted.

The updates now explicitly outlaw fully exposed buttocks and images of female breasts if they include the nipple . The bans affect CGI nudity as well, in addition to text posts that describe sexual acts in vivid detail . The site has also explicitly banned revenge porn.

The site explicitly says that it will allow pictures of breastfeeding women, or images showing breasts with post-mastectomy scarring. Pictures of paintings, sculptures and other art depicting nude people are also allowed.

Violent content is still not explicitly banned. The site tells users to warn their audience that updates include graphic violence. Facebook can add those warnings itself, but only when videos are reported.

The site will still rely on users to complain about content, and has said that it has no plans to develop technology to do so automatically.

Facebook's new rules also have special sections for criminal activity, self-injury and bullying, all of which it says it will do more to remove.

 

 

Offsite Article: Facebook Censorship...


Link Here 13th September 2014
Full story: Facebook Censorship...Facebook quick to censor
Facebook's chief censor speaks of the website's rules about nudity and terrorism

See article from recode.net

 

 

Update: When algorithms rule our news, should we be worried or relieved?...

Facebook and Google use systems to curate what appears on our screens -- but sociologists call this algorithmic censorship


Link Here 29th August 2014
Full story: Facebook Censorship...Facebook quick to censor

 

 

Update: Facebook celebrates International Homophobia Day...

Another example of cheapo Facebook censorship being reversed when faced with bad publicity


Link Here 26th May 2014
Full story: Facebook Censorship...Facebook quick to censor
Facebook's censorship policies have been thrust into the spotlight after a seemingly innocuous photo of two women kissing was removed on the grounds that it violated the community's standards on nudity and pornography.

To add insult to injury the pic had been uploaded to mark International Day Against Homophobia and Transphobia by an Italian woman, Carlotta Trevisan.

It was reported to Facebook, presumably by homophobes. Facebook's cheapo first line censorship crew jumped in to demand that Trevisan remove the image, suspending her account for three days when she failed to comply.

Commenting on the incident Trevisan, a gay rights activist, said:

How can they say a kiss, which is something so loving, is nudity or porn?

When the ludicrous censorship was escalated to more competent censorship staff, the decision was inevitably reversed. In a statement Facebook said their action had been a 'mistake' and Trevisan's account was now back up and running. A spokesperson said:

In an effort to quickly and efficiently process reports we receive, our community operations team reviews many reports every week, and as you might expect, occasionally, we make a mistake and block a piece of content we shouldn't have. We can understand how people can be frustrated with this when, as in this case, a mistake happens.

 

 

Update: Antisocial Censorship...

Continuing Facebook censorship of mastectomy pictures


Link Here 20th January 2014
Full story: Facebook Censorship...Facebook quick to censor

A cancer sufferer was accused of breaching Facebook's anti-porn rules for uploading before-and-after mastectomy photos to encourage women to check their breasts.

Tracy Morris lodged a complaint with the site after it blocked the pictures so no one else could see them. Tracy said:

I had a photoshoot done when I was first diagnosed because I wanted a lasting memory of how I had once looked. After my second mastectomy I decided to have another shoot done because I still felt beautiful -- in a different way. Losing my second breast was traumatic. It made me realise how drastic cancer is. I decided that I had to warn other women, to shock them into checking their breasts before it was too late.

She posted her photos on Facebook and received dozens of positive messages. But then Facebook sent her a message telling her it was investigating the photographs for violating its standards on nudity and pornography. She said:

I am disgusted by Facebook. If one woman checks her breasts after seeing my photos they might save a life. How can that be offensive to anyone?

Tracy tried to re-post the photos but had no success until the Sunday Mirror contacted the site to query their removal. The photos are now visible to everybody.

 

 

Update: No Brains!...

Facebook ludicrously bans poster who likes faggots


Link Here 2nd November 2013
Full story: Facebook Censorship...Facebook quick to censor
A man was banned from Facebook for being homophobic after posting a comment about his favourite childhood dish which read: I like faggots.

Robert Wilkes was referring to the traditional British meatballs, usually made from butchers' off-cuts minced together with onion and breadcrumbs, but he was blocked from the site for 12 hours after other users complained about his language, which is used as a derogatory term for gay men in the US.

Speaking to The Sun, he said:

It may have a different meaning in America but I used it in a food context. Facebook allows beheading videos, cruelty to animals, stabbing and terrible swear words -- but not this. It's political correctness gone mad.

But this was not a one-off mistake by Facebook's incompetent censors. The comment was posted in response to a report that Eileen Perrin had had her account similarly locked for 12 hours for posting a picture of the savoury dish. Eileen said:

A lot of people on the Facebook group found it very funny and started saying things like 'free Eileen'.

Facebook claimed that the word had been misinterpreted.

 

 

Updated: Carry On Beheading...

Facebook speaks of implementing warnings about 'graphic content'


Link Here 24th October 2013
Full story: Facebook Censorship...Facebook quick to censor
Facebook has announced it is working on new ways to keep users from stumbling across gruesome content such as beheading videos.

Facing sharp criticism from the likes of David Cameron, Facebook issued a statement clarifying that violent videos were only allowed if they were presented as news or held up as atrocities to be condemned.

If they were being celebrated, or the actions in them encouraged, our approach would be different. However, since some people object to graphic video of this nature, we are working to give people additional control over the content they see. This may include warning them in advance that the image they are about to see contains graphic content.

Facebook banned beheading videos in May but recently lifted the prohibition - a development flagged by the BBC.

Facebook's administrators face constant pressure from interest groups trying to impose their own forms of censorship or fighting to lift restrictions they see as oppressive. Women's rights groups want the company to ban sexy content; others have ridiculed Facebook's ban on the depiction of female breasts. Some believers have urged the site to ban what they see as blasphemous content.

Sean Gallagher of Index on Censorship said:

Films about beheadings may be deeply upsetting and offensive, but they do expose the reality of violent acts that are taking place in the world today. When trying to draw a line about what should or shouldn't be allowed, it's important to look at context, not just content.

Update: Heads Roll at Facebook

24th October 2013. See article from independent.co.uk

Facebook has removed a video of a woman being beheaded and updated its policy on graphic violence following a supposed 'public outcry'.

In a move which David Cameron described as irresponsible, Facebook had said that it would be allowing users to upload images and videos of graphic violence so that they could be condemned.

It has now backtracked on that decision, moving to take down a particular video which sparked this week's debate. Entitled only 'Challenge: Anybody can watch this video?', it seemed to show a masked man beheading a woman in Mexico. In a statement, Facebook explained refinements to its policy on violent content:

First, when we review content that is reported to us, we will take a more holistic look at the context surrounding a violent image or video.

Second, we will consider whether the person posting the content is sharing it responsibly, such as accompanying the video or image with a warning and sharing it with an age-appropriate audience.

Based on these enhanced standards, we have re-examined recent reports of graphic content and have concluded that this content improperly and irresponsibly glorifies violence. For this reason, we have removed it.

 

 

Update: Crucified by Facebook...

Facebook kindly censors artworks by Erik Ravelo


Link Here 11th September 2013
Full story: Facebook Censorship...Facebook quick to censor


Cuban artist Erik Ravelo's latest project, a personal artwork unrelated to his career as a creative director at Benetton, has managed to outrage the easily offended.

I had people writing me, threatening me, he said in a phone conversation with the Huffington Post. At first the project was fun but it got a little out of hand.

Los Intocables, which translates to The Untouchables, is what Ravelo refers to as a human installation, featuring a variety of issues plaguing children around the world. Several works feature both a child and an adult posed to demonstrate a contemporary evil, whether it be gun violence, molestation or the threat of nuclear war. Each work features a child being crucified on the back of an adult, each scene attempting to tell a different story about the loss of innocence.

The human sculptures are then photographed with the child's face blurred, resulting in images as visually jarring as they are conceptually saddening. It's art, it's communication, Ravelo explained.

Facebook has obligingly censored Ravelo's project, halting his likes at 18,000 and preventing him from uploading more images. I am used to governmental censorship from Cuba but with this, he paused, my first reaction was 'woah.'

 

 

Update: Shared Concerns...

Facebook set to require real identities for those that want to post bad taste humour


Link Here 29th May 2013
Full story: Facebook Censorship...Facebook quick to censor

Recently there has been some attention given to Facebook's content policy. The current concern, voiced by Women, Action and The Media, The Everyday Sexism Project, and the coalition they represent, has focused on content that targets women with images and content that threatens or incites gender-based violence or hate. 

In light of this recent attention, we want to take this opportunity to explain our philosophy and policies regarding controversial or harmful content, including hate speech, and to explain some of the steps we are taking to reduce the proliferation of content that could create an unsafe environment for users.

Facebook's mission has always been to make the world more open and connected. We seek to provide a platform where people can share and surface content, messages and ideas freely, while still respecting the rights of others.

To facilitate this goal, we also work hard to make our platform a safe and respectful place for sharing and connection.  This requires us to make difficult decisions and balance concerns about free expression and community respect.  We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual (e.g. bullying). 

In addition, our Statement of Rights and Responsibilities  ( www.facebook.com/terms ) prohibits "hate speech." While there is no universally accepted definition of hate speech, as a platform we define the term to mean direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease. We work hard to remove hate speech quickly, however there are instances of offensive content, including distasteful humor, that are not hate speech according to our definition. In these cases, we work to apply fair, thoughtful, and scalable policies. This approach allows us to continue defending the principles of freedom of self-expression on which Facebook is founded. We've also found that posting  insensitive or cruel content often results in many more people denouncing it than supporting it on Facebook. That being said, we realize that our defense of freedom of expression should never be interpreted as license to bully, harass, abuse or threaten violence. We are committed to working to ensure that this does not happen within the Facebook community. We believe that the steps outlined below will help us achieve this goal.

As part of doing better, we will be taking the following steps, that we will begin rolling out immediately:
  • We will complete our review and update the guidelines that our User Operations team uses to evaluate reports of violations of our Community Standards around hate speech.  To ensure that these guidelines reflect best practices, we will solicit feedback from legal experts and others, including representatives of the women's coalition and other groups that have historically faced discrimination.
  • We will update the training for the teams that review and evaluate reports of hateful speech or harmful content on Facebook. To ensure that our training is robust, we will work with legal experts and others, including members of the women's coalition to identify resources or highlight areas of particular concern for inclusion in the training. 
  • We will increase the accountability of the creators of content that does not qualify as actionable hate speech but is cruel or insensitive by insisting that the authors stand behind the content they create.  A few months ago we began testing a new requirement that the creator of any content containing cruel and insensitive humor include his or her authentic identity for the content to remain on Facebook.  As a result, if an individual decides to publicly share cruel and insensitive content, users can hold the author accountable and directly object to the content. We will continue to develop this policy based on the results so far, which indicate that it is helping create a better environment for Facebook users.
  • We will establish more formal and direct lines of communication with representatives of groups working in this area, including women's groups, to assure expedited treatment of content they believe violates our standards. We have invited representatives of the women's coalition and Everyday Sexism to join the less formal communication channels Facebook has previously established with other groups.
  • We will encourage the Anti-Defamation League's Anti-Cyberhate working group and other international working groups that we currently work with on these issues to include representatives of the women's coalition to identify how to balance considerations of free expression, to undertake research on the effect of online hate speech on the online experiences of members of groups that have historically faced discrimination in society, and to evaluate progress on our collective objectives.

 

 

Update: Violence is Fine as Long as it's Against Men...

Feminists call on Facebook to censor 'hate speech' and violence against women


Link Here 22nd May 2013
Full story: Facebook Censorship...Facebook quick to censor

We, the undersigned, are writing to demand swift, comprehensive and effective action addressing the representation of rape and domestic violence on Facebook. Specifically, we call on you, Facebook, to take three actions:

  1. Recognize speech that trivializes or glorifies violence against girls and women as hate speech and make a commitment that you will not tolerate this content.

  2. Effectively train moderators to recognize and remove gender-based hate speech.

  3. Effectively train moderators to understand how online harassment differently affects women and men, in part due to the real-world pandemic of violence against women.

To this end, we are calling on Facebook users to contact advertisers whose ads on Facebook appear next to content that targets women for violence, to ask these companies to withdraw from advertising on Facebook until you take the above actions to ban gender-based hate speech on your site.

Specifically, we are referring to groups, pages and images that explicitly condone or encourage rape or domestic violence or suggest that they are something to laugh or boast about. Pages currently appearing on Facebook include Fly Kicking Sluts in the Uterus, Kicking your Girlfriend in the Fanny because she won't make you a Sandwich, Violently Raping Your Friend Just for Laughs, Raping your Girlfriend and many, many more. Images appearing on Facebook include photographs of women beaten, bruised, tied up, drugged, and bleeding, with captions such as This bitch didn't know when to shut up and Next time don't get pregnant.

These pages and images are approved by your moderators, while you regularly remove content such as pictures of women breastfeeding, women post-mastectomy and artistic representations of women's bodies. In addition, women's political speech, involving the use of their bodies in non-sexualized ways for protest, is regularly banned as pornographic, while pornographic content - prohibited by your own guidelines - remains. It appears that Facebook considers violence against women to be less offensive than non-violent images of women's bodies, and that the only acceptable representation of women's nudity are those in which women appear as sex objects or the victims of abuse. Your common practice of allowing this content by appending a [humor] disclaimer to said content literally treats violence targeting women as a joke.

The latest global estimate from the United Nations Say No UNITE campaign is that the percentage of women and girls who have experienced violence in their lifetimes is now up to an unbearable 70 percent. In a world in which this many girls and women will be raped or beaten in their lifetimes, allowing content about raping and beating women to be shared, boasted and joked about contributes to the normalisation of domestic and sexual violence, creates an atmosphere in which perpetrators are more likely to believe they will go unpunished, and communicates to victims that they will not be taken seriously if they report.

According to a UK Home Office Survey, one in five people think it is acceptable in some circumstances for a man to hit or slap his wife or girlfriend in response to her being dressed in sexy or revealing clothes in public. And 36 percent think a woman should be held fully or partly responsible if she is sexually assaulted or raped whilst drunk. Such attitudes are shaped in part by enormously influential social platforms like Facebook, and contribute to victim blaming and the normalisation of violence against women.

Although Facebook claims not to be involved in challenging norms or censoring people's speech, you have in place procedures, terms and community guidelines that you interpret and enforce. Facebook prohibits hate speech and your moderators deal with content that is violently racist, homophobic, Islamophobic, and anti-Semitic every day. Your refusal to similarly address gender-based hate speech marginalizes girls and women, sidelines our experiences and concerns, and contributes to violence against us. Facebook is an enormous social network with more than a billion users around the world, making your site extremely influential in shaping social and cultural norms and behaviors.

Facebook's response to the many thousands of complaints and calls to address these issues has been inadequate. You have failed to make a public statement addressing the issue, respond to concerned users, or implement policies that would improve the situation. You have also acted inconsistently with regards to your policy on banning images, in many cases refusing to remove offensive rape and domestic violence pictures when reported by members of the public, but deleting them as soon as journalists mention them in articles, which sends the strong message that you are more concerned with acting on a case-by-case basis to protect your reputation than effecting systemic change and taking a clear public stance against the dangerous tolerance of rape and domestic violence.

In a world in which hundreds of thousands of women are assaulted daily and where intimate partner violence remains one of the leading causes of death for women around the world, it is not possible to sit on the fence. We call on Facebook to make the only responsible decision and take swift, clear action on this issue, to bring your policy on rape and domestic violence into line with your own moderation goals and guidelines.

Sincerely, Laura Bates, The Everyday Sexism Project Soraya Chemaly, Writer and Activist Jaclyn Friedman, Women, Action & the Media (WAM!) Angel Band Project Anne Munch Consulting, Inc. Association for Progressive Communications Women's Rights Programme Black Feminists The Body is Not An Apology Breakthrough Catharsis Productions Chicago Alliance Against Sexual Exploitation Collective Action for Safe Spaces Collective Administrators of Rapebook CounterQuo End Violence Against Women Coalition The EQUALS Coalition Fem 2.0 Feminist Peace Network The Feminist Wire FORCE: Upsetting Rape Culture A Girl's Guide to Taking Over the World Hollaback! Illinois Coalition Against Sexual Assault Jackson Katz, PhD., Co-Founder and Director, Mentors in Violence Prevention Lauren Wolfe, Director of WMC's Women Under Siege Media Equity Collaborative MissRepresentation.org No More Page 3 Object The Pixel Project Rape Victim Advocates Social Media Week SPARK Movement Stop Street Harassment Take Back the Tech! Tech LadyMafia Time To Tell The Uprising of Women in the Arab World V-Day The Voices and Faces Project The Women's Media Center Women's Networking Hub The Women's Room.

 

 

Update: Facebook Prudery Has Few Friends...

Facebook censors French protest against its censorship of nudity


Link Here 22nd May 2013
Full story: Facebook Censorship...Facebook quick to censor

Day of Nude on Facebook, a French protest aimed at challenging Facebook's unnecessary censorship of photos, was censored when Facebook took down the event page and suspended the accounts of some involved in the online demonstration.

Launched by French photographer Alain Bachellier, the Facebook event asked its 8,000-plus participants to publish a nude picture on Monday, Le Huffington Post reports. While some chose to post a photo of their own creation, most instead shared copies of famous nude works of art.

Coinciding with the final day of the European Festival of Nude Photography, the Facebook event sought to fight against the ridiculous censorship that flouts the basic rules of our freedom of expression in the name of Puritanism or the moral rules of another age.

A spokesman for Facebook France told Agence France-Presse that the page was closed in the early afternoon.

Facebook authorizes users to mobilize around common causes, including cultural ones, but it can't authorize the cause itself to encourage users to disrespect their conditions of use.

 

 

Update: Elbowed Out...

Facebook shown to be unable to tell their arse from their elbows


Link Here 27th November 2012
Full story: Facebook Censorship...Facebook quick to censor

The Facebook Page: Dedicated to Theories of the deep understanding of things describes itself as:

Random pictures - hopefully there'll be something that someone finds offensive.

And of course Facebook just had to rise to the bait re the bathtub picture.

They deleted the picture on the grounds that it violates Facebook's Statement of Rights and Responsibilities.

 

 

Offsite Article: Facebook Copyright Takedown: Justice or Injustice?...


Link Here 9th October 2012
Full story: Facebook Censorship...Facebook quick to censor
An alleged copyright infringer has his fan page removed by Facebook. But the story highlights the lack of transparency in Facebook's policy regarding its handling of infringement claims.

See article from pdnpulse.com

 

 

Update: Nipplegate...

Facebook take down New Yorker cartoon over a couple of nipple dots in an Adam and Eve cartoon


Link Here 11th September 2012
Full story: Facebook Censorship...Facebook quick to censor
The New Yorker has a Facebook page for cartoons.

It got temporarily banned from Facebook for supposedly violating their community standards on Nudity and Sex, by posting a Mick Stevens cartoon of Adam and Eve.

The artist redrew the cartoon with a clothed Adam and Eve, but somehow it didn't quite work. Still, at least it kept the nutters of Facebook happy.

 

 

Update: Facebook Acts as Judge, Jury and Executioner...

Facebook condemned by Article 19 for rubbish censorship procedures that were exploited to help hide allegations of torture in Syria


Link Here 8th July 2012
Full story: Facebook Censorship...Facebook quick to censor

Facebook has apologised after it incompetently deleted a free speech group's post on human rights abuses in Syria. The website removed a status update by Article 19, which campaigns for freedom of speech, that linked to a Human Rights Watch report detailing alleged torture in the Arab country.

Dr Agnes Callamard, the executive director of Article 19, accused Facebook of acting like judge, jury and executioner in the way it removes material from the website.

Facebook told the Guardian that the post was mistakenly removed after being reported as containing offensive content. A spokesman said:

The link was reported to Facebook. We assess such reports manually and because of the high volume, occasionally content that shouldn't be taken down is removed by mistake. We're sorry about this. The organisation concerned should try posting the link again.

Callamard was somewhat underwhelmed by Facebook's censorship procedure. She said:

The deletion shows the looming threat of private censorship. We commend Facebook for creating tools to report abuse, but if your post was wrongly deleted for any reason, there is no way to appeal. Facebook don't notify you before deleting a comment and they don't tell you why after they have. Facebook act like judge, jury and executioner.

Facebook is now widely recognised as a quasi-public space and as such has responsibilities when it comes to respecting free speech. They can't just delete content without some kind of transparent and accountable system. International law says that censorship is only acceptable when it is clearly prescribed, is for a legitimate aim -- such as for public health -- and is necessary in a democracy.

 

 

Update: A Network of Censors...

Facebook speaks of its censorship procedures


Link Here 24th June 2012
Full story: Facebook Censorship...Facebook quick to censor

Facebook have revealed some of their procedures used for responding to complaints about user posted content.

Facebook employs four teams, based in Menlo Park, Austin, Dublin and Hyderabad. The company explained:

Reports of inappropriate content, which users can submit with just a couple of clicks, are directed to one of four support teams.

An Abusive Content Team handles spam and sexually explicit content. Meanwhile, a Safety Team handles threats of vandalism, graphic violence, credible threats of violence and illegal drug use. A Hate and Harassment Team handles, well, reports of hate speech and harassment. The team that handles hacked and imposter accounts is called the Access Team.

If found to be in violation of Facebook's policies, Statement of Rights and Responsibilities or Community Standards, the content is removed and its publisher warned. Facebook's support teams may also block users who post inappropriate content or ban them from specific features. A separate team handles appeals.

Sometimes content on Facebook violates not just the company's policies, but the law. Facebook says it will share reports with law enforcement when:

we have a good faith belief it is necessary to prevent fraud or other illegal activity, to prevent imminent bodily harm, or to protect ourselves and you from people violating our Statement of Rights and Responsibilities.

 

 

Offsite Article: Climate of Censorship...


Link Here 15th June 2012
Full story: Facebook Censorship...Facebook quick to censor
Australian political cartoonist gets suspended from Facebook for his nude depiction of PM Julia Gillard wearing a strap-on

See article from lpickering.net

 

 

Update: Facebook Automatons...

Breast cancer recovery pictures offend Facebook's cheapo censors


Link Here 27th May 2012
Full story: Facebook Censorship...Facebook quick to censor

Joanne Jackson had a photo session to commemorate winning her battle with breast cancer after having a mastectomy, and posted the pictures on the social networking site.

But Facebook removed some of the images, which revealed her operation scar, for being offensive.

Joanne has been warned that further abusive breaches will result in her account being shut down.

Angry Joanne said:

There is nothing pornographic or explicit about these pictures. That was not the idea at all. I took breast cancer and the mastectomy in my stride and decided it wasn't going to stop me living my life. It wasn't going to define who I was, and it didn't make me any less attractive as a woman.

She has no idea who reported the pictures but the warning came out of the blue and lacked any hint of sensitivity. The message said:

Content you shared on Facebook has been removed because it violated Facebook's Statement of Rights and Responsibilities. Shares that contain nudity, pornography and graphic sexual content are not permitted on Facebook. This serves as a warning. Additional violations may result in the termination of your account.

A Facebook spokesman confirmed that several images had been removed because they breached terms and conditions. He shamefully spouted that Facebook welcomed mastectomy pictures ...BUT... said that some images may breach regulations.

 

7th February 2012

Update: Stigmatising Breastfeeding...

Worldwide protests against Facebook's censorship of breastfeeding pictures

Protesters assembled at more than 30 locations worldwide at 10am yesterday to oppose Facebook's policy regarding the removal of images of breastfeeding from the social networking website.

Irish protesters stood their ground for two hours to highlight the fact that Facebook is removing breastfeeding photos. Moreover, parents argued that Facebook's censorship reflects a disturbing trend stigmatising breastfeeding in public.

Chris Finn, a representative from Friends of Breastfeeding, an advocacy group in Ireland, said:

Some might ask why would a mother want to post a picture of herself breastfeeding on Facebook. And the only question I can ask you back is, 'Why wouldn't she'?

We're here to stand up and say that our nation's attitude towards breastfeeding needs to change. Why? Because breastfeeding is just the biologically normal way to feed a baby, and the only way to make a change is if we see breastfeeding.

Facebook said that its terms prohibit nudity. Therefore, images containing a fully exposed breast are deemed to violate those terms. A statement said:

These policies are based on the same standards that apply to television and print media. We agree that breastfeeding is natural and we are very glad to know that it is important for mothers, including the many mothers who work at Facebook, to share their experience with others on the site.

 

10th January 2012

Update: Infant Censors...

Facebook again caught making crap censorship decisions about breast feeding pictures

Facebook has again apologised for crap and arbitrary censorship after it deleted a page showing two little girls pretending to breastfeed their dolls.

Express Yourself Mums, an NHS-backed breastfeeding website, discovered its group had been removed for a supposed policy violation.

The previous day co-owner Sharon Blackstone had posted a picture of her seven-year-old daughter Maya playing with her doll. She said:

After giving her doll a naming ceremony, Maya told me that her baby needed to be fed. As she's only ever seen me breastfeed her little sister, it was the most natural thing in the world for her to pretend to do it the same way.

Like many mums, I got out my phone and took a picture because I thought it was a sweet moment. I shared it with the 600 other mothers on our Facebook page because I thought it was something they'd like to see. After all, don't millions of people post cute pictures of their kids on Facebook?

A few minutes later, my business partner Carly Silver also posted a similar shot of her seven-year-old daughter Izzy cradling her baby doll in her arms.

Last Friday afternoon Express Yourself Mums discovered the page (with 600 fans) had been removed. The reason given was a vague list of restrictions including nudity or obscenity.

Under pressure from more than 400 women who formed a campaigning group to demand the page's reinstatement, Facebook has now apologised for the error and reinstated the page. Facebook says any complaint is reviewed by its operations team, which then makes the decision about whether to remove the images or close down the group. A Facebook spokesman said: The group was removed in error. It will be reinstated, and we apologise for any inconvenience caused.

[Presumably the Facebook censorship system is as cheap as possible and gives low grade 'operators' minimal time to make decisions which turn out to be arbitrary. I guess these are re-considered by more senior censors if a fuss is kicked up. One has to wonder how many people and businesses suffer from equally crap decisions but cannot organise sufficient press coverage to get Facebook to reconsider].

 

4th December 2011

Update: Effin' Censors...

Facebook bans the Irish village of Effin claiming that it is an offensive word

A Limerick woman is leading the battle to have her home village of Effin recognized by social network site Facebook.

Ann Marie Kennedy is taking on the giant corporation which has deemed the village name of Effin to be offensive.

She has also failed in an attempt to launch a Facebook campaign based on a Please get my hometown Effin recognised page on the website. It came back with an error message saying 'offensive,' Kennedy told the Irish Independent.

I would like to be able to put Effin on my profile page and so would many other Effin people around the world to proudly say that they are from Effin, Co Limerick, but it won't recognize that. It keeps coming up as Effingham, Illinois; Effingham, New Hampshire; and it gives suggestions of other places.

Kennedy has vowed to carry on her battle until Effin gains official status on Facebook.


 

9th November 2011

Updated: Faced Down...

Facebook removes pages of bad taste jokes

Facebook have removed pages dedicated to bad taste jokes about rape and sexual violence.

Change.org has been campaigning against the pages for 2 months, and raised a petition of 186,000 signatures against the pages. In addition they ran a twitter campaign and a Facebook page of their own.

One of the target pages, now removed, was called: You know she's playing hard to get when... and featured wisecracks such as:

  •  Don't You Hate it When You Punch a Slut in the Mouth and They Suck It

After removing the pages, Facebook's rep told AllFacebook that they take things seriously, and reminded everyone that reporting a Page is how to get offending content reviewed and also said that they've made the social reporting tool totally much more awesome because they care and stuff.

Update: Tagged as Humour

9th November 2011. See article from bbc.co.uk

Facebook has removed several rape joke pages from its social network. However, controversial postings may remain if administrators add a tag stating they are humorous or satire.

Facebook told the BBC:

We take reports of questionable and offensive content very seriously. However, we also want Facebook to be a place where people can openly discuss issues and express their views, while respecting the rights and feelings of others.

Groups or pages that express an opinion on a state, institution, or set of beliefs - even if that opinion is outrageous or offensive to some - do not by themselves violate our policies. These online discussions are a reflection of those happening offline, where conversations happen freely.

The statement's formal language contrasts with the firm's previous comments. In August it said: Just as telling a rude joke won't get you thrown out of your local pub, it won't get you thrown off Facebook.

 

4th November 2011

Update: Breast Cancer Awareness Body Painting Project...

Kindly publicised by Facebook's censorship department

Ellen Gondola had breast cancer. One day, years later, she stood topless in an artist's studio and allowed her chest to be covered in paint, her cancer scars blanketed with bamboo and butterflies. She'd never felt so beautiful.

But Facebook called it pornography, inappropriate nudity, a violation of the terms of use. The social networking giant took her photo down, and the encouraging comments beneath it.

Twenty-four other breast cancer survivors have posed topless like she did. Most of their images have been taken down, too, creator and photographer Michael Colanero said, citing puritanical resistance from Facebook users who flagged the images as inappropriate.

Gondola had joined a cause, the Fort Lauderdale-based Breast Cancer Awareness Body Painting Project, which has a group page on Facebook. Now she's part of a second cause, the Facebook No-Censor Petition.

 

8th October 2011

Update: Nothing's Shocking...

Except to Facebook who censor Jane's Addiction album cover art

The band Jane's Addiction posted the cover for their 1988 album Nothing's Shocking on their official Facebook page, along with a few other classic images from their history. But Facebook apparently took offence to the Nothing's Shocking cover, which features two naked ladies, and removed it.

The band quickly reposted the image, albeit an edited version with Facebook logos covering the girls' modesties, along with a post that said:

In 1988, nine of the 11 leading record chains refused to carry Nothing's Shocking because of its cover. (In 2011, Facebook joined them.)

 

9th May 2011

Update: Feeding Hysteria...

More Facebook nonsense about banning breast feeding pictures

Breast-feeding advocates are angry that Facebook has once again removed photos of mothers nursing their babies.

In the latest ludicrous censorship, last month Facebook removed breast-feeding images from Earth Mama Angel Baby's Facebook page.

Babies get hungry, explained a post on Earth Mama's website. And breasts feed babies. We don't consider either photo obscene. Each shows a human baby having lunch.

Peggy O'Mara, editor of Mothering magazine, decried the move in a lengthy blog post that called for readers to post pictures of themselves nursing on their personal Facebook pages if they agree with her that breast-feeding is normal and not obscene.

 

7th May 2011

Update: Censored Lest Tongues Wag...

Facebook ban kissing image for promotion of the movie, Attenberg

A Swedish film distributor's attempt to use an image of two women kissing in a Facebook advertising campaign has been rejected by the ever censorial website.

Sweden-based TriArt Film was hoping to use Facebook to publicise the Greek film Attenberg , currently showing in Swedish cinemas.

Our ad for Attenberg, using the poster image of two women touching tongues, has been DISAPPROVED, TriArt said in a statement on its own Facebook page. TriArt went on to suggest that Facebook appears to have a double standard when it comes to who can be seen locking lips in advertisements running on the site, explaining that their ad for the film Tre, featuring a male-female couple engaged in a deep kiss, was approved.

We're confused, TriArt CEO Eva Esseen Arndorff said in a statement.

 

19th April 2011

Kissed Better...

Facebook censors restore picture of gay kiss after protest

Facebook has apologised for removing a photo of a gay kiss taken from the UK soap EastEnders. It was removed for being sexually suggestive and supposedly abusive.

It was used by US writer Niall O'Conghaile to accompany a blog post about the kiss-in held to support a gay couple who were kicked out of a pub.

Facebook said in a statement: The photo in question does not violate our Statement of Rights and Responsibilities and was removed in error. We apologise for the inconvenience.

Hundreds of people added the image to their profiles to complain about the removal.

 

6th January 2011

Update: The Leaky B@@b...

Facebook again get offended by breast feeding pictures

It's been a hectic start to the year for mom Jessica Martin-Weber, founder and editor of the breastfeeding support group The Leaky B@@b.

The group, which offers a space on Facebook for around 5,000 breastfeeding moms to ask questions and offer advice and support, was deleted over the weekend. Facebook claimed that it had violated their Terms of Service, insinuating that breastfeeding photos posted on the group's page were obscene.

In response to the deletion, breastfeeding supporters, both former members of the group and others, jumped into action, creating two pages on Facebook, Bring Back the Leaky Boob and TLB Support, which together gained more than 10,000 fans.

Martin-Weber released a statement urging Facebook not only to restore the group's page, but also to stop treating breastfeeding, and any other material and photos related to breast health, as obscene.

Shortly thereafter, Facebook reinstated the group's page, though only after the 'offending' photos and pages had been deleted by Facebook, again vaguely claiming that they were in violation of the company's Terms of Service.

Shortly afterwards, Facebook once again deleted The Leaky B@@b, as well as the Bring Back the Leaky Boob group that had formed in response to its deletion.

But it later restored The Leaky B@@b once more, and the page is currently still available.

 

28th November 2010

Update: Scarred by Facebook Censorship...

British woman allowed to post images of scars to raise breast cancer awareness

Social networking site Facebook is to allow photographs of a woman who had surgery for breast cancer after it removed them from her profile.

The pictures of Anna Antell, from Oxfordshire, were initially deemed to be nudity and taken down.

Facebook now says it supports her right to share her experience and the images of her post-op scars can be published.

Ms Antell, who said it was brilliant news, will again upload the images, which she hopes will raise awareness. One of the pictures which was removed depicts Ms Antell covering one breast while showing the scar tissue of the removed breast.

She said: I think it is really good they have realised that it is a valid thing; me showing a bare shoulder and a scar is not offensive.

Update: Acquitted

14th March 2011. See article from bbc.co.uk

A breast cancer survivor's Facebook page has been blocked after she published a photo of her reconstructed breasts following her operation.

Melissa Tullett put the picture on the website after she had a double mastectomy. The social networking site blocked her page and removed the image because it said it broke its rules on nudity. Ms Tullett said she had only intended to offer encouragement to fellow breast cancer sufferers.

It was to show other women that after such an ordeal you can come out of it with your dignity and your womanhood again, and that it's not all frightening. They [Facebook] just told me that I'd uploaded a photo that violated their terms of use and that they were deleting the photo. But they didn't actually tell me they were disabling my account.

Ms Tullett's page has since been reactivated, but she has been told not to repost the picture.

 

11th August 2010

Update: No Liberty at Facebook...

Facebook takes down topless Statue of Liberty picture

GoTopLess.org is calling for a public protest after an image at the organization's Facebook page depicting the Statue of Liberty with bare breasts was removed by Facebook staff. The disputed image was a photo of a painting by GoTopless member Donna Grabow.

The incident began when GoTopLess president Nadine Gary received an e-mail from Facebook staff on July 18 explaining the reason for the photo's removal. It read, in part:

You uploaded a picture to 'NEW YORK National Go Topless Day: A March for Women's Equal Rights! AUG 22' that violates our Terms of Use, and this picture has been removed. Facebook does not allow photos that attack an individual or group, or that contain nudity, drug use, violence, or other violations of the Terms of Use.

Brigitte Boisselier said:

I'm asking all my friends on Facebook and those who believe in equal rights for men and women to post the picture that was taken down. Some frustrated individuals can't see a nipple without freaking out or feeling offended, but we've already had enough discrimination against the female body. I'm asking all women on Facebook to stand for equal topless rights by posting this photo to their own pages. And I'm also asking all men who can appreciate a female body without feeling guilty to do the same.

The female chest is beautiful and children shouldn't be told it's sinful to look at it. That sort of repression causes frustration and guilt that they will experience as adults, which is such a ridiculous waste. Bare female breasts are seen on all European beaches at this time of year, but as far as I know, incidence of rape and other sexually violent incidents is lower in Europe than in America.

Artist Grabow agrees that Facebook's action was discriminatory and wrong.

Censorship of this painting denies freedom of speech and expression and reflects American prudishness, she said. What's funny is that the Statue of Liberty was a gift from the French government, and all the French people I know smile when they see this feminized painting. In fact, Europeans just laugh when they learn that Facebook is censoring innocent images like this one. After all, images of nude statues are displayed everywhere else without protest, including in school books.

 

16th April 2010

Update: The Mean Face of Facebook...

Social networking website takes issue with breastfeeding

What were supposed to be images celebrating pregnancy and motherhood, created by a Courtenay artist, are now considered hateful, threatening or obscene by one of the largest social networking sites in the world.

Mother and artist Kate Hansen recently created a series of portraits called The Madonna Child Project: images which feature different mothers cuddling their babies while breastfeeding and bottle feeding.

Hansen posted some of the images in a figurative art group on Facebook and discovered the portraits were being deleted around late March.

Hansen noted she initially posted images in groups of three, and all images got deleted. She inquired with the Facebook group administrator, who assured her she had no reason to delete the images. Hansen continued to repost the images, and soon after, found they were being continually deleted from the site.

Last week, she received an e-mail from The Facebook Team noting: you posted an item that violated our terms of use, and this item has been removed. Among other things, content that is hateful, threatening or obscene is not allowed, nor is content that attacks an individual or group. Continued misuse of Facebook's features could result in your account being disabled.

During a recent interview with CBC Radio, which contacted a Facebook representative, Hansen said the site's representative claimed that Facebook supposedly does not delete breastfeeding images.

She said the entire incident has made her question the overall topic of breastfeeding in society, and the public perception of the act. At least it's gotten people talking about it, noted Hansen: I will continue to post images and risk my account being deleted; the risk is worth it, she added.

 

15th February 2010

Update: Pig's Nipples at Facebook...

Hey Facebook, Breastfeeding is Not Obscene!

Facebook routinely deletes from its site photos of breastfeeding. It has labelled them obscene and pornographic. It says that it has rules for what is allowed on its site, but its careless actions show it does not.

Facebook's clueless manner of censoring is not just pointless but harmful. There are other ways to deal with unwanted material than by immature, arrogant, and foolish removal of what one doesn't like, especially when photos of breastfeeding are claimed to harm children, a claim Facebook has made for years.

Facebook recently removed yet another such photo. Could Facebook have a bad case of nipplephobia?

Based on article from theotherpaper.com

A charge led by Facebook administrators to delete pictures of breast-feeding moms from its pages may land the social media site in the middle of a class action lawsuit.

There have been rumblings since last December. A lot of people are really eager to call Facebook to task and we're considering whether a class action lawsuit will be viable, said Stephanie Muir, a Canadian administrator for the Facebook group, Hey Facebook, Breastfeeding is Not Obscene! We want to hit them in the pocketbook so they'll actually pay attention. Facebook is getting away with something they would not be able to get away with outside the virtual world. It's basically discrimination.

Facebook fired a warning shot recently to show it's serious about taking down the group's page by deleting Muir's personal page as well.

The group is still there. And I have created a different account for myself, said Muir. But everything I previously had is gone, including every single post I've ever made.

Muir said Facebook initially told the group they were in copyright violation and that's why they were going to be removed: One of our administrators in Scotland e-mailed an inquiry and the response said, 'We're sorry, our message was in error. It's not a copyright violation; it's for nudity and explicit sexual content that your group has been removed.' They said in their statement it wasn't the breast-feeding, it was the nipples that were the problem. They're very inconsistent, which is a great source of irritation. They have changed their story a number of times.

We're going to continue to keep a strong presence. It's still a mystery to me how anyone could feel so strongly to interfere with a community of a quarter of a million people. You know, you have options; if you see a breast-feeding woman (or her picture), you can either harass her or you can use your neck and swivel your head in the other direction. We ultimately just want them to leave breast-feeding pictures alone.


