I recently completed a book defending free speech. Emerald Press scheduled it for publication but then decided not to proceed. Here's what it said about the book in Emerald's September 2019 catalogue:
In Defense of Free Speech: The University as Censor
Author: James R. Flynn, University of Otago, New Zealand
Synopsis: The good university is one that teaches students the intellectual skills they need to be intelligently critical--of
their own beliefs and of the narratives presented by politicians and the media. Freedom to debate is essential to the development of critical thought, but on university campuses today free speech is restricted for fear of causing offence. In Defense of
Free Speech surveys the underlying factors that circumscribe the ideas tolerated in our institutions of learning. James Flynn critically examines the way universities censor their teaching, how student activism tends to censor the opposing side and how
academics censor themselves, and suggests that few, if any, universities can truly be seen as good. In an age marred by fake news and social and political polarization, In Defense of Free Speech makes an impassioned argument for a return to critical
thought.
I was notified of Emerald's decision not to proceed by Emerald's publishing director, in an email on 10th June:
I am contacting you in regard to your manuscript In Defense
of Free Speech: The University as Censor. Emerald believes that its publication, in particular in the United Kingdom, would raise serious concerns. By the nature of its subject matter, the work addresses sensitive topics of race, religion, and gender.
The challenging manner in which you handle these topics as author, particularly at the beginning of the work, whilst no doubt editorially powerful, increases the sensitivity and the risk of reaction and legal challenge. As a result, we have taken external
legal advice on the contents of the manuscript and summarize our concerns below.
There are two main causes of concern for Emerald. Firstly, the work could be seen to incite racial hatred and stir up religious hatred under United
Kingdom law. Clearly you have no intention of promoting racism but intent can be irrelevant. For example, one test is merely whether it is likely that racial hatred could be stirred up as a result of the work. This is a particular difficulty given modern
means of digital media expression. The potential for circulation of the more controversial passages of the manuscript online, without the wider intellectual context of the work as a whole and to a very broad audience--in a manner beyond our
control--represents a material legal risk for Emerald.
Secondly, there are many instances in the manuscript where the actions, conversations and behavior of identifiable individuals at specific named colleges are discussed in
detail and at length in relation to controversial events. Given the sensitivity of the issues involved, there is both the potential for serious harm to Emerald's reputation and the significant possibility of legal action. Substantial changes to the
content and nature of the manuscript would need to be made, or Emerald would need to accept a high level of risk both reputational and legal. The practical costs and difficulty of managing any reputational or legal problems that did arise are of further
concern to Emerald.
Downton Abbey is a 2019 UK period drama by Michael Engler. Starring Michelle Dockery, Maggie Smith and Tuppence Middleton.
Adapted from the hit TV series Downton Abbey, it tells
the story of the Crawley family, wealthy owners of a large estate in the English countryside in the early 20th century.
The Downton Abbey movie may seem family-friendly, but gay references have caused a stir for the Irish film
censor. The Irish Film Classification Office (IFCO) revealed that the film was initially given a 12A rating due to several offensive references to the sexuality of a gay butler, Thomas Barrow. A subplot in which Barrow visits a gay club in York sees the
bar raided by police who are verbally abusive towards the gay men, describing them as 'perverts' and 'queers'.
In fact filmmakers had consulted a historical adviser who said that Barrow's experiences are an accurate depiction of gay life in
interwar Britain. The plot was also praised by LGBT+ activists.
So the movie's distributor Universal were no doubt confident in appealing IFCO's decision, seeking a PG rating. The appeal was duly won and the film has now been re-rated to PG
for brief homophobic reference.
The appeals board felt an audience familiar with its characters and setting would have been aware of the storyline about a gay character, so they changed the rating to a PG.
Ger Connolly, the director of
IFCO, told The Times that the 12A classification had been a margin call. He said:
My decision came down to the use of words like 'pervert' in the context of a character's sexuality. For me, that moved it into the 12A
rating.
Connolly was clearly a lone voice in deciding on a 12A rating and received no support from the UK, where the film was rated PG uncut by the BBFC for mild threat, language.
Three families of those killed while watching a Batman film in 2012 have written to Warner Bros complaining about the new Joker film and urging the studio to join action against gun violence.
Twelve people died in a cinema showing The Dark Knight
Rises in Colorado. They included Jessica Ghawi, 24, whose mother Sandy Phillips told BBC News she was horrified by the Joker trailers. Speaking to BBC News, Phillips said:
When I first saw the trailers of the movie, I
was absolutely horrified. And then when I dug a little deeper and found out that it had such unnecessary violence in the movie, it just chilled me to my bones. It just makes me angry that a major motion picture company isn't taking responsibility and
doesn't have the concern of the public at all.
A letter from the three families asked the studio to lobby for gun reform, help fund survivor funds and gun violence intervention schemes, and end political contributions to candidates who
take money from the National Rifle Association.
Warner Bros responded that the latest film Joker was not an endorsement of real-world violence and said that the studio has a long history of donating to victims of violence, including those of the 2012
cinema shooting in Aurora, Colorado. It added:
Make no mistake: neither the fictional character Joker, nor the film, is an endorsement of real-world violence of any kind. It is not the intention of the film, the
filmmakers or the studio to hold this character up as a hero.
The EU recently enacted an internet censorship law giving websites the right to demand fees for linking to them. It was hoped that Google in particular would end up paying for links to European news providers struggling for revenue in the modern internet
environment.
But it seems that Google may have other ideas. Google is changing the way it displays news stories produced by European publishers in France as the new copyright rules go into effect. Rather than paying publishers to display snippets of their
news stories, as was intended, Google will show only headlines from articles.
Google says that it doesn't pay for news content as a matter of policy. The company shut down its Google News in Spain after a law passed in 2014 would have
mandated such payments. Google are sticking to their guns. The company said:
We believe that Search should operate on the basis of relevance and quality, not commercial relationships. That's why we don't accept payment from anyone
to be included in organic search results and we don't pay for the links or preview content included in search results.
This move will disappoint publishers who had hoped for additional revenue as a result of the new copyright law that goes into effect
in France next month. The country is the first to implement the European Union copyright rules passed earlier this year.
But perhaps there is worse to come for European companies. It could be that in a page of Google news search results, US news
services will have embellished entries with snippets and thumbnail images whilst the European equivalents will just have boring text links. And guess which entries people will probably click on.
Maybe it won't be long before European companies set
their fees at zero for using their snippets and images.
Doctors writing for BMJ Open have issued a strong criticism of Ofcom for not implementing a total ban on junk food advertising before the 9pm watershed. Ofcom had opted for a more nuanced restriction, only targeting pre-watershed adverts in
programmes that appeal to children.
The doctors have now gone further and suggested that public health officials should take over Ofcom's TV censorship powers related to health.
In a new study published in the journal BMJ Open, the campaigners
said that the industry has unduly influenced the regulations for TV advertising of unhealthy foods to children. They argued that since Ofcom's duty is to protect commercial broadcast interests, it should not be responsible for a public health
issue.
Instead, the doctors argue that public health doctors should be the ones to decide on this matter, noting that they are more credible in making decisions regarding health. The researchers based their conclusions on a thematic analysis of
responses from stakeholders to the public consultation on the proposals, which became effective in 2009. The proposals aimed to tighten the rules on TV advertising of foods to children and teens.
The doctors say that Ofcom may have prioritized
commercial considerations over the health of children. This has led to questions about a conflict of interest at the regulator, and whether protecting broadcasting interests should disqualify it from leading on a public health
regulation.
They added that Ofcom should have banned adverts for high-sugar, high-fat, and salty food before 9 p.m., when children are still watching programmes such as evening shows with their parents. Although Ofcom did ban junk food advertisements during
shows watched by children aged 4 to 15 years old, it did so only after two years, and only after industry representatives told it to do so.
The EU's top court has ruled that Google does not have to apply the right to be forgotten globally.
It means the firm only needs to remove links from its search results in the EU, and not elsewhere.
The ruling stems from a dispute between Google
and the French privacy censor CNIL. In 2015 CNIL ordered Google to globally remove search result listings to pages containing banned information about a person.
The following year, Google introduced a geoblocking feature that prevents European users
from being able to see delisted links. But it resisted censoring search results for people in other parts of the world. And Google challenged a 100,000 euro fine that CNIL had tried to impose.
The right to be forgotten, officially known as the
right to erasure, gives EU citizens the power to request data about them be deleted. Members of the public can make a request to any organisation verbally or in writing and the recipient has one month to respond. They then have a range of considerations
to weigh up to decide whether they are compelled to comply or not.
Google had argued that internet censorship rules should not be extended to external jurisdictions lest other countries do the same, eg China would very much like to demand that the
whole world forgets the Tiananmen Square massacre.
The court also issued a related second ruling, which said that links do not automatically have to be removed just because they contain information about a person's sex life or a criminal
conviction. Instead, it ruled that such listings could be kept where strictly necessary for people's freedom of information rights to be preserved. However, it indicated a high threshold should be applied and that such results should fall down search
result listings over time.
Notably, the ECJ ruling said that delistings must be accompanied by measures which effectively prevent or, at the very least, seriously discourage an internet user from being able to access the results from one of
Google's non-EU sites. It will be for the national court to ascertain whether the measures put in place by Google Inc meet those requirements.
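By way of illustration, here is a minimal sketch of the kind of region-based delisting described above. It is only a sketch under assumed names: the country list, the function and the result format are hypothetical and are not taken from Google's actual systems.

```python
# Hypothetical sketch of geolocation-based delisting (illustration only).
# Assumes each search result is a dict with a "url" key, that the user's
# country has been inferred from their IP address, and that a set of
# delisted URLs is maintained for EU users. All names are made up.

EU_COUNTRIES = {"FR", "DE", "IE", "ES", "IT", "NL"}  # abbreviated list

def filter_results(results, user_country, delisted_urls):
    """Drop delisted links for users located in the EU; show full results elsewhere."""
    if user_country not in EU_COUNTRIES:
        return results
    return [r for r in results if r["url"] not in delisted_urls]

if __name__ == "__main__":
    results = [
        {"url": "https://example.com/delisted-story"},
        {"url": "https://example.net/other-story"},
    ]
    delisted = {"https://example.com/delisted-story"}
    # A French user sees only the second result; a US user sees both.
    print(filter_results(results, "FR", delisted))
    print(filter_results(results, "US", delisted))
```

As the ruling notes, it would then be for national courts to judge whether geolocation measures of this sort 'effectively prevent or seriously discourage' access to delisted results via Google's non-EU sites.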
Nick Clegg, the Facebook VP of Global Affairs and Communications, writes in a blog post:
Fact-Checking Political Speech
We rely on third-party fact-checkers to help reduce the spread of false news and
other types of viral misinformation, like memes or manipulated photos and videos. We don't believe, however, that it's an appropriate role for us to referee political debates and prevent a politician's speech from reaching its audience and being subject
to public debate and scrutiny. That's why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we
will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related
information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.
Newsworthiness Exemption
Facebook has had a newsworthiness exemption since 2016 . This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads -- if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.
When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including
country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the
country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these
evaluations will be holistic and comprehensive in nature, and will account for international human rights standards.
An outdoor poster ad for Not Just Cooling, an air conditioning company, seen in July 2019, featured a woman in denim shorts, a white T-shirt and sunglasses. Large text adjacent to the image stated, YOUR WIFE IS HOT! alongside the claim Better get the air
conditioning fixed.
Twenty-five complainants, who claimed the ad was sexist and objectified women, challenged whether the ad was offensive and irresponsible.
Not Just Cooling Ltd said that the tagline, YOUR
WIFE IS HOT was relevant to the nature of their business. They did not believe that the tagline was inappropriate and added that the woman was tastefully dressed.
ASA Assessment: Complaints upheld
The ASA
noted that the ad depicted a woman wearing summer clothing alongside the tagline, YOUR WIFE IS HOT. We acknowledged that the choice of image and tagline was broadly relevant to the advertised product, which would predominantly be used in summer when an
individual was too hot, and therefore in need of air conditioning. Although we acknowledged that the use of the woman and tagline would be understood as a double entendre on the woman being both literally hot and also attractive, we considered that it
was likely to be viewed as demeaning towards women. While some consumers might appreciate that the use of the double entendre was comical in tone, we considered that the ad had the effect of objectifying women by commenting on a woman's physical
appearance to draw attention to the ad. In light of those factors, we concluded that the ad was likely to cause serious offence to some consumers and was socially irresponsible.
The ad must not appear again in its current form. We
told Not Just Cooling Ltd to ensure their advertising was socially responsible and did not cause serious or widespread offence by objectifying women.
An oppressive censorship bill has been tabled in the Kenyan parliament targeting social media group admins and bloggers.
MP Malulu Injendi has tabled The Kenya Information and Communication (Amendment) Bill 2019 which specifically targets
group admins, who will be used to police the kind of content shared in their groups.
The Bill defines social media platforms to include online publishing and discussion, media sharing, blogging, social networking, document and data sharing
repositories, social media applications, social bookmarking and widgets. The bill reads:
The new part will introduce new sections to the Act on licensing of social media platforms, sharing of information by a licensed person, creates obligations
to social media users, registration of bloggers and seeks to give responsibility to the Kenyan Communications Authority (CA) to develop a bloggers' code of conduct in consultation with bloggers.
The Communications Authority will
maintain a registry of all bloggers and develop censorship rules for bloggers.
The proposed bill means that all group admins on any social platform will be required to get authorisation from the CA before they can open such groups. The bill also
states that admins should monitor content shared in their groups and remove any member that posts inappropriate content. The admins are also required to ensure all their members are over 18 years old. Group admins will also be required to have a physical
address and keep a record of the group members.
Joker is a 2019 USA crime thriller by Todd Phillips. Starring Robert De Niro, Joaquin Phoenix and Marc Maron.
Joker centers on an origin story for the iconic arch
nemesis and is an original, standalone story not seen before on the big screen. Todd Phillips' exploration of Arthur Fleck (Joaquin Phoenix), a man disregarded by society, is not only a gritty character study, but also a broader cautionary tale.
After a fair amount of speculation about the BBFC rating, brought on in part by the belated publication of the rating, the BBFC have announced that the cinema release will be 15 uncut for strong bloody violence, language.
For comparison
the US film censors at the MPAA rated the film R for strong bloody violence, disturbing behavior, language and brief sexual images.
Prior speculation was that maybe the MPAA R rated film would be BBFC 18 rated, or maybe that the delay was caused
by Warner Brothers commissioning cuts for a 15 rating.
It seems strange that the BBFC is OK with cinema films in particular getting rated only a few days before release. Surely many people will therefore be making decisions, or even booking
seats, before they are made aware of the rating. It probably leads to a few cinema goers watching films that they, or maybe their parents, may have felt were unsuitable had they known.
John Wick: Chapter 3 - Parabellum is a 2019 USA action crime thriller by Chad Stahelski. Starring Keanu Reeves, Halle Berry and Ian McShane.
In this third installment of the adrenaline-fueled action franchise, skilled assassin John Wick (Keanu Reeves) returns with a $14 million price tag on his head and an army of bounty-hunting killers on his trail.
After killing a member of the shadowy international assassin's guild, the High Table, John Wick is excommunicado, but the world's most ruthless hit men and women await his every turn.
The film was uncut in the US and UK, with MPAA R
and BBFC 15 ratings respectively.
However the film did not fare so well in Australia where it suffered category cuts for an MA15+ cinema release. The reported cuts are:
After a protracted knife fight John Wick stabs one of his opponents in the eye. But the details of this have been obscured by reframing.
Edits to a fight with the French guy in the library.
Edits to a head shot using a shotgun
towards the end.
The Australian Censorship Board reveals that the film will be uncut and R18+ for 4K Blu-ray but details for DVD and standard Blu-ray have not yet been published.
Meanwhile IMDb notes that the film has been cut in India for an adults only 'A'
rating. The cuts were:
Cuts to strong violence
to remove middle finger gestures
to remove an expletive
A Smoking kills caption was also included in scenes featuring characters
smoking.
Earlier this year we started testing a way to give people more control over the conversations they start. Today, we're expanding this test to Japan and the United States!
With
this test, we want to understand how conversations on Twitter change if the person who starts a conversation can hide replies. Based on our research and surveys we conducted, we saw a lot of positive trends during our initial test in Canada, including:
People mostly hide replies that they think are irrelevant, abusive or unintelligible. Those who used the tool thought it was a helpful way to control what they saw, similar to when keywords are muted.
We saw that people were more likely to reconsider their interactions when their tweet was hidden: 27% of people who had their tweets hidden said they would reconsider how they interact with others in the future.
People were concerned hiding someone's reply could be misunderstood and potentially lead to confusion or frustration. As a result, now if you tap to hide a Tweet, we'll check in with you to see if you want to also block that
account.
We're interested to see if these trends continue, and if new ones emerge, as we expand our test to Japan and the US. People in these markets use Twitter in many unique ways, and we're excited to see how they might use this new tool.
Midsommar is a 2019 USA horror mystery thriller by Ari Aster. Starring Florence Pugh, Jack Reynor and William Jackson Harper.
Dani and Christian are a young American
couple with a relationship on the brink of falling apart. But after a family tragedy keeps them together, a grieving Dani invites herself to join Christian and his friends on a trip to a once-in-a-lifetime midsummer festival in a remote Swedish village.
What begins as a carefree summer holiday in a land of eternal sunlight takes a sinister turn when the insular villagers invite their guests to partake in festivities that render the pastoral paradise increasingly unnerving and viscerally disturbing. From
the visionary mind of Ari Aster comes a dread-soaked cinematic fairytale where a world of darkness unfolds in broad daylight.
There was a little hype about cuts in the US for an MPAA R rating. A Director's Cut has now
been produced but it seems more about adding an extra scene favoured by Ari Aster rather than anything related to censorship.
Apple has grabbed the US rights to the Director's Cut. It will therefore be an exclusive to Apple's store and Apple TV. The
Director's Cut will be released on Blu-ray in Europe though.
Update: Apple hasn't got the worldwide exclusive rights though.
The UK Blu-ray artwork has now been released and it shows that the Director's Cut will be included in the Blu-ray release.
UK: The Theatrical Version was passed 18 uncut for strong gory images. UK: The Director's Cut details have
not yet been published by the BBFC:
2019 Entertainment in Video [Theatrical Version + Director's Cut] (RB) Blu-ray
at UK Amazon released on 28th October 2019
2019 Entertainment in
Video [Theatrical Version only] R2 DVD at UK Amazon released on 28th October 2019
The Long Version of Midsommar will also be released on Blu-ray in France. The Collector's Edition from Metropolitan will be released on 2nd December 2019 with both the Theatrical Version and the Long Version.
The BBFC decide that The Howling III: The Marsupials should be 18 rated, whilst the MPAA think that PG-13 is closer to the mark
21st September 2019
Thanks to David
Howling III: The Marsupials is a 1987 Australia comedy horror by Philippe Mora. Starring William Yang, Deby Wightman and Christopher Pate.
There are no censorship issues with this release. There is a little bit of discrepancy in age ratings though. The BBFC claim that it should be 18 rated, whilst the MPAA have decided on a PG-13 rating.
The 2019 Blu-ray release was passed 18
uncut by the BBFC for strong gory images for:
2019 Screenbound Pictures (RB) Blu-ray at UK Amazon released on 7th
October 2019
The 1987 rating from the MPAA pre-dates rating reasons, but PG-13 has the generic warning: Parents Strongly Cautioned - Some material may be inappropriate for children under 13.
And by way of a tie break, IMDb's parental guidance seems
to side with the MPAA's opinion:
Violence & Gore Edit
Mild. Lots of werewolf attacks are obscured and off camera, but some blood and bite marks are shown.
Frightening & Intense Scenes Edit
Mild. Some werewolf transformations may appear frightening to young children.
I've never heard of any
alternative versions of the film that would explain this apparent discrepancy.
Promotional Material
Barry Otto (Strictly Ballroom) is maverick Professor Harry Beckmeyer who learns to understand the torment
of a freak species when he experiments on a captured "Werewolf" in his lab. When the government wants the experiments stopped and all traces of the species wiped out, Harry (who's now fallen in love with the marsupial girl, Jerboa - Imogen
Annesley) is determined to protect her no matter what.
Hustlers is a 2019 USA crime comedy thriller by Lorene Scafaria. Starring Constance Wu, Jennifer Lopez and Julia Stiles.
Inspired by the viral New York Magazine article,
Hustlers follows a crew of savvy former strip club employees who band together to turn the tables on their Wall Street clients.
Hustlers was banned by Malaysia's film censor, the LPF, because of its excessive obscene content.
The board said naked breasts, erotic dances and scenes featuring drugs made it not suitable for public screening.
And for comparison the film was uncut and BBFC 15 rated in the UK for sexualised nudity, strong sex references, language, drug
misuse.
How cookies and tracking exploded, and why the adtech industry now wants full identity tokens. A good technical write-up of where we are at and where it all could go.
The organisation that was previously well known as the MPAA has changed its name a little. The Motion Picture Association of America has become The Motion Picture Association.
The trade group, representing Hollywood's major studios plus Netflix, then
adds a regional identity to this generic name. So in the US the organisation will be known as MPA America whilst in the Far East it will be known as MPA Asia Pacific.
The MPA name has been used outside of the US for some time, so that will be
familiar already.
The updated version of the iconic globe and reel logo that is so familiar to American moviegoers will now be used by all regional offices. Previously used versions of the logo will be phased out in the coming weeks and months.
The MPA writes:
Unifying the Motion Picture Association brand is the latest initiative under Chairman Rivkin's leadership, which has also included the addition of Netflix as a member studio earlier this year and
the elevation of Gail MacKinnon to Senior Executive Vice President of Global Policy and Government Affairs, overseeing all government affairs functions around the world. This month, the Motion Picture Association returned to its newly renovated
headquarters in Washington and will begin hosting screenings and other events this fall.
After siding with the censors against DNS over HTTPS, the UK ISP trade association chair is interviewed over future directions for government internet censorship
Culture Secretary Nicky Morgan made the keynote address to the Royal Television Society at the University of Cambridge. She took the opportunity to announce that the government is considering how to censor the internet more in line with strict TV
censorship laws.
She set the background by noting how toxic the internet has become. Politicians never seem to consider that a toxic response to politicians may be totally justified by the dreadful legislation being passed to marginalise, disempower and
impoverish British people. She noted:
And this Government is determined to see a strong and successful future for our public service broadcasters and commercial broadcasters alike.
I really
value the important contribution that they all make to our public life, at a time when our civil discourse is increasingly under strain.
Disinformation, fuelled by hermetically sealed online echo chambers, is threatening the
foundations of truth that we all rely on.
And the tenor of public conversations, especially those on social media, has become increasingly toxic and hostile.
Later she spoke of work in progress to censor
the internet along the lines of TV. She said:
The second area where we need to adapt is the support offered by the Government and regulators.
We need to make sure that regulations, some of which
were developed in the analogue age, are fit for the new ways that people create and consume content.
While I welcome the growing role of video on demand services and the investment and consumer choice they bring, it is important
that we have regulatory frameworks that reflect this new environment.
For example, whereas a programme airing on linear TV is subject to Ofcom's Broadcasting Code, and the audience protections it contains, a programme going out on
most video on demand services is not subject to the same standards.
This does not provide the clarity and consistency that consumers would expect.
So I am interested in considering how regulation should
change to reflect a changing sector.
Nearly 30,000 people have signed a petition calling on Oxford University Press to remove entries that discriminate against and patronise women
Launched this summer by Maria Beatrice Giovanardi from the feminist group, the Fawcett Society, the
petition claims that Oxford dictionaries contain words such as bitch, besom, piece, bit, mare, baggage, wench, petticoat, frail, bird, bint, biddy, filly as synonyms for woman.
Sentences chosen to show usage of the word woman include: 'Ms
September will embody the professional, intelligent yet sexy career woman' and 'I told you to be home when I get home, little woman'. Such sentences depict women as sex objects, subordinate, and/or an irritation to men, the petition says.
Signatories are calling on OUP to eliminate all phrases and definitions that discriminate against and patronise women and/or connote men's ownership of women, to enlarge the dictionary's entry for 'woman', and to include examples representative of minorities, for example, a transgender woman, a lesbian woman, etc.
In response, OUP's head of lexical content strategy Katherine Connor Martin pointed out that the content referred to in the petition is not from the scholarly Oxford English Dictionary, but from the Oxford Thesaurus of English and the Oxford
Dictionary of English, which are drawn from real-life use of language. Martin said:
If there is evidence of an offensive or derogatory word or meaning being widely used in English, it will not be excluded from the
dictionary solely on the grounds that it is offensive or derogatory.
Nonetheless, part of the descriptive process is to make a word's offensive status clear in the dictionary's treatment. For instance, the phrase the little woman
is defined as 'a condescending way of referring to one's wife', and the use of 'bit' as a synonym for woman is labelled as 'derogatory' in the thesaurus.
Update: Utterly ridiculous!
19th September 2019.
Thanks to Alan
Utterly ridiculous!
The basic purpose of a dictionary is to enable the user to find the meaning of a word.
Thus, it must contain words that could offend for one reason or another - 'fuck' in its literal
meaning or as a cussword; 'Jesus Christ!' used blasphemously; 'nigger', essential for readers of Mark Twain; 'womanish', 'girly', as sexist insults...
A foreign speaker seeing that Boris Johnson
called David Cameron a 'girly swot' needs to be able to find out what these words mean and why they are insulting.
The Victorian scholars who founded the OED beat themselves up precisely because contemporary mores wouldn't
allow them to include words like 'fuck' which they believed should have been there. Hard to believe that these idiots can be trying to turn the clock back in the 21st century.
Here's a rundown of the privacy concerns I had when setting up and using a smart TV -- and the steps I took in an attempt to reduce these privacy concerns. By James Boyle
Facebook sets out plans for a top level body to decide upon censorship policy and to arbitrate on cases brought by Facebook, and later, Facebook users
Mark Zuckerberg has previously described plans to create a high level oversight board to decide upon censorship issues with a wider consideration than just Facebook interests. He suggested that national government interests should be considered at this
top level of policy making. Zuckerberg wrote:
We are responsible for enforcing our policies every day and we make millions of content decisions every week. But ultimately I don't believe private companies like ours
should be making so many important decisions about speech on our own. That's why I've called for governments to set clearer standards around harmful content. It's also why we're now giving people a way to appeal our content decisions by establishing the
independent Oversight Board.
If someone disagrees with a decision we've made, they can appeal to us first, and soon they will be able to further appeal to this independent board. The board's decision will be binding, even if I or
anyone at Facebook disagrees with it. The board will use our values to inform its decisions and explain its reasoning openly and in a way that protects people's privacy.
The board will be an advocate for our community --
supporting people's right to free expression, and making sure we fulfill our responsibility to keep people safe. As an independent organization, we hope it gives people confidence that their views will be heard, and that Facebook doesn't have the
ultimate power over their expression. Just as our Board of Directors keeps Facebook accountable to our shareholders, we believe the Oversight Board can do the same for our community.
As well as a detailed charter, Facebook provided a
summary of the design of the board.
Along with the charter, we are providing a summary which
breaks down the elements from the draft charter, the feedback we've received, and the rationale
behind our decisions in relation to both. Many issues have spurred healthy and constructive debate. Four areas in particular were:
Governance: The majority of people we consulted supported our decision to establish an independent trust. They felt that this could help ensure the board's independence, while also providing a means to provide additional
accountability checks. The trust will provide the infrastructure to support and compensate the Board.
Membership: We are committed to selecting a diverse and qualified group of 40 board members, who will serve
three-year terms. We agreed with feedback that Facebook alone should not name the entire board. Therefore, Facebook will select a small group of initial members, who will help with the selection of additional members. Thereafter, the board itself will
take the lead in selecting all future members, as explained in this post. The trust will formally appoint members.
Precedent: Regarding the board, the charter confirms that panels will be expected, in general, to defer to past decisions. This reflects the feedback received during the public consultation period. The board can also request
that its decision be applied to other instances or reproductions of the same content on Facebook. In such cases, Facebook will do so, to the extent technically and operationally feasible.
Implementation : Facebook will
promptly implement the board's content decisions, which are binding. In addition, the board may issue policy recommendations to Facebook, as part of its overall judgment on each individual case. This is how it was envisioned that the board's decisions
will have lasting influence over Facebook's policies, procedures and practices.
Process
Both Facebook and its users will be able to refer cases to the board for review. For now, the board will begin its operations by hearing Facebook-initiated cases. The system for users to initiate
appeals to the board will be made available over the first half of 2020.
Over the next few months, we will continue testing our assumptions and ensuring the board's operational readiness. In addition, we will focus on sourcing and
selecting of board members, finalizing the bylaws that will complement the charter, and working toward having the board deliberate on its first cases early in 2020.
There was a protest outside Chester's Storyhouse on Monday as the highly anticipated Rocky Horror Show opened in the city for its first night.
The iconic rock n roll musical, which has been performed worldwide for the past 45 years and contains themes
including homosexuality and cross-dressing, is one of the most popular shows in the history of theatre.
But not everyone shares this opinion. A group of Christian protesters set up camp opposite the theatre, brandishing placards bearing Bible
quotes and shouting at audience members that they had dirty minds as they arrived. One sign read:
Be sure your sin will find you out.
Pastor Peter Simpson, minister of Penn Free Methodist Church,
was unhappy that the show focuses on themes including gender fluidity and homosexuality.
Earlier this month a pastor in Buckinghamshire handed a letter of protest into the Wycombe Swan theatre claiming the show to be 'a corruption of public
decency'.
Chinese news channel hires ex Ofcom bigwig for reputation management after a disgraceful reporting incident, only for him to resign when he sees how the channel handled the Hong Kong protests
Facebook has launched a new feature allowing Instagram users to flag posts they claim contain fake news to its fact-checking partners for vetting.
The move is part of a wider raft of measures the social media giant has taken to appease the authorities
who claim that 'fake news' is the root of all social ills.
Launched in December 2016 following the controversy surrounding the impact of Russian meddling and online fake news in the US presidential election, Facebook's partnership now involves
more than 50 independent 'fact-checkers' in over 30 countries.
The new flagging feature for Instagram users was first introduced in the US in mid-August and has now been rolled out globally.
Users can report potentially false posts by
clicking or tapping on the three dots that appear in the top right-hand corner, selecting report, it's inappropriate and then false information.
No doubt the facility will be more likely to be used to report posts that people don't like rather than for 'false
information'.
The Irish Communications Minister Richard Bruton has scrapped plans to introduce restrictions on access to porn in a new online safety bill, saying they are not a priority.
The Government said in June it would consider following a UK plan to block
pornographic material until an internet user proves they are over 18. However, the British block has run into administrative problems and been delayed until later this year.
Bruton said such a measure in Ireland is not a priority in the Online
Safety Bill, a draft of which he said would be published before the end of the year.
It's not the top priority. We want to do what we committed to do, we want to have the codes of practice, he said at the Fine Gael parliamentary party think-in. We
want to have the online commissioner - those are the priorities we are committed to.
An online safety commissioner will have the power to enforce the online safety code and may in some cases be able to force social media companies to remove or
restrict access. The commissioner will have responsibility for ensuring that large digital media companies play their part in ensuring the code is complied with. It will also be regularly reviewed and updated.
Bruton's bill will allow for a more
comprehensive complaint procedure for users and alert the commissioner to any alleged dereliction of duty. The Government has been looking at Australia's pursuit of improved internet safety.
And the Council of Europe responds to it by calling for aggressive and unfair measures that will inevitably prove divisive, unjust and alienating to everybody
The Council of Europe is an organisation which aims to uphold human rights across Europe (beyond the EU and reaching as far as Russia). The European Court of Human Rights was established under the auspices of the Council of Europe.
The Council has
recently been considering the issue of sexism being everywhere and has penned a long list of recommendations that are taken straight out of the uncompromising language of extreme feminism. The council explains in a press release:
New Council of Europe action against sexism
In March 2019, the Council of Europe Committee of Ministers adopted a new Recommendation on Preventing and Combating Sexism. Not only does this text contain the first ever
internationally agreed definition of sexism, but it also proposes a set of concrete measures to combat this wide-spread phenomenon.
Sexism is present in all areas of life. From catcalls on the street, to women being ignored during
work meetings, to boys being bombarded with aggressive role-models in video games. It is also there when comments are made about politicians on the length of their skirts rather than their latest parliamentary report. When sexist behaviour accumulates,
it can lead to an acceptance of discrimination and even violence.
Secretary General Thorbjørn Jagland said: No-one should be discriminated against because of their sex. This is a basic principle which we are still far from
respecting in practice. Through efforts to prevent and combat sexist behaviour, the Council of Europe wants to help ensure a level playing field for women and men, boys and girls.
Sexism is harmful and lies at the root of gender
inequality. It produces feelings of worthlessness, self-censorship, changes in behaviour, and a deterioration in health. Sexism affects women and girls disproportionately. Some groups of women, such as politicians, journalists, women's human rights
defenders, or young women, may be particularly vulnerable to acts of sexism. But it can also affect men and boys, when they don't conform to stereotyped gender roles. Moreover, the impact of sexism can be worse for some women and men due to ethnicity,
age, disability, social origin, religion, gender identity, sexual orientation or other factors.
To address these issues and encourage the full implementation of the Recommendation, the Council of Europe has just launched a video
and action page under the hashtag #stopsexism and the slogan See it. Name it. Stop it. The aim is to help the wider public identify acts of sexism and take a stand against them.
Children's cartoon Spongebob Squarepants has fallen afoul of Indonesia's broadcasting watchdog. Again.
As reported by Kompas, the Indonesian Broadcast Commission (KPI) sent a written warning to local channel GTV for broadcasting
scenes that allegedly portrayed violence in The Spongebob Squarepants Movie.
In the warning, KPI said that the movie, first broadcast by GTV on August 6, portrayed scenes of violence that were inappropriate for children, particularly as
it was aired during a time slot reserved for content suitable for family viewing.
In addition, [the movie was aired again] on August 22, 2019 starting at 3:06pm, which contained scenes such as throwing pie at someone's face and hitting someone
with a block of wood, KPI Deputy Chairman Mulyo Hadi Purnomo said.
At any rate, KPI ruled that GTV violated several articles in the Broadcasting Code of Conduct and Program Standards (P3-SPS), including the prohibition of content that might
encourage children to learn about inappropriate behavior. It said the TV station only received a written warning because this was the first offense of its kind.
Marighella is a 2019 Brazil action historical thriller by Wagner Moura. Starring Ana Paula Bouzas, Bella Camero and Herson Capri.
1969. Marighella had no time for fear. On the one hand,
a violent military dictatorship. On the other, an intimidated left. Alongside revolutionaries 30 years younger than him and willing to fight, the revolutionary leader opted for action. In Wagner Moura's "Marighella," Brazil's number one enemy
attempts to articulate a resistance all the while exposing the heinous crimes of torture and the infamous censorship instituted by the oppressive regime. In a radical face off, he fights for a people whose support is uncertain - all the while trying to
keep the promise of reuniting with his son - who he distanced himself from in order to protect.
The Brazilian film Marighella has had its premiere canceled.
The political film had been scheduled for its premiere on a notable
date marking both the fiftieth anniversary of Carlos Marighella's murder and the Black Awareness Day.
In a statement, producer O2 Filmes reported that the film's premiere, initially scheduled for November 20th, was canceled because the directors
were unable to comply with the procedures required by the National Cinema Agency.
Google has paid a fine for failing to block access to certain websites banned in Russia.
Roskomnadzor, the Russian government's internet and media censor, said that Google paid a fine of 700,000 rubles ($10,900) related to the company's refusal to
fully comply with rules imposed under the country's censorship regime.
Search engines are prohibited under Russian law from displaying banned websites in the results shown to users, and companies like Google are asked to adhere to a regularly
updated blacklist maintained by Roskomnadzor.
Google does not fully comply with the blacklist, however, and more than a third of the websites banned in Russia could be found using its search engine, Roskomnadzor said previously.
No doubt
Russia is now working on increased fines for future transgressions.
Russia's powerful internal security agency FSB has enlisted the help of the telecommunications, IT and media censor Roskomnadzor to ask a court to block Mailbox and Scryptmail email providers.
It seems that the services failed to register with
the authorities as required by Russian law. Both are marketed as focusing strongly on the privacy segment and offering end-to-end encryption.
News source RBK noted that the process to block the two email providers will in legal terms follow the
model applied to the Telegram messaging service -- adding, however, that imperfections in the blocking system are resulting in Telegram's continued availability in Russia.
On the other hand, some experts argued that it will be easier to block an
email service than a messenger like Telegram. In any case, Russia is preparing for a new law to come into effect on November 1 that will see the deployment of Deep Packet Inspection equipment, which should result in more efficient blocking of services.
A parliamentary committee initiated by the Australian government will investigate how porn websites can verify Australians visiting their websites are over 18, in a move based on the troubled UK age verification system.
The family and social
services minister, Anne Ruston, and the minister for communications, Paul Fletcher, referred the matter for inquiry to the House of Representatives standing committee on social policy and legal affairs.
The committee will examine how age
verification works for online gambling websites, and see if that can be applied to porn sites. According to the inquiry's terms of reference, the committee will examine whether such a system would push adults into unregulated markets, whether it would
potentially lead to privacy breaches, and whether it would impact freedom of expression.
The committee has specifically been tasked to examine the UK's version of this system, in the UK Digital Economy Act 2017.
Hopefully they will understand better than UK
lawmakers that it is of paramount importance that legislation is enacted to keep people's porn browsing information totally safe from snoopers, hackers and those that want to make money selling it.
The Music Marathon is a music programme on Gold which is broadcast on AM radio in Manchester, London, Derby and Nottingham and nationally on DAB. The licences for these services are
held by Global Radio Limited.
Ofcom received a complaint about offensive language (“yellow Chinkies”) in the music track Melting Pot, a song from 1969 by Blue Mink. No introduction to the track was broadcast, or any
other content discussing it. The track included the following lyrics:
“Take a pinch of white man, Wrap him up in black skin, Add a touch of blue blood, And a little bitty bit of Red Indian boy. Oh,
Curly Latin kinkies, Mixed with yellow Chinkies, If you lump it all together And you got a recipe for a get along scene; Oh what a beautiful dream If it could only come true, you know, you know.
What we need is
a great big melting pot, Big enough to take the world and all it’s got And keep it stirring for a hundred years or more And turn out coffee-coloured people by the score”.
We considered that references in
the lyrics (including “yellow Chinkies”, “Red Indian boy”, “curly Latin kinkies” and “coffee-coloured people”) raised potential issues under Rule 2.3 of the Code:
Rule 2.3: “In applying generally accepted standards
broadcasters must ensure that material which may cause offence is justified by the context...Such material may include, but is not limited to, offensive language…discriminatory treatment or language (for example on the grounds of…race…) Appropriate
information should also be broadcast where it would assist in avoiding or minimising offence”.
Global Radio said that it understood some of the lyrics in this song had the potential to cause offence but said that the
other lyrics and the context of the time it was written and released mitigated the potential for offence. It said that the offensive language was not intended to be used in a derogatory fashion in the song. It said that the term yellow Chinkies was not used as an insulting term directed at a person of Chinese origin. The Licensee said that it is clear from the lyrics of the song that the message of the song is racial harmony, inclusivity and equality.
The Licensee said that following the complaint notification from Ofcom, it had permanently removed the track from Gold's playlist.
Ofcom decision: Resolved
We considered that the
use of the term yellow was a derogatory reference to the skin colour of Chinese people. We therefore considered that the phrase yellow Chinkies had the potential to be highly offensive.
Ofcom's research does not provide
direct evidence for the offensiveness of the terms Red Indian boy, curly Latin kinkies and coffee-coloured people. However, Ofcom considered that Red Indian is generally understood to be a pejorative term in modern speech
and is frequently replaced with Native American. Although the terms curly Latin kinkies and coffee-coloured people are not widely understood to be racial slurs in modern society, unlike the terms Chinky and Red Indian, we considered that they had the potential to cause offence because they could also be considered derogatory references to particular ethnic groups.
In our view, the potential offence caused by these lyrics may have been heightened by the cumulative effect of the repeated use of this language during the verse and chorus.
In considering the context of the
broadcast, Ofcom took into account that Melting Pot was released in 1969 by Blue Mink, and reached number three in the UK Singles chart and number 11 in Ireland in 1970. We considered that, although this song was popular at the time, the passage of time
(nearly 50 years) may mean it is not sufficiently well-known today to mitigate the potential for offence.
Ofcom also considered Global's argument that any offence was mitigated in this case by the positive intention of the song,
which was a message of racial harmony.
We did not agree that this provided sufficient context to mitigate the potential for offence. The title Melting Pot, which may have provided an indication of the track's overall message, was
not broadcast, nor was the song introduced with any contextual information that would have highlighted its overall message to listeners. There was also no other context provided to justify the broadcast of the offensive language.
For all of the reasons above, Ofcom's Decision is that this potentially offensive material was not justified by the context.
However, we took into account the steps taken by the Licensee following notification of the complaint from Ofcom. We acknowledged that it said it had removed the track permanently from Gold's playlist.
Our
Decision, therefore, is that this case is resolved.
Content from previous decades can be broadcast under the Code. However generally accepted standards clearly change significantly over time, and audience expectations of older
content may not be sufficient to justify its broadcast. Where older material contains content, such as language, which has the potential to cause offence to today's audiences, broadcasters should consider carefully how to provide sufficient context to
comply with Rule 2.3 of the Code.
Update: Please leave it alone. I just think it's ridiculous
Sixties band Blue Mink has blasted a radio station's decision to drop their racial harmony promoting song Melting Pot from its playlist.
TV censor Ofcom made a politically correct decision to ban the song after one listener complained
about the lyrics when the song was played on Gold.
African-American lead singer Madeline Bell said:
It took years to suddenly decide in this politically correct time that we live in that it was an offensive and
racist record. We're worrying about the lyrics of a protest song about making coffee-coloured people. The song is 50 years old. Please leave it alone. I just think it's ridiculous.
Bell, who performs Blue Mink songs as part of
her solo routine, has vowed to continue performing Melting Pot.
One of the key learnings from recent events is that there is growing demand for privacy features. The Firefox Private Network is an extension which provides a secure, encrypted path to the web to protect your connection and your personal
information anywhere and everywhere you use your Firefox browser.
There are many ways that your personal information and data are exposed: online threats are everywhere, whether it's through phishing emails or data breaches. You may often find
yourself taking advantage of the free WiFi at the doctor's office, airport or a cafe. There can be dozens of people using the same network -- casually checking the web and getting social media updates. This leaves your personal information vulnerable to
those who may be lurking, waiting to take advantage of this situation to gain access to your personal info. Using the Firefox Private Network helps protect you from hackers lurking in plain sight on public connections. To learn more about Firefox Private
Network, its key features and how it works exactly, please take a look at this blog post .
As a Firefox user and account holder in the US, you can start testing the Firefox Private Network
today. A Firefox account allows you to be one of the first to test potential new products and services when we make them available in Europe, so sign up today
and stay tuned for further news and the Firefox Private Network coming to your location soon!
New age rating symbols come into effect for theatrical and VOD services on 31 October 2019.
For packaged media, introducing the new symbols requires changes to the relevant piece of secondary legislation, the Video Recordings Act (Labelling)
Regulations 2012. It is expected that the necessary changes to legislation will be made in time for the new symbols to be used on packaged media starting from 6 April 2020.
Presumably the change is basically to simplify the background so that the symbols can display better on small screens. There are also two distinct colour changes:
The video 12 rating changes from red to orange to match the orange 12A symbol for theatrical releases.
The 15 certificate changes from red to pink. This matches the Irish theatrical 15A symbol, so perhaps there is some future unification there. Many films have a joint video distribution in the UK and Ireland but for the moment the IFCO video symbols are very different.
Call to regulate video game loot boxes under gambling law and ban their sale to children among measures needed to protect players, say MPs. Lack of honesty and transparency reported among representatives of some games and social media companies in giving
evidence.
The wide-ranging report calls upon games companies to accept responsibility for addictive gaming disorders, protect their players from potential harms due to excessive play-time and spending, and along with social media
companies introduce more effective age verification tools for users.
The immersive and addictive technologies inquiry investigated how games companies operate across a range of social media platforms and other technologies,
generating vast amounts of user data and operating business models that maximise player engagement in a lucrative and growing global industry.
Sale of loot boxes to children should be banned
Government should regulate loot boxes under the Gambling Act
Games industry must face up to responsibilities to protect players from potential harms
Industry levy to support independent research on long-term effects of gaming
Serious concern at lack of effective system to keep children off age-restricted platforms and games
MPs on the Committee have previously called for a new Online Harms regulator to hold social media platforms accountable for content or activity that harms individual users. They say the new
regulator should also be empowered to gather data and take action regarding addictive games design from companies and behaviour from consumers. E-sports, competitive games played to an online audience, should adopt and enforce the same duty of care
practices enshrined in physical sports. Finally, the MPs say social media platforms must have clear procedures to take down misleading deep-fake videos, an obligation they want to be enforced by a new Online Harms regulator.
In
a first for Parliament, representatives of major games companies, including Fortnite maker Epic Games, and of social media platforms Snapchat and Instagram gave evidence on the design of their games and platforms.
DCMS Committee Chair Damian
Collins MP said:
Social media platforms and online games makers are locked in a relentless battle to capture ever more of people's attention, time and money. Their business models are built on this, but it's time for
them to be more responsible in dealing with the harms these technologies can cause for some users.
Loot boxes are particularly lucrative for games companies but come at a high cost, particularly for problem gamblers, while
exposing children to potential harm. Buying a loot box is playing a game of chance and it is high time the gambling laws caught up. We challenge the Government to explain why loot boxes should be exempt from the Gambling Act.
Gaming contributes to a global industry that generates billions in revenue. It is unacceptable that some companies with millions of users and children among them should be so ill-equipped to talk to us about the potential harm of their products.
Gaming disorder based on excessive and addictive game play has been recognised by the World Health Organisation. It's time for games companies to use the huge quantities of data they gather about their players, to do more to
proactively identify vulnerable gamers.
Both games companies and the social media platforms need to establish effective age verification tools. They currently do not exist on any of the major platforms which rely on
self-certification from children and adults.
Social media firms need to take action against known deepfake films, particularly when they have been designed to distort the appearance of people in an attempt to maliciously damage
their public reputation, as was seen with the recent film of the Speaker of the US House of Representatives, Nancy Pelosi.
Regulate 'loot boxes' under the Gambling Act:
Loot box
mechanics were found to be integral to major games companies' revenues, with further evidence that they facilitated profits from problem gamblers. The Report found that current gambling legislation, which excludes loot boxes because they do not meet the regulatory definition, fails to adequately reflect people's real-world experiences of spending in games. Loot boxes that can be bought with real-world money and do not reveal their contents in advance should be considered games of chance played for money's worth and regulated by the Gambling Act.
Evidence from gamers highlighted the loot box mechanics in Electronic Arts's FIFA series with one gamer disclosing spending of up to £1000 a year.
The Report
calls for loot boxes that contain the element of chance not to be sold to children playing games and instead be earned through in-game credits. In the absence of research on potential harms caused by exposing children to gambling, it calls for the
precautionary principle to apply. In addition, better labelling should ensure that games containing loot boxes carry parental advisories or descriptors outlining that they feature gambling content.
The Government should bring forward regulations under section 6 of the Gambling Act 2005 in the next parliamentary session to specify that loot boxes are a game of chance. If it determines not to regulate loot boxes under the Act
at this time, the Government should produce a paper clearly stating the reasons why it does not consider loot boxes paid for with real-world currency to be a game of chance played for money's worth.
UK Government should
advise PEGI to apply the existing 'gambling' content labelling, and corresponding age limits, to games containing loot boxes that can be purchased for real-world money and do not reveal their contents before purchase.
Safeguarding younger players:
With three-quarters of those aged 5 to 15 playing online games, MPs express serious concern at the lack of an effective system to keep children off age-restricted platforms
and games. Evidence received highlighted challenges with age verification and suggested that some companies are not enforcing age restrictions effectively.
Legislation may be needed to protect children from playing games that are
not appropriate for their age. The Report identifies inconsistencies in age-ratings stemming from the games industry's self-regulation around the distribution of games. For example, online games are not subject to a legally enforceable age-rating system
and voluntary ratings are used instead. Games companies should not assume that the responsibility to enforce age-ratings applies exclusively to the main delivery platforms: all companies and platforms that are making games available online should uphold
the highest standards of enforcing age-ratings.
China's internet censor has ordered online AI algorithms to promote 'mainstream values':
Systems should direct users to approved material on subjects like Xi Jinping Thought, or which showcase the country's economic and social development, Cyberspace Administration of China says
They should not recommend content that undermines
national security, or is sexually suggestive, promotes extravagant lifestyles, or hypes celebrity gossip and scandals
The Cyberspace Administration of China released its draft regulations on managing the cyberspace ecosystem on Tuesday in another sign of how the ruling Communist Party is increasingly turning to technology to cement its ideological control over
society.
The proposals will be open for public consultation for a month and are expected to go into effect later in the year.
The latest rules point to a strategy to use AI-driven algorithms to expand the reach and depth of the government's
propaganda and ideology.
The regulations state that information providers on all manner of platforms -- from news and social media sites, to gaming and e-commerce -- should strengthen the management of recommendation lists, trending topics, hot
search lists and push notifications. The regulations state:
Online information providers that use algorithms to push customised information [to users] should build recommendation systems that promote mainstream values,
and establish mechanisms for manual intervention and override.
Today, on World Suicide Prevention Day, we're sharing an update on what we've learned and some of the steps we've taken in the past year, as well as additional actions we're going to take, to keep people safe on our apps, especially those who are most
vulnerable.
Earlier this year, we began hosting regular consultations with experts from around the world to discuss some of the more difficult topics associated with suicide and self-injury. These include how we deal with suicide notes, the risks of
sad content online and newsworthy depictions of suicide. Further details of these meetings are available on Facebook's new Suicide Prevention page in our Safety Center.
As a result of these consultations, we've made several changes to improve how
we handle this content. We tightened our policy around self-harm to no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm, even when someone is seeking support or expressing themselves to aid their recovery. On
Instagram, we've also made it harder to search for this type of content and kept it from being recommended in Explore. We've also taken steps to address the complex issue of eating disorder content on our apps by tightening our policy to prohibit
additional content that may promote eating disorders. And with these stricter policies, we'll continue to send resources to people who post content promoting eating disorders or self-harm, even if we take the content down. Lastly, we chose to display a
sensitivity screen over healed self-harm cuts to help avoid unintentionally promoting self-harm.
And for the first time, we're also exploring ways to share public data from our platform on how people talk about suicide, beginning with providing
academic researchers with access to the social media monitoring tool, CrowdTangle. To date, CrowdTangle has been available primarily to help newsrooms and media publishers understand what is happening on Facebook. But we are eager to make it available to
two select researchers who focus on suicide prevention to explore how information shared on Facebook and Instagram can be used to further advancements in suicide prevention and support.
In addition to all we are doing to find more opportunities
and places to surface resources, we're continuing to build new technology to help us find and take action on potentially harmful content, including removing it or adding sensitivity screens. From April to June of 2019, we took action on more than 1.5
million pieces of suicide and self-injury content on Facebook and found more than 95% of it before it was reported by a user. During that same time period, we took action on more than 800 thousand pieces of this content on Instagram and found more than
77% of it before it was reported by a user.
To help young people safely discuss topics like suicide, we're enhancing our online resources by including Orygen's #chatsafe guidelines in Facebook's Safety Center and in resources on Instagram when
someone searches for suicide or self-injury content.
The #chatsafe guidelines were developed together with young people to provide support to those who might be responding to suicide-related content posted by others or for those who might want to
share their own feelings and experiences with suicidal thoughts, feelings or behaviors.
The New Zealand government has decided to legislate to require Internet TV services to provide age ratings using a self rating scheme overseen by the country's film censor.
Movies and shows available through internet television services such as
Netflix and Lightbox will need to display content classifications in a similar way to films and shows released to cinemas and on DVD, Internal Affairs Minister Tracey Martin has announced.
The law change, which the Government plans to introduce to
Parliament in November, would also apply to other companies that sell videos on demand, including Stuff Pix.
The tighter rules won't apply to websites designed to let people upload and share videos, so videos on YouTube's main site won't need to
display classifications, but videos that YouTube sells through its rental service will.
In a compromise, internet television and video companies will be able to self-classify their content using a rating tool being developed by the Chief Censor,
or use their own systems to do that if they first have them accredited by the Classification Office.
The Film and Literature Board of Review will be able to review classifications, as they do now for cinema movies and DVDs.
The Government
decided against requiring companies to instead submit videos to the film censor for classification, heeding a Cabinet paper warning that this would result in hold-ups.
What's the difference between a child throwing a tantrum and religious groups asking for a ban on something that hurt religious sentiments? Absolutely nothing, except maybe the child can be cajoled into understanding that they might be wrong. Try doing
that with the religious group and you'll be facing trolls, bans, and rape, death or beheading threats. Thankfully, when it comes to the recent call for banning the streaming platform Netflix, those demanding it have taken recourse to the law and filed a
police complaint.
Their concern? According to Shiv Sena committee member Ramesh Solanki, who filed the complaint, Netflix original shows are promoting anti-Hindu propaganda. The shows in question include Sacred Games 2 (which depicts a Hindu godman encouraging terrorism), Leila (which depicts a dystopian society divided on the basis of caste) and comedian Hasan Minhaj's Patriot Act (which claims that the 2019 Lok Sabha elections disenfranchised minorities).
Rambo: Last Blood is a 2019 USA action adventure by Adrian Grunberg. Starring Sylvester Stallone, Paz Vega and Yvette Monreal.
Almost four decades after they drew first blood,
Sylvester Stallone is back as one of the greatest action heroes of all time, John Rambo. Now, Rambo must confront his past and unearth his ruthless combat skills to exact revenge in a final mission. A deadly journey of vengeance, RAMBO: LAST BLOOD marks
the last chapter of the legendary series.
The cinema release has just been passed 18 uncut by the BBFC for strong bloody violence, gory images.
The film was also passed IFCO 18 uncut in Ireland for extremely graphic
gory violence.
For comparison the film was Rated R in the US for strong graphic violence, grisly images, drug use and language.
It's good to have the occasional opportunity to watch a Hollywood action film that does not have to tone the action down for the kids. Presumably it is a long way from being trimmable for a lower rating, or else we would now be discussing cuts for a 15 rating.
The collected edition of Avengers: The Children's Crusade has been banned from a Brazilian book festival for featuring a kiss between two male characters.
In an unexpected move, Rio de Janeiro mayor Marcelo Crivella has announced that the
translated edition of the Marvel comic book series Avengers: The Children's Crusade would be removed from the literary festival Riocentro Bienal do Livro so as to protect the city's children from what he described as sexual content for minors.
The
so-called sexual content in question is an on-panel kiss between two fully clothed male characters, Wiccan and Hulkling, who are in a committed relationship.
Officials at the festival initially refused to comply with the order, although the matter
was complicated by the fact that the majority of outlets didn't have the material in stock in the first place, with the one storefront that did reporting that copies had already sold out two days earlier.
DNS over HTTPS (DoH) is an encrypted internet protocol that makes it more difficult for ISPs and government censors to block users from accessing banned websites. It also makes it more difficult for state snoopers like GCHQ to keep tabs on users' internet browsing history.
Of course this protection from external interference also makes internet browsing much safer from the threat of scammers, identity thieves and malware.
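To make the mechanism concrete, here is a minimal sketch of a DoH lookup in Python. Firefox's default DoH provider at the time was Cloudflare, but the endpoint and response fields shown here are assumptions based on Cloudflare's public DNS-over-HTTPS JSON API, not anything specified in the articles quoted on this page:

    # Minimal sketch: resolving a domain name inside ordinary HTTPS traffic
    # instead of plain-text DNS on port 53, so filters watching DNS see nothing.
    import json
    import urllib.request

    def doh_lookup(domain: str, record_type: str = "A") -> list:
        url = f"https://cloudflare-dns.com/dns-query?name={domain}&type={record_type}"
        req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
        with urllib.request.urlopen(req) as resp:
            answer = json.loads(resp.read().decode())
        # Each answer record carries the resolved address in its "data" field.
        return [record["data"] for record in answer.get("Answer", [])]

    if __name__ == "__main__":
        print(doh_lookup("example.com"))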
Google was once considering introducing DoH in its Chrome browser but has recently announced that it will not allow it to be used to bypass state censors.
Mozilla, meanwhile, has been a bit more reasonable about it and allows users to opt in to using DoH. Now Mozilla is considering enabling DoH by default in the US, but with the proviso that DoH will only be used if the user is not relying on parental controls or corporate website blocking.
Mozilla explains in a blog post:
What's next in making Encrypted
DNS-over-HTTPS the Default
By Selena Deckelmann
In 2017, Mozilla began working on the DNS-over-HTTPS (DoH) protocol, and since
June 2018 we've been running experiments in Firefox to ensure the performance and user experience are great. We've also been surprised and excited by the more than 70,000 users who have already chosen on their own to explicitly enable DoH in Firefox
Release edition. We are close to releasing DoH in the USA, and we have a few updates to share.
After many experiments, we've demonstrated that we have a reliable service whose performance is good, that we can detect and mitigate
key deployment problems, and that most of our users will benefit from the greater protections of encrypted DNS traffic. We feel confident that enabling DoH by default is the right next step. When DoH is enabled, users will be notified and given the
opportunity to opt out.
Results of our Latest Experiment
Our latest DoH experiment was designed to help us determine how we could deploy DoH, honor enterprise configuration and respect user choice
about parental controls.
We had a few key learnings from the experiment.
We found that OpenDNS' parental controls and Google's safe-search feature were rarely configured by Firefox users in the USA. In total, 4.3% of users in the study used OpenDNS' parental controls or safe-search. Surprisingly, there
was little overlap between users of safe-search and OpenDNS' parental controls. As a result, we're reaching out to parental controls operators to find out more about why this might be happening.
We found 9.2% of users
triggered one of our split-horizon heuristics. The heuristics were triggered in two situations: when websites were accessed whose domains had non-public suffixes, and when domain lookups returned both public and private (RFC 1918) IP addresses. There was
also little overlap between users of our split-horizon heuristics, with only 1% of clients triggering both heuristics.
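As an aside, the two heuristics described above boil down to simple checks on the domain name and the addresses it resolves to. The sketch below is purely illustrative and is not Mozilla's code; it uses Python's standard library to mimic the idea:

    # Illustrative only: rough equivalents of the two split-horizon heuristics
    # mentioned above (non-public suffix, or lookups returning private RFC 1918 addresses).
    import ipaddress
    import socket

    PUBLIC_SUFFIXES = {"com", "org", "net", "uk", "io"}  # toy list for illustration

    def has_non_public_suffix(domain: str) -> bool:
        return domain.rstrip(".").rsplit(".", 1)[-1].lower() not in PUBLIC_SUFFIXES

    def resolves_to_private_address(domain: str) -> bool:
        try:
            infos = socket.getaddrinfo(domain, None, socket.AF_INET)
        except socket.gaierror:
            return False
        return any(ipaddress.ip_address(info[4][0]).is_private for info in infos)

    def looks_like_split_horizon(domain: str) -> bool:
        # If either heuristic fires, a browser might skip DoH for this name.
        return has_non_public_suffix(domain) or resolves_to_private_address(domain)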
Moving Forward
Now that we have these results, we want to tell you about the approach we have settled on to address managed networks and parental controls. At a high level, our plan is to:
Respect user choice for opt-in parental controls and disable DoH if we detect them;
Respect enterprise configuration and disable DoH unless explicitly enabled by enterprise configuration; and
Fall back to operating system defaults for DNS when split horizon configuration or other DNS issues cause lookup failures.
We're planning to deploy DoH in "fallback" mode; that is, if domain name lookups using DoH fail or if our heuristics are triggered, Firefox will fall back and use the default operating system DNS. This means that for the
minority of users whose DNS lookups might fail because of split horizon configuration, Firefox will attempt to find the correct address through the operating system DNS.
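In other words, the fallback mode described here is simply try-DoH-first, then hand the name to the operating system resolver. A rough sketch of that control flow, reusing the hypothetical doh_lookup helper from the earlier sketch (again, an illustration of the described behaviour, not Firefox's implementation):

    import socket

    def resolve_with_fallback(domain: str) -> list:
        # Try the encrypted DoH path first; on any failure, fall back to the
        # operating system resolver, which handles split-horizon names correctly.
        try:
            addresses = doh_lookup(domain)  # hypothetical helper from the sketch above
            if addresses:
                return addresses
        except (OSError, ValueError):
            pass  # blocked endpoint, network error, malformed response, etc.
        infos = socket.getaddrinfo(domain, None, socket.AF_INET)
        return sorted({info[4][0] for info in infos})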
In addition, Firefox already detects that parental controls
are enabled in the operating system, and if they are in effect, Firefox will disable DoH. Similarly, Firefox will detect whether enterprise policies have been set on the device and will disable DoH in those circumstances. If an enterprise policy
explicitly enables DoH, which we think would be awesome, we will also respect that. If you're a system administrator interested in how to configure enterprise policies, please find documentation here.
Options for Providers of
Parental Controls
We're also working with providers of parental controls, including ISPs, to add a canary domain to their blocklists. This helps us in situations where the parental controls operate on the network rather than
an individual computer. If Firefox determines that our canary domain is blocked, this will indicate that opt-in parental controls are in effect on the network, and Firefox will disable DoH automatically.
This canary domain is
intended for use in cases where users have opted in to parental controls. We plan to revisit the use of this heuristic over time, and we will be paying close attention to how the canary domain is adopted. If we find that it is being abused to disable DoH
in situations where users have not explicitly opted in, we will revisit our approach.
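The canary mechanism reduces to a single question: does the canary domain resolve normally on this network? A toy version of that check follows; the domain name used here is a placeholder, since the post does not name the actual canary domain:

    import socket

    CANARY_DOMAIN = "doh-canary.example"  # placeholder, not the real canary domain

    def network_signals_disable_doh() -> bool:
        # If the local resolver refuses to answer for the canary domain (for example,
        # because a network-level parental-control filter blocks it), treat that as
        # a signal to switch DoH off on this network.
        try:
            socket.getaddrinfo(CANARY_DOMAIN, None, socket.AF_INET)
            return False  # canary resolves normally, keep DoH enabled
        except socket.gaierror:
            return True   # canary blocked or unresolvable, disable DoH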
Plans for Enabling DoH Protections by Default
We plan to gradually roll out DoH in the USA starting in late
September. Our plan is to start slowly enabling DoH for a small percentage of users while monitoring for any issues before enabling for a larger audience. If this goes well, we will let you know when we're ready for 100% deployment.
An internal project to rewrite how Apple's Siri voice assistant handles sensitive topics such as feminism and the #MeToo movement advised developers to respond in one of three ways: don't engage, deflect and finally inform with neutral information from
Wikipedia.
The project saw Siri's responses explicitly rewritten to ensure that the service would say it was in favour of equality, but never say the word feminism -- even when asked direct questions about the topic.
The 2018 guidelines are
part of a large tranche of internal documents leaked to the Guardian by a former Siri grader, one of thousands of contracted workers who were employed to check the voice assistant's responses for accuracy until Apple ended the programme last month in
response to privacy concerns raised by the Guardian.
In explaining why the service should deflect questions about feminism, Apple's guidelines explain that Siri should be guarded when dealing with potentially controversial content. When questions
are directed at Siri, they can be deflected ... however, care must be taken here to be neutral.
For example, Siri's responses were put to the test on internet forums in the wake of #MeToo. Previously, when users called Siri a slut, the service responded: I'd blush if I could. Now, a much sterner reply is offered: I won't respond to that.
MPs and activists have urged the government to protect women through censorship. They write in a letter:
Women around the world are 27 times more likely to be harassed online than men. In Europe, 9 million girls have experienced
some kind of online violence by the time they are 15 years old. In the UK, 21% of women have received threats of physical or sexual violence online. The basis of this abuse is often, though not exclusively, misogyny.
Misogyny
online fuels misogyny offline. Abusive comments online can lead to violent behaviour in real life. Nearly a third of respondents to a Women's Aid survey said where threats had been made online from a partner or ex-partner, they were carried out. Along
with physical abuse, misogyny online has a psychological impact. Half of girls aged 11-21 feel less able to share their views due to fear of online abuse, according to Girlguiding UK.
The government wants to make Britain the
safest place in the world to be online, yet in the online harms white paper, abuse towards women online is categorised as harassment, with no clear consequences, whereas similar abuse on the grounds of race, religion or sexuality would trigger legal
protections.
If we are to eradicate online harms, far greater emphasis in the government's efforts should be directed to the protection and empowerment of the internet's single largest victim group: women. That is why we back the
campaign group Empower's calls for the forthcoming codes of practice to include and address the issue of misogyny by name, in the same way as they would address the issue of racism by name. Violence against women and girls online is not harassment.
Violence against women and girls online is violence.
Signatories: Ali Harris, Chief executive, Equally Ours; Angela Smith MP, Independent; Anne Novis, Activist; Lorely Burt, Liberal Democrat, House of Lords; Ruth Lister, Labour, House of Lords; Barry Sheerman MP, Labour; Caroline Lucas MP, Green; Daniel Zeichner MP, Labour; Darren Jones MP, Labour; Diana Johnson MP, Labour; Flo Clucas, Chair, Liberal Democrat Women; Gay Collins, Ambassador, 30% Club; Hannah Swirsky, Campaigns officer, René Cassin; Joan Ryan MP, Independent Group for Change; Joe Levenson, Director of communications and campaigns, Young Women's Trust; Jonathan Harris, House of Lords, Labour; Luciana Berger MP, Liberal Democrats; Mandu Reid, Leader, Women's Equality Party; Maya Fryer, WebRoots Democracy; Preet Gill MP, Labour; Sarah Mann, Director, Friends, Families and Travellers; Siobhan Freegard, Founder, Channel Mum; Jacqui Smith, Empower
One of the Pentagon's most secretive agencies, the Defense Advanced Research Projects Agency (DARPA), is developing custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips.
DARPA is now developing a semantic analysis program called SemaFor and an image analysis program called MediFor, ostensibly designed to prevent the use of fake images or text. The idea would be to develop these technologies to help private Internet providers sift through content.
Brave has presented new technical evidence about personalised advertising, and has uncovered a mechanism by which Google appears to be circumventing its purported GDPR privacy protections.
Google have announced potentially far reaching new policies about kids' videos on YouTube. A Google blog post explains:
An update on kids and data protection on YouTube
From its earliest days, YouTube
has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We've been taking a hard look at areas where we can do more to address this,
informed by feedback from parents, experts, and regulators, including COPPA concerns raised by the U.S. Federal Trade Commission and the New York Attorney General that we are addressing with a settlement announced today.
New
data practices for children's content on YouTube
We are changing how we treat data for children's content on YouTube. Starting in about four months, we will treat data from anyone watching children's content on YouTube as
coming from a child, regardless of the age of the user. This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this
content entirely, and some features will no longer be available on this type of content, like comments and notifications. In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and
we'll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.
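As a purely illustrative aside (this is not Google's classifier, which relies on machine learning at scale), the combination of creator self-declaration plus an automated check on video metadata could be sketched along these lines:

    # Toy illustration: flag a video as "made for kids" if the creator declares it,
    # or if its metadata leans heavily on kid-oriented characters, themes, toys or games.
    KID_SIGNALS = {"kids", "children", "toys", "cartoon", "nursery", "playtime"}

    def made_for_kids(creator_declared: bool, title: str, tags: list) -> bool:
        if creator_declared:
            return True
        words = set(title.lower().split()) | {tag.lower() for tag in tags}
        # Crude stand-in for a trained model: count overlapping kid-oriented signals.
        return len(words & KID_SIGNALS) >= 2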
Improvements to YouTube Kids
We
continue to recommend parents use YouTube Kids if they plan to allow kids under 13 to watch independently. Tens of millions of people use YouTube Kids every week but we want even more parents to be aware of the app and its benefits. We're increasing our
investments in promoting YouTube Kids to parents with a campaign that will run across YouTube. We're also continuing to improve the product. For example, we recently raised the bar for which channels can be a part of YouTube Kids, drastically reducing
the number of channels on the app. And we're bringing the YouTube Kids experience to the desktop.
Investing in family creators
We know these changes will have a significant business impact on family
and kids creators who have been building both wonderful content and thriving businesses, so we've worked to give impacted creators four months to adjust before changes take effect on YouTube. We recognize this won't be easy for some creators and are
committed to working with them through this transition and providing resources to help them better understand these changes.
We are also going to continue investing in the future of quality kids, family and educational content. We
are establishing a $100 million fund, disbursed over three years, dedicated to the creation of thoughtful, original children's content on YouTube and YouTube Kids globally.
Today's changes will allow us to better protect kids and
families on YouTube, and this is just the beginning. We'll continue working with lawmakers around the world in this area, including as the FTC seeks comments on COPPA. And in the coming months, we'll share details on how we're rethinking our overall
approach to kids and families, including a dedicated kids experience on YouTube.
No Safe Spaces is a 2019 USA documentary by Justin Folk. Starring Adam Carolla, Dennis Prager and Jordan Peterson.
A documentary that showcases colleges shutting down
freedom of speech.
No Safe Spaces, a documentary by Adam Carolla and Dennis Prager, looks at the erosion of First Amendment rights in America.
The movie stars comedian-podcaster Adam Carolla and radio talk-show host Dennis
Prager, the latter of whom sent a letter to the MPAA to protest the film's PG-13 rating, which is largely based on a 30-second animated clip of Firsty, a walking, talking embodiment of free speech who gets shot up with bullet holes.
Any kid who
sees it will probably laugh, Prager wrote in a letter to the MPAA. He also noted that Firsty isn't killed, and he says that he seeks with all of my work to make content that is suitable for all ages.
Much of the movie takes place at colleges where
protesters railing against invited conservative speakers like Ben Shapiro and Ann Coulter use profanity in their language and on their homemade protest signs, though the cursing has been blurred and bleeped in an effort to obtain a PG rating, says
Prager.
But when it comes to Firsty, we would ask that you reconsider and allow the scene to remain and still achieve a PG rating so that we can reach the widest possible audience.
An ad for PETA displayed on the side of buses, seen in February 2019, included the text Don't let them pull the wool over your eyes. Wool is just as cruel as fur. GO WOOL-FREE THIS WINTER PeTA. Beside the text was an image of a woman with the neck of her
jumper pulled over her face.
Ten complainants, who believed that sheep needed to be shorn for health reasons and therefore wool should not be compared to fur, challenged whether the claim wool is just as cruel as fur was
misleading and could be substantiated.
ASA Assessment: Complaints upheld
The ASA considered that the general public were aware that in the fur industry animals were often kept in poor conditions and were
killed for their fur, and that they would interpret the ad's reference to cruelty in that context. We considered that people who saw the ad would therefore understand the claim wool is just as cruel as fur to refer generally to the conditions in which
sheep were kept and the effects on sheep of the methods used to obtain their wool. We considered that although the public would recognise the ad was from an animal rights organisation and as such that the claim would represent its views, it was presented
as a factual claim and a direct comparison between the two industries.
In terms of wool production in the UK, the Department for Environment, Food and Rural Affairs Code of Recommendations for the welfare of livestock had specific
guidelines on the shearing process to ensure they were adhering to the standards of animal welfare which was required by law. Those guidelines stated that every mature sheep should have its fleece removed at least once a year by experienced and competent
trained shearers who should take care in ensuring that the sheep's skin was not cut. We considered that demonstrated that the main method of obtaining wool from sheep by shearing would not be regarded by consumers as being cruel.
The Code of Recommendations and additional guidance also included specific provisions for the health, treatment, transportation and living conditions that sheep should be kept in for the overall benefit of their welfare. We considered this demonstrated that in the UK, there were standards to prevent cruelty to sheep.
We considered people who saw the ad would interpret the claim wool is just as cruel as fur as equating the conditions in which sheep were kept and the methods by which wool was obtained with the conditions and methods used in the
fur industry. However, sheep were not killed for their wool as animals were in the fur industry and there were standards in place relating to their general welfare including relating to the shearing process. We therefore concluded on that basis that the
claim was misleading and in breach of the Code.
The ad must not appear in its current form. We told PETA not to use the claim wool is just as cruel as fur in future.
The Swiss Lottery and Betting Board has published its first censorship list of foreign gambling websites to be blocked by the country's ISPs.
The censorship follows a change to the law on online gambling intended to preserve a monopoly for Swiss
gambling providers.
Over 60 foreign websites have been blocked for Swiss gamblers. Last June, 73% of voters approved the censorship law. The law came into effect in January but blocking of foreign gambling websites only started in
August.
Swiss gamblers can bet online only with Swiss casinos and lotteries that pay tax in the country.
Foreign service providers that voluntarily withdraw from the Swiss market with appropriate measures will not be blocked.
35 people in New Zealand have been charged by police for sharing and possession of Brenton Tarrant's Christchurch terrorist attack video.
As of August 21st, 35 people have been charged in relation to the video, according to information released under
the Official Information Act. At least 10 of the charges are against minors, which have now been referred to the Youth Court.
Under New Zealand law, knowingly possessing or distributing objectionable material is a serious offence with a maximum
jail term of 14 years.
So far, nine people have been issued warnings, while 14 have been prosecuted for their involvement.
A Catholic school in Nashville, Tennessee has banned the Harry Potter series because a reverend at the school claims the books include both good and evil magic, as well as spells, which, if read by a human can conjure evil spirits, according to the
Tennessean.
The publication obtained an email from Rev. Dan Reehil, a pastor at Saint Edwards Catholic School parish, which was sent to parents. In the email, Reehil explains that he has consulted several exorcists in the U.S. and Rome,
and it was recommended that the school remove the books, the Tennessean reports.
After a long introduction about how open and diverse YouTube is, CEO Susan Wojcicki gets down to the nitty gritty of how YouTube censorship works. She writes in a blog:
Problematic content represents a fraction of one percent of the
content on YouTube and we're constantly working to reduce this even further. This very small amount has a hugely outsized impact, both in the potential harm for our users, as well as the loss of faith in the open model that has enabled the rise of your
creative community. One assumption we've heard is that we hesitate to take action on problematic content because it benefits our business. This is simply not true -- in fact, the cost of not taking sufficient action over the long term results in lack of
trust from our users, advertisers, and you, our creators. We want to earn that trust. This is why we've been investing significantly over the past few years in the teams and systems that protect YouTube. Our approach towards responsibility involves four
"Rs":
We REMOVE content that violates our policy as quickly as possible. And we're always looking to make our policies clearer and more effective, as we've done with pranks and challenges, child safety, and hate speech just this year.
We aim to be thoughtful when we make these updates and consult a wide variety of experts to inform our thinking, for example we talked to dozens of experts as we developed our updated hate speech policy. We also report on the removals we make in our
quarterly Community Guidelines enforcement report. I also appreciate that when policies aren't working for the creator community, you let us know. One area we've heard loud and clear needs an update is creator-on-creator harassment. I said in my last
letter that we'd be looking at this and we will have more to share in the coming months.
We RAISE UP authoritative voices when people are looking for breaking news and information, especially during breaking news moments. Our breaking and top news shelves are available in 40 countries and we're continuing to expand
that number.
We REDUCE the spread of content that brushes right up against our policy line. Already, in the U.S. where we made changes to recommendations earlier this year, we've seen a 50% drop of views from recommendations to this type of
content, meaning quality content has more of a chance to shine. And we've begun experimenting with this change in the UK, Ireland, South Africa and other English-language markets.
And we set a higher bar for what channels can make money on our site, REWARDING trusted, eligible creators. Not all content allowed on YouTube is going to match what advertisers feel is suitable for their brand, we have to be sure
they are comfortable with where their ads appear. This is also why we're enabling new revenue streams for creators like Super Chat and Memberships. Thousands of channels have more than doubled their total YouTube revenue by using these new tools in
addition to advertising.
High Society: Cannabis Cafe is a Channel 4 TV documentary .
Two-part documentary about Brits trying cannabis in a Dutch cafe, from curious rookies to a pair of ex-drug squad colleagues with opposing views on
cannabis legalisation
It's good to hear that the programme has inspired the TV whinger Ann Widdecombe to return to moralising about TV. The Brexit Party MEP hit out at the show, saying:
For one
of our channels to be filming it and showing it on our television amounts to showing an unlawful act.
The argument against legalising cannabis is not being heard enough but it's very straightforward. If you legalise cannabis, it
is a gate-way drug. A study from the University of Amsterdam when I was shadow home secretary showed that as soft drug use increases, so does hard drug use. About 10 per cent of users go through the gateway.
Thailand's Ministry of Digital Economy and Society plans to open a 'Fake News' Center by November 1st at the latest. The minister has said that the centre will focus on four categories of internet censorship.
Digital Minister Puttipong Punnakanta,
said that the coordinating committee of the Fake News Center has set up four subcommittees to screen the various categories of news which might 'disrupt public peace and national security':
natural disasters such as flooding, earthquakes, dam breaks and tsunamis;
economics, the financial and banking sector;
health products, hazardous items and illegal goods,
and of course, government policies.
The Fake News Center will analyse, verify and clarify news items and distribute its findings via its own website, Facebook and Line (a WhatsApp-like messaging service that is dominant in much of Asia).
The committee meeting considered
protocols to be used and plans to consult with representatives of major social media platforms and all cellphone service providers. It will encourage them to take part in the delivery of countermeasures to expose fake news.
Loot boxes in video games have come under fire as a method of monetising games. Complainers have attacked them as if they were casino gambling, surely an unjust accusation, but nevertheless loot boxes can be a rather ruthless way to extract money.
Now
the film censors of New Zealand's OFLC are reporting on an evolution towards fairer monetisation methods. The OFLC discusses the developments in a blog post:
You don't know what you are paying for, and if you don't get the item you want then you can end up buying a bunch of them.
People have been getting pretty annoyed about this for a while and pressure built up. In early August, a group of companies that make game consoles announced a policy where they will only allow games that show players their chances of getting items from loot boxes. This chance is commonly called a drop rate by those who talk about video games, as it is the rate at which items will drop. The announcement means that,
all things going to plan, games that are published on the PlayStation, Xbox, and Switch will show drop rates from 2020 onwards.
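For a sense of why published drop rates matter: with a drop rate p, the chance of still not having the item after n boxes is (1-p)^n, and a player needs 1/p boxes on average. A quick sketch of that arithmetic, where the 2% drop rate and box price are invented purely for illustration:

    # Back-of-the-envelope loot box arithmetic with made-up numbers.
    DROP_RATE = 0.02   # assume a published 2% chance per box for the wanted item
    BOX_PRICE = 2.50   # assume a price of 2.50 per box

    expected_boxes = 1 / DROP_RATE            # 50 boxes needed on average
    miss_after_50 = (1 - DROP_RATE) ** 50     # roughly a 36% chance of still having nothing
    print(f"Expected spend: {expected_boxes * BOX_PRICE:.2f}")
    print(f"Chance of no drop after 50 boxes: {miss_after_50:.0%}")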
Since the announcement last week, a few game developers have begun removing loot boxes from their
games entirely. Their solution is to replace loot boxes with boxes where players can see what is in them. Last week, popular game Apex Legends removed loot boxes less than a week after adding them in.
Players generally view the
policy announcement as a positive step forward, although some commentators have pointed out that showing the drop rate doesn't change the dodgy nature of loot boxes, as they are still based entirely on random chance.
The policy
appears to be based off regulations that were in place in China until recently, which also required games to show drop rates. Since then, Chinese regulations have intensified, placing limits on how many loot boxes players can open in a day and making
games increase the drop rate with each box opened. These regulations have proven effective in giving developers pause. Insiders now recommend moving away from loot box mechanics altogether in the Chinese market.
The fact that China felt the need to strengthen its regulations suggests that simply showing players drop rates may not fully address concerns around loot boxes.
More troubling is the revelation that game publishers previously
offered to increase drop rates for people whom they paid to open loot boxes on video. By changing the drop rates, viewers are given an inflated idea of what they are likely to get from loot boxes. This suggestion of false advertising taps into why a lot
of players dislike loot boxes and think that they are exploitative and anti-consumer.
These changes show that the industry is starting to solidify a focused strategy in order to deal with the potential harms from loot boxes. The
space remains fast-moving. I will do my best to keep on top of it and let you know about more developments as they arise.
Perennial whinger Rajan Zed has taken aim at a restaurant chain in Switzerland selling beef burgers and naming itself Holy Cow.
Zed said in a statement that cow, the seat of many deities, was sacred and had long been venerated in Hinduism.
It appeared to be a clear trivialization and ridiculing of a deeply held article of faith by Hindus world over. Hinduism should not be taken frivolously. Symbols of any faith, larger or smaller, should not be
mishandled.
Zed urged Holy Cow! Gourmet Burger Company (HCGBC) to rethink its name so that it was not unsettling to the Hindu community.
India's Central Board Of Film Certification (CBFC) now has a new logo.
Chairperson Prasoon Joshi unveiled the new logo at an event on Saturday. Along with the logo, Joshi also revealed the board's new certificate design.
The
event, hosted in Mumbai, was attended by Minister of Information and Broadcasting Prakash Javadekar.
The new film certificates will also follow a redesigned template.
The basketball game NBA 2K20 has made the news as the European games rating group PEGI and the US equivalent, ESRB, have been considering how to rate content depicting gambling.
Neither of the two rating organisations flagged NBA 2K20 for
gambling, simulated or otherwise. PEGI explained its reasoning saying that the gambling content descriptor doesn't apply because the mini-games involved in NBA 2K's MyTeam mode don't actually encourage and/or teach the use of games of chance that are
played/carried out as a traditional means of gambling.
The reply from PEGI acknowledges that the agency had seen the announcement trailer of NBA 2K20 and noticed the controversy it has caused. However, the board's representative noted that the
controversial imagery played a central role in the trailer, but it may not necessarily do so in the game, which has not yet been released.
PEGI notes that this isn't gambling, per se, in that nothing is really wagered in the slot machine, wheel of
fortune and pachinko mini-games, and whatever is won has value only as game content. Wheel/slot spins and ball drops are earned through gameplay and can't be bought, so nothing is really wagered.
For the ESRB, these mini-games aren't even
simulated gambling. In its rating summary for NBA 2K20, the game's only content descriptor is mild language, as apparently the words hell and damn are in some dialogue.
PEGI says that the controversy over the game's trailer is part of an internal
discussion that the organisation is currently having:
The games industry is evolving constantly (and rapidly in recent years). As a rating organization, we need to ensure that these developments are reflected in our classification criteria.
We do not base our decisions on the content of a single trailer, but we will properly assess how the rating system (and the video games industry in general) should address these concerns.
Interestingly enough, the trailer posted by 2K Games' United Kingdom YouTube account has since been taken down. It's still live on the main NBA 2K YouTube channel.
NBA 2K20 launches Friday, Sept. 6 on PlayStation 4, Windows PC, Xbox One and Nintendo Switch.
On August 1 2019, a South Korean exhibition of drawings and art films was cancelled at the Huam-Garok gallery due to supposed indecency.
Rebecca Goyette's Forever Animal solo exhibition, described by the artist as being about sexual
sovereignty, pleasure and healing through connection, includes feminist depictions of women, nudity and sexuality.
According to Goyette, she had collaborated with Seoul-based curator Yeu Ryang Choi of Yeu & Me since 2017 and together agreed to
show her works to a public South Korean audience at Huam-Garok. Whilst the gallery managers had agreed to show the works, the gallery owner cancelled the show on alleged indecency grounds.
Goyette explains that, on viewing the works, the owner reacted very negatively and censored my show, stating it was bad for kids.
Goyette states that Yeu Ryang Choi has proceeded with a lawsuit against the gallery on the grounds of contract breach.