Melon Farmers Original Version

Censor Watch


August


 

Making the UK internet the most censorial and red tape infested outside of China...

The Government salivates over suffocating proposals for censoring internet TV, now that it can go even further than the red tape Dystopia called the EU


Link Here 30th August 2021
Full story: UK Internet TV censorship ...UK catch-up and US internet streaming
The UK Government has just opened a public consultation on proposals to significantly extend censorship laws for internet TV to match the nannying, burdensome control freakery that currently applies to broadcast TV in the UK. The tone of the press release highlights the obvious glee that the Government holds for more censorship:

Government to consult on better protections for UK audiences on video-on-demand services

Audiences could be better protected from harmful material like misinformation and pseudoscience while watching programmes on video-on-demand services (VoD), Culture Secretary Oliver Dowden has announced.

  • Netflix, Amazon Prime Video and Apple TV+ could be subject to stricter rules protecting UK audiences from harmful material
  • It would mean audiences - particularly children - receive a consistent level of protection on video-on-demand services as they do on traditional broadcasters
  • Ministers seek views to level the regulatory playing field in consultation launched today

The government is considering how to better level the regulatory playing field between mainstream VoD services and traditional broadcasters and is seeking views on the matter in a consultation launched today. This could mean aligning the content standards rules for on-demand TV services with those for traditional linear TV like BBC 1 and Sky.

Now that the UK has left the EU there is an opportunity to create regulation suited to UK viewers that goes beyond the minimum standards as set out in EU regulation under the revised Audiovisual Media Services Directive.

Culture Secretary Oliver Dowden said:

We want to give UK audiences peace of mind that however they watch TV in the digital age, the shows they enjoy are held to the same high standards that British broadcasting is world-renowned for.

It is right that now we have left the EU, we look at introducing proportionate new rules so that UK audiences are protected from harm.

Ofcom data shows a huge growth in popularity and use of on-demand services in the UK. The number of households that subscribe to one rose by almost 350% between 2014 and 2020. In 2021, 75% of UK households say that they have used a subscription VoD service.

Viewers have access to thousands of hours of VoD shows and content at the touch of a button. However, services like Netflix, Amazon Prime Video and Disney+ are not regulated in the UK to the same extent as UK linear TV channels.

For example, except for BBC iPlayer, they are not subject to Ofcom's Broadcasting Code which sets out appropriate standards for content including harmful or offensive material, accuracy, fairness and privacy.

This means there is a gap between existing protections for audiences watching traditional TV and those watching newer VoD services. There are some protections for under-18s but minimal rules exist to regulate content. There are very few rules to protect audiences, for example, from misleading health advice or pseudoscience documentaries.

Some service providers have taken welcome steps to introduce their own standards and procedures for audience protection - such as pin-codes and content warnings - but the extent of these measures varies across services. Age ratings are also inconsistent and sometimes non-existent.

The consultation asks for views on whether UK audiences viewing TV-like VoD programmes should receive the same or similar level of protections as when they are watching traditional television. It asks which measures can and should be made consistent across VoD services.

It will also consider whether mainstream VoD services not currently regulated in the UK by Ofcom - like Netflix and Apple TV+ - should be brought within UK jurisdiction to provide accountability to UK audiences who use them.

Not all VoD providers deliver a TV-like experience, so any regulatory change will need to be proportionate, particularly for smaller or niche services, to ensure essential protections like freedom of speech are not affected.

Notes to Editors

  • The consultation is open for 8 weeks and closes on 26 October at 23:45 BST.
  • This review into VoD regulation will form one of a number of measures in a wide-ranging broadcasting White Paper into the future of broadcasting, which will be published this autumn.
  • The consultation examines the current level of audience protection from harmful content provided through regulation and voluntarily by individual VoD services, and what steps are required to ensure appropriate protection levels for UK audiences going forward.
  • Now the UK has left the European Union, this is an opportunity to improve upon EU aligned provisions under the Audiovisual Media Services Directive with regulations that are designed in the best interests of UK audiences.
  • This consultation does not seek responses on wider broadcasting regulation, nor changes to how television or public service broadcasters such as the BBC or Channel 4 are funded or regulated. This consultation will also not cover changes to advertising rules/restrictions and does not cover topics such as introducing levies/quotas on VoD services. Responses on these issues will not be considered as part of this consultation.

 

 

Interspecies Reviewers...

Australian film censors ban an anime TV series


Link Here 28th August 2021
Full story: Banned Films in Australia...I Want Your Love
Interspecies Reviewers is a 2020 Japan anime comedy romance TV series
Starring Junji Majima, Yûsuke Kobayashi and Miyu Tomita IMDb

From elves to succubi to fairies and more, our heroes, Stunk (a human), Zel (an elf) and Crimvael (a hermaphrodite angel), are here to rate the red-light delights of all manner of monster girls.

The video was banned by the Australian Censorship Board on 17th August 2021. The censors explained:
The film is classified RC in accordance with the National Classification Code, Films Table, 1. (b) as films that depict in a way that is likely to cause offence to a reasonable adult, a person who is, or appears to be, a child under 18 (whether the person is engaged in sexual activity or not).

 

 

Offsite Article: Zero Tolerance of majority opinion...


Link Here 28th August 2021
Why are feminists blaming The Tiger Who Came To Tea for violence against women? By Ella Whelan

See article from spiked-online.com

 

 

Offsite Article: NHS data grab on hold as millions opt out...


Link Here 28th August 2021
A plan to share GP data was set to launch in September, but an online summer campaign has prompted widespread dissent

See article from theguardian.com

 

 

Maybe more about data monetisation than data protection...

The government nominates the new Information Commissioner


Link Here 27th August 2021
Culture Secretary Oliver Dowden has announced that John Edwards is the Government's preferred candidate for Information Commissioner.

John Edwards is currently New Zealand's Privacy Commissioner. He will now appear before MPs on the Digital, Culture, Media and Sport Select Committee for pre-appointment scrutiny on 9th September 2021.

It seems that the Government has its eyes on market opportunities related to selling data rather than data protection. Dowden commented:

Data underpins innovation and the global digital economy, everyday apps and cloud computing systems. It allows businesses to trade, drives international investment, supports law enforcement agencies tackling crime, the delivery of critical public services and health and scientific research.

The government is outlining the first territories with which it will prioritise striking data adequacy partnerships now it has left the EU as the United States, Australia, the Republic of Korea, Singapore, the Dubai International Finance Centre and Colombia. It is also confirming that future partnerships with India, Brazil, Kenya and Indonesia are being prioritised.

Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers.

The aim is to move quickly and creatively to develop global partnerships which will make it easier for UK organisations to exchange data with important markets and fast-growing economies.

The government also today names New Zealand Privacy Commissioner John Edwards as its preferred candidate to be the UK's next Information Commissioner, following a global search.

As Information Commissioner and head of the UK regulator responsible for enforcing data protection law, he will be empowered to go beyond the regulator's traditional role of focusing only on protecting data rights, with a clear mandate to take a balanced approach that promotes further innovation and economic growth.

...

It means reforming our own data laws so that they're based on common sense, not box-ticking. And it means having the leadership in place at the Information Commissioner's Office to pursue a new era of data-driven growth and innovation. John Edwards's vast experience makes him the ideal candidate to ensure data is used responsibly to achieve those goals.

 

 

Early Spring and Funeral Parade of Roses...

More bouncy ratings from the BBFC for a 1956 Japanese drama and a worthy trans drama


Link Here 27th August 2021
Early Spring is a 1956 Japan drama by Yasujirô Ozu
Starring Chikage Awashima, Ryô Ikebe and Teiji Takahashi BBFC link 2020 IMDb

A young man and his wife struggle within the confines of their passionless relationship while he has an extramarital romance.

There are no censorship issues with this release beyond noting that the BBFC rating has just been raised from U to PG. What were 'very mild' sex references in 2012 are now 'mild' sex references in 2021.

Versions

BBFC uncut
uncut
run: 145m
pal: 139m
PGUK: Passed PG uncut for mild bad language, sex references:
  • 2021 cinema release

BBFC uncut
uncut
run: 144:39s
pal: 138:52s
U 1980UK: Passed U uncut for very mild references to infidelity:
  • 2012 BFI video

 

Funeral Parade Of Roses is a 1969 Japan trans drama by Toshio Matsumoto.
Starring Pîtâ and Osamu Ogasawara and Yoshimi Jô. BBFC link 2020 IMDb

While dealing drugs on the side, Gonda operates the Genet, a gay bar in Tokyo where he has hired a stable of transvestites to service the customers. The madame or lead "girl" of the bar is Leda, an older, old-fashioned geisha-styled transvestite with whom Gonda lives and is in a relationship. Arguably the most popular of the girls now working at the bar is Eddie, a younger, modern transvestite. Like Leda, Eddie lives openly as a woman. Eddie's troubled life includes her father having deserted the family when she was a child, and a difficult relationship afterwards with her mother, who mocked Eddie's ability to be the man of the family. Gonda enters into a sexual relationship with Eddie, whom he promises to make madame of the bar, replacing Leda in both facets of his life, after Eddie threatened to quit otherwise.

There are no censorship issues with this release beyond noting that a 2006 18 rating has just been downrated to 15.
 

Versions

BBFC uncut
uncut
run: 105m
pal: 101m
15UK: Passed 15 uncut with a BBFC trigger warning for strong injury detail, suicide, violence, sex references, drug misuse:
  • 2021 cinema release

BBFC uncut
uncut
run: 104:13s
pal: 100:03s
18 1980UK: Passed 18 uncut for strong violence and sex and drug references:
  • 2006 Eureka Entertainment Ltd video

 

Exploits At West Poley is a 1985 UK TV drama by Diarmuid Lawrence
Starring Brenda Fricker, Charlie Condou and Jonathan Adams BBFC link 2020 IMDb

In the late 1800s, two boys decide to divert an underground river and this causes problems for everyone in two villages, who demand the water for themselves.

There are no censorship issues with this release beyond noting that the BBFC rating has just been raised from a 1986 U to a 2021 PG.

Versions

BBFC uncut
uncut
run: 51m
pal: 49m
PGUK: Passed PG uncut for mild violence, threat, injury detail:
  • 2021 BFI video

BBFC uncut
uncut
run: 63:15s
pal: 60:43s
U 1980UK: Passed U uncut:
  • 1986 cinema release

 

 

Offsite Article: Verified as the age of self interest...


Link Here 27th August 2021
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
The trade group for age verification companies is clearly campaigning for its own commercial interests, but it does lay out the practical vagaries of the ICO's Age Appropriate Design code.

See article from techmonitor.ai

 

 

Offsite Article: Rank censorship...


Link Here 27th August 2021
According to regulations published in state media, all online ranking lists of Chinese celebrities must be removed from the internet.

See article from theguardian.com

 

 

Withdrawing consent...

UK government proposes to drop some of the ludicrous GDPR/Cookie laws introduced by the EU


Link Here 24th August 2021
The British Government plans to sweep away inane parts of the EU's data laws in a move that would put an end to pointless web cookie consent banners and red tape.

In the first major regulatory reforms since Brexit, Culture Secretary Oliver Dowden has set out proposals which he says will turbocharge the UK's digital economy and allow data to be used more flexibly.

In particular Dowden signalled the reforms would cut down on cookie banners, used by websites to secure consent for storing personal data, arguing that many of them were pointless. The cookie banners achieve little beyond dangerously training internet users to mindlessly tick boxes when asked, just to make the damn things go away.

Ministers also intend to shake up the Information Commissioner's Office and have poached John Edwards, New Zealand's current privacy commissioner, to head up the data censor and oversee the new-look regime.

Describing the reforms as the data dividend of Brexit, Dowden said a new British framework would be more proportionate, help cut costs for businesses, and enable greater innovation, which will drive growth and opportunities and jobs.

However, the move risks opening up a fresh schism with the European Commission, which believes GDPR has been highly influential in driving up data privacy standards across the world.

 

 

John Cleese fights back in the Culture War...

'Well the woke started it, they invaded free speech!'


Link Here 24th August 2021
John Cleese will take on the topic of cancel culture in a forthcoming television series for the UK's Channel 4.

The new documentary will reportedly explore why a new 'woke' generation is trying to rewrite the rules on what can and [mostly] can't be said.

John Cleese: Cancel Me will see Cleese meet various subjects who claim to have been cancelled for their actions or statements, and activists who have led opposition to various public figures. In a statement, Cleese said:

I'm delighted to have a chance to find out, on camera, about all the aspects of so-called political correctness. There's so much I really don't understand, like: how the impeccable idea of 'Let's all be kind to people' has been developed in some cases ad absurdum.

 

 

Lords of Chaos...

Cut for broadcast by Film4


Link Here 24th August 2021
Lords of Chaos is a UK / Sweden thriller by Jonas Åkerlund.
Starring Rory Culkin, Emory Cohen and Sky Ferreira. Melon Farmers link  YouTube icon   BBFC link 2020   IMDb

Oslo, 1987. 17-year-old Euronymous is determined to escape his traditional upbringing and becomes fixated on creating 'true Norwegian black metal' with his band Mayhem. He mounts shocking publicity stunts to put the band's name on the map, but the lines between show and reality start to blur. Arson, violence and a vicious murder shock the nation that is under siege by these Lords of Chaos.

The film was uncut and 18 rated for UK cinema release and home video.

Film4 had the TV premiere of Lords of Chaos on Saturday night. Despite being shown after 11pm they made cuts to the suicide scene. Predictably they cut the method of slitting up the arms instead of across the wrists.

 

 

A date with ID verification...

Dating app Tinder confirms plans to introduce voluntary ID verification


Link Here 18th August 2021
Dating app Tinder has confirmed plans to introduce ID verification for its users around the world.

The firm said it would begin by making the process voluntary, except where mandated by law, and would take into account expert recommendations and what documents are most appropriate in each country.

It could come into force by the end of the year.

Critics of the idea have argued it could leave whistleblowers exposed, particularly in authoritarian states, and restrict access to online services in countries where ID documents are not commonplace among the entire population.

Tracey Breeden, vice president of safety and social advocacy at Tinder's parent firm, Match Group, said feedback from experts and users would be a vital part of its approach in helping ease such fears:

We know that in many parts of the world and within traditionally marginalised communities, people might have compelling reasons that they can't or don't want to share their real-world identity with an online platform.

Creating a truly equitable solution for ID verification is a challenging but critical safety project and we are looking to our communities as well as experts to help inform our approach.

 

 

Offsite Article: BBFC podcast on the new movie, Censor...


Link Here 18th August 2021
The BBFC's latest podcast on the new film Censor is a little rambling and giggly, but is interesting as it covers video nasties and reminisces about the James Ferman era of film censorship.

See article from bbfc.co.uk

 

 

Offsite Article: The state should not police private Snapchat groups...


Link Here 18th August 2021
Full story: Insulting UK Law...UK prosecutions of jokes and insults on social media
Two men have been sent down for sharing a racist video about Priti Patel. By Andrew Tettenborn

See article from spiked-online.com

 

 

If You Build It, They Will Come...

Apple Has Opened the Backdoor to Increased Surveillance and Censorship Around the World


Link Here 15th August 2021
Full story: Apple Privacy...Apple scans users' images for sexual content and child abuse

Apple's new program for scanning images sent on iMessage steps back from the company's prior support for the privacy and security of encrypted messages. The program, initially limited to the United States, narrows the understanding of end-to-end encryption to allow for client-side scanning. While Apple aims at the scourge of child exploitation and abuse, the company has created an infrastructure that is all too easy to redirect to greater surveillance and censorship. The program will undermine Apple's defense that it can't comply with the broader demands.

For years, countries around the world have asked for access to and control over encrypted messages, asking technology companies to "nerd harder" when faced with the pushback that access to messages in the clear was incompatible with strong encryption. The Apple child safety message scanning program is currently being rolled out only in the United States.

The United States has not been shy about seeking access to encrypted communications, pressuring the companies to make it easier to obtain data with warrants and to voluntarily turn over data. However, the U.S. would face serious constitutional issues if it wanted to pass a law that required warrantless screening and reporting of content. Even if conducted by a private party, a search ordered by the government is subject to the Fourth Amendment's protections. Any "warrant" issued for suspicionless mass surveillance would be an unconstitutional general warrant. As the Ninth Circuit Court of Appeals has explained, "Search warrants . . . are fundamentally offensive to the underlying principles of the Fourth Amendment when they are so bountiful and expansive in their language that they constitute a virtual, all-encompassing dragnet[.]" With this new program, Apple has failed to hold a strong policy line against U.S. laws undermining encryption, but there remains a constitutional backstop to some of the worst excesses. But U.S. constitutional protection may not necessarily be replicated in every country.

Apple is a global company, with phones and computers in use all over the world, and it faces the pressure from many governments that comes along with that. Apple has promised it will refuse government "demands to build and deploy government-mandated changes that degrade the privacy of users." It is good that Apple says it will not, but this is not nearly as strong a protection as saying it cannot, which could not honestly be said about any system of this type. Moreover, if it implements this change, Apple will need to not just fight for privacy, but win in legislatures and courts around the world. To keep its promise, Apple will have to resist the pressure to expand the iMessage scanning program to new countries, to scan for new types of content and to report outside parent-child relationships.

It is no surprise that authoritarian countries demand companies provide access and control to encrypted messages, often the last best hope for dissidents to organize and communicate. For example, Citizen Lab's research shows that--right now--China's unencrypted WeChat service already surveils images and files shared by users, and uses them to train censorship algorithms. "When a message is sent from one WeChat user to another, it passes through a server managed by Tencent (WeChat's parent company) that detects if the message includes blacklisted keywords before a message is sent to the recipient." As the Stanford Internet Observatory's Riana Pfefferkorn explains, this type of technology is a roadmap showing "how a client-side scanning system originally built only for CSAM [Child Sexual Abuse Material] could and would be suborned for censorship and political persecution." As Apple has found, China, with the world's biggest market, can be hard to refuse. Other countries are not shy about applying extreme pressure on companies, including arresting local employees of the tech companies.
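The WeChat relay that Citizen Lab describes can be illustrated with a few lines of code. This is a hypothetical sketch only, not Tencent's actual implementation; the blacklist entries and function names are invented purely to show the pattern of a server checking each message against a keyword list before forwarding it.

```python
# Illustrative sketch (not Tencent's code): server-side keyword filtering,
# where the relay server checks each message against a blacklist before
# passing it on to the recipient.
BLACKLIST = {"banned slogan", "forbidden name"}  # hypothetical entries

def relay(message, deliver):
    """Forward the message only if it contains no blacklisted keyword."""
    lowered = message.lower()
    if any(term in lowered for term in BLACKLIST):
        return False  # dropped server-side; the sender sees no error
    deliver(message)
    return True
```

The notable design point is the silent failure: the sender receives no indication that the message was dropped, which is exactly what makes such filtering hard for users to detect.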

But many times potent pressure to access encrypted data also comes from democratic countries that strive to uphold the rule of law, at least at first. If companies fail to hold the line in such countries, the changes made to undermine encryption can easily be replicated by countries with weaker democratic institutions and poor human rights records--often using similar legal language, but with different ideas about public order and state security, as well as what constitutes impermissible content, from obscenity to indecency to political speech. This is very dangerous. These countries, with poor human rights records, will nevertheless contend that they are no different. They are sovereign nations, and will see their public-order needs as equally urgent. They will contend that if Apple is providing access to any nation-state under that state's local laws, Apple must also provide access to other countries, at least, under the same terms.

'Five Eyes' Countries Will Seek to Scan Messages

For example, the Five Eyes--an alliance of the intelligence services of Canada, New Zealand, Australia, the United Kingdom, and the United States--warned in 2018 that they will "pursue technological, enforcement, legislative or other measures to achieve lawful access solutions" if the companies didn't voluntarily provide access to encrypted messages. More recently, the Five Eyes have pivoted from terrorism to the prevention of CSAM as the justification, but the demand for unencrypted access remains the same, and the Five Eyes are unlikely to be satisfied without changes to assist terrorism and criminal investigations too.

The United Kingdom's Investigatory Powers Act, following through on the Five Eyes' threat, allows their Secretary of State to issue "technical capability notices," which oblige telecommunications operators to maintain the technical ability of "providing assistance in giving effect to an interception warrant, equipment interference warrant, or a warrant or authorisation for obtaining communications data." As the UK Parliament considered the IPA, we warned that a "company could be compelled to distribute an update in order to facilitate the execution of an equipment interference warrant, and ordered to refrain from notifying their customers."

Under the IPA, the Secretary of State must consider "the technical feasibility of complying with the notice." But the infrastructure needed to roll out Apple's proposed changes makes it harder to say that additional surveillance is not technically feasible. With Apple's new program, we worry that the UK might try to compel an update that would expand the current functionality of the iMessage scanning program, with different algorithmic targets and wider reporting. As the iMessage "communication safety" feature is entirely Apple's own invention, Apple can all too easily change its own criteria for what will be flagged for reporting. Apple may receive an order to adopt its hash matching program for iPhoto into the message pre-screening. Likewise, the criteria for which accounts will apply this scanning, and where positive hits get reported, are wholly within Apple's control.

Australia followed suit with its Assistance and Access Act, which likewise allows for requirements to provide technical assistance and capabilities, with the disturbing potential to undermine encryption. While the Act contains some safeguards, a coalition of civil society organizations, tech companies, and trade associations, including EFF and--wait for it--Apple, explained that they were insufficient.

Indeed, in Apple's own submission to the Australian government, Apple warned "the government may seek to compel providers to install or test software or equipment, facilitate access to customer equipment, turn over source code, remove forms of electronic protection, modify characteristics of a service, or substitute a service, among other things." If only Apple would remember that these very techniques could also be used in an attempt to mandate or change the scope of Apple's scanning program.

While Canada has yet to adopt an explicit requirement for plain text access, the Canadian government is actively pursuing filtering obligations for various online platforms, which raise the spectre of a more aggressive set of obligations targeting private messaging applications.

Censorship Regimes Are In Place And Ready to Go

For the Five Eyes, the ask is mostly for surveillance capabilities, but India and Indonesia are already down the slippery slope to content censorship. The Indian government's new Intermediary Guidelines and Digital Media Ethics Code ("2021 Rules"), in effect earlier this year, directly imposes dangerous requirements for platforms to pre-screen content. Rule 4(4) compels content filtering, requiring that providers "endeavor to deploy technology-based measures," including automated tools or other mechanisms, to "proactively identify information" that has been forbidden under the Rules.

India's defense of the 2021 Rules, written in response to criticism from three UN Special Rapporteurs, highlights the very real dangers to children, and skips over the much broader mandate of the scanning and censorship rules. The 2021 Rules impose proactive and automatic enforcement of their content takedown provisions, requiring the proactive blocking of material previously held to be forbidden under Indian law. These laws broadly include those protecting "the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality." This is no hypothetical slippery slope--it's not hard to see how this language could be dangerous to freedom of expression and political dissent. Indeed, India's track record on its Unlawful Activities Prevention Act, which has reportedly been used to arrest academics, writers and poets for leading rallies and posting political messages on social media, highlights this danger.

It would be no surprise if India claimed that Apple's scanning program was a great start towards compliance, with a few more tweaks needed to address the 2021 Rules' wider mandate. Apple has promised to protest any expansion, and could argue in court, as WhatsApp and others have, that the 2021 Rules should be struck down, or that Apple does not fit the definition of a social media intermediary regulated under these 2021 Rules. But the Indian rules illustrate both the governmental desire and the legal backing for pre-screening encrypted content, and Apple's changes make it all the easier to slip into this dystopia.

This is, unfortunately, an ever-growing trend. Indonesia, too, has adopted Ministerial Regulation MR5 to require service providers (including "instant messaging" providers) to "ensure" that their system "does not contain any prohibited [information]; and [...] does not facilitate the dissemination of prohibited [information]". MR5 defines prohibited information as anything that violates any provision of Indonesia's laws and regulations, or creates "community anxiety" or "disturbance in public order." MR5 also imposes disproportionate sanctions, including a general blocking of systems for those who fail to ensure there is no prohibited content and information in their systems. Indonesia may also see the iMessage scanning functionality as a tool for compliance with Regulation MR5, and pressure Apple to adopt a broader and more invasive version in their country.

Pressure Will Grow

The pressure to expand Apple's program to more countries and more types of content will only continue. In fall of 2020, in the European Union, a series of leaked documents from the European Commission foreshadowed an anti-encryption law to the European Parliament, perhaps this year. Fortunately, there is a backstop in the EU. Under the e-Commerce Directive, EU Member States are not allowed to impose a general obligation to monitor the information that users transmit or store, as stated in Article 15 of the e-Commerce Directive (2000/31/EC). Indeed, the Court of Justice of the European Union (CJEU) has stated explicitly that intermediaries may not be obliged to monitor their services in a general manner in order to detect and prevent illegal activity of their users. Such an obligation would be incompatible with fairness and proportionality. Despite this, in a leaked internal document published by Politico, the European Commission committed itself to an action plan for mandatory detection of CSAM by relevant online service providers (expected in December 2021) that pointed to client-side scanning as the solution, which could potentially apply to secure private messaging apps, seizing upon the notion that it preserves the protection of end-to-end encryption.

For governmental policymakers who have been urging companies to nerd harder, wordsmithing harder is just as good. The end result of access to unencrypted communication is the goal, and if that can be achieved in a way that arguably leaves a more narrowly defined end-to-end encryption in place, all the better for them.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, the adoption of the iPhoto hash matching to iMessage, or a tweak of the configuration flags to scan, not just children's, but anyone's accounts. Apple has a fully built system just waiting for external pressure to make the necessary changes. China and doubtless other countries already have hashes and content classifiers to identify messages impermissible under their laws, even if they are protected by international human rights law. The abuse cases are easy to imagine: governments that outlaw homosexuality might require a classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand a classifier able to spot popular satirical images or protest flyers.
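The point about configuration flags can be made concrete. The sketch below is entirely hypothetical (none of these names come from Apple's system); it illustrates why, once a client-side scanner ships on a device, widening its scope is a configuration change rather than a new engineering effort.

```python
# Hypothetical sketch (not Apple's code): a shipped client-side scanner is
# one configuration change away from broader use. All names are invented.
from dataclasses import dataclass, field

@dataclass
class ScanConfig:
    hash_blocklist: set = field(default_factory=set)  # flagged image digests
    scan_accounts: str = "children_only"              # or "all_accounts"

def scan_image(image_hash, account_type, cfg):
    """Return True if this image would be flagged under the configuration."""
    if cfg.scan_accounts == "children_only" and account_type != "child":
        return False
    return image_hash in cfg.hash_blocklist

# The scanning code above never changes; only the configuration does.
narrow = ScanConfig(hash_blocklist={"csam01"})
wide = ScanConfig(hash_blocklist={"csam01", "satire01"},
                  scan_accounts="all_accounts")
```

Under the narrow configuration only children's accounts are scanned against an abuse-image blocklist; under the wide one, the same unchanged code scans everyone against a blocklist that now includes, say, a satirical image, which is precisely the expansion the article warns about.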

Now that Apple has built it, they will come. With good intentions, Apple has paved the road to mandated security weakness around the world, enabling and reinforcing the arguments that, should the intentions be good enough, scanning through your personal life and private communications is acceptable. We urge Apple to reconsider and return to the mantra Apple so memorably emblazoned on a billboard at 2019's CES conference in Las Vegas: What happens on your iPhone, stays on your iPhone.

 

 

Offsite Article: Spectre...


Link Here15th August 2021
Movie-Censorship.com reveals ITV's TV edits for an 8:30pm broadcast

See article from movie-censorship.com

 

 

Offsite Article: A legal view on the Online Harms Bill...


Link Here15th August 2021
Regulating content on user-to-user and search service providers. By Rafe Jennings

See article from ukhumanrightsblog.com

 

 

Offsite Article: Invasive medical intervention...


Link Here15th August 2021
New AI tool will allow businesses to see if employees are REALLY ill by remotely checking their vital signs on a smartphone

See article from dailymail.co.uk

 

 

Updated: Danger! Poisoned Apple...

Apple will add software to scan all your images, nominally for child abuse, but no doubt governments will soon be adding politically incorrect memes to the list


Link Here14th August 2021
Full story: Apple Privacy...Apple scans users' images for sexual content and child abuse
   Apple intends to install software, initially on American iPhones, to scan for child abuse imagery, raising alarm among security researchers who warn that it will open the door to surveillance of millions of people’s personal devices.

The automated system would proactively alert a team of human reviewers if it believes illegal imagery has been detected; the reviewers would then contact law enforcement if the material can be verified. The scheme will initially roll out only in the US.

According to people briefed on the plans, every photo uploaded to iCloud in the US will be given a safety voucher saying whether it is suspect or not. Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.

The scheme seems to be a nasty compromise with governments to allow Apple to offer encrypted communication whilst allowing state security to see what some people may be hiding.

Alec Muffett, a security researcher and privacy campaigner who formerly worked at Facebook and Deliveroo, said Apple's move was tectonic and a huge and regressive step for individual privacy. Apple are walking back privacy to enable 1984, he said.

Ross Anderson, professor of security engineering at the University of Cambridge, said:

It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of . . . our phones and laptops.

Although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple's precedent could also increase pressure on other tech companies to use similar techniques.

And given that the system is based on mapping images to a hash code and then comparing that hash code with those from known child porn images, then surely there is a chance of a false positive when an innocent image just happens to map to the same hash code as an illegal image. That could surely have devastating consequences, with police banging on doors at dawn accompanied by the 'there's no smoke without fire' presumption of guilt that exists around the scourge of child porn. An unlucky hash may then lead to a trashed life.
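The collision worry is easy to demonstrate in miniature. The toy below is not Apple's NeuralHash; it deliberately truncates a cryptographic digest to a small number of bits (perceptual hashes are likewise far smaller than full digests) and then searches for two different inputs that share the same hash value:

```python
import hashlib
import itertools

def tiny_hash(data: bytes, bits: int = 16) -> int:
    """A deliberately truncated hash. Real perceptual hashes are also
    much smaller than cryptographic digests, so collisions are possible."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# Search for two different inputs sharing the same tiny hash
# (a birthday-style collision; guaranteed within 2**16 + 1 tries).
seen = {}
collision = None
for i in itertools.count():
    data = f"image-{i}".encode()
    h = tiny_hash(data)
    if h in seen:
        collision = (seen[h], data)
        break
    seen[h] = data

a, b = collision
print(a, b, "collide on hash value", tiny_hash(a))
```

Two unrelated "images" end up with identical hashes after only a few hundred tries, which is exactly the false-positive mechanism described above, just on a smaller scale.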

Apple's official blog post inevitably frames the new snooping capability as if it was targeted only at child porn but it is clear that the capability can be extended way beyond this narrow definition. The blog post states:

Child Sexual Abuse Material (CSAM) detection

To help address this, new technology in iOS and iPadOS* will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

Apple's method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices.

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user's account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.

This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users' photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.

Expanding guidance in Siri and Search

Apple is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.

Siri and Search are also being updated to intervene when users perform searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.

These updates to Siri and Search are coming later this year in an update to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.*

Update: Apple's photo scanning and snooping 'misunderstood'

13th August 2021. See article from cnet.com

Apple plans to scan some photos on iPhones, iPads and Mac computers for images depicting child abuse. The move has upset privacy advocates and security researchers, who worry that the company's newest technology could be twisted into a tool for surveillance and political censorship. Apple says those concerns are misplaced and based on a misunderstanding of the technology it's developed.

In an interview published Friday by The Wall Street Journal, Apple's software head, Craig Federighi, attributed much of people's concerns to the company's poorly handled announcements of its plans. Apple won't be scanning all photos on a phone, for example, only those connected to its iCloud Photo Library syncing system.

It's really clear a lot of messages got jumbled pretty badly in terms of how things were understood, Federighi said in his interview. We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing.

 

 

Update: Apple offers slight improvements

14th August 2021. See article from theverge.com

The idea that Apple would be snooping on your device to detect child porn and nude images hasn't gone down well with users and privacy campaigners. The bad publicity has prompted the company to offer an olive branch.

To address the possibility of countries expanding the scope of flagged images for their own surveillance purposes, Apple says it will only detect images that appear on at least two countries' lists. Apple says it won't rely on a single government-affiliated database -- like that of the US-based National Center for Missing and Exploited Children, or NCMEC -- to identify CSAM. Instead, it will only match pictures from at least two groups with different national affiliations. The goal is that no single government could have the power to secretly insert unrelated content for censorship purposes, since it wouldn't match hashes in any other database.

Apple has also said that it would 'resist' requests from countries to expand the definition of images of interest. However this is a worthless reassurance when all it would take is a court order for Apple to be forced into complying with any requests that the authorities make.

Apple has also stated the tolerances that will be applied to prevent false positives. It is alarming that innocent images can in fact generate a hash code that matches a child porn image. And to try and prevent innocent people from being locked up, Apple will now require 30 images to have hashes matching illegal images before the images get investigated by Apple staff. Previously Apple had declined to comment on what the tolerance value would be.
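The two safeguards Apple describes (the two-database requirement and the 30-image tolerance) can be sketched as plain set logic. All the names and hash values below are illustrative stand-ins, not Apple's actual data or API:

```python
# Hypothetical sketch: a hash is only eligible for matching if it
# appears in at least two independently-operated databases, and an
# account is only reviewed after 30 eligible matches.

NCMEC_HASHES = {1, 2, 3, 4}            # stand-in for the US database
OTHER_COUNTRY_HASHES = {3, 4, 5, 6}    # stand-in for a second group's list

# Only hashes vouched for by two groups with different national
# affiliations are matched at all.
ELIGIBLE = NCMEC_HASHES & OTHER_COUNTRY_HASHES   # here: {3, 4}

REVIEW_THRESHOLD = 30

def needs_human_review(matched_hashes: list[int]) -> bool:
    """True only once 30 or more eligible hashes have matched."""
    hits = [h for h in matched_hashes if h in ELIGIBLE]
    return len(hits) >= REVIEW_THRESHOLD

print(needs_human_review([3] * 29))     # → False: one short of the threshold
print(needs_human_review([3, 4] * 15))  # → True: 30 eligible matches
```

The design point is that a hash inserted into only one national database (the censorship scenario above) never enters the eligible set, so it can never contribute to the 30-match count.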

 

 

The truth will out...

Thailand rescinds order censoring posts that 'cause fear' even if they are true


Link Here13th August 2021

Thailand's military backed government has been forced by a court injunction to rescind a recent order banning news that causes public fear, as it faces growing protests over its handling of the Covid pandemic.

The government, which had sought to restrict news that causes public fear, even if it is true, had been accused by journalists and human rights groups of trying to prevent negative reporting and silence critics. The civil court issued an injunction against the regulation last week and it was revoked on Tuesday.

Thai officials are facing increasing public anger over their response to a recent wave of Covid-19, including over the country's slow vaccination campaign. Protesters took to the streets over the weekend and again on Tuesday, with police firing rubber bullets, teargas and water cannon to disperse them.

 

 

 

What they taught me...

Author to rewrite memoir after PC bullies make accusations about supposed racism and ableism


Link Here11th August 2021
Publisher Picador is considering alterations to Kate Clanchy's Orwell prize-winning memoir amid accusations of supposed racial and ableist stereotyping.

The 2019 book titled Some Kids I Taught and What They Taught Me examines Clanchy's time teaching in UK state schools.

A PC lynch mob of critics and fellow authors formed after a reader review highlighted alleged problematic descriptions of children. The critics highlighted passages that referenced racial facial features and skin colour - as well as describing two autistic children as unselfconsciously odd and jarring company.

In response, the author said she's grateful for a chance to rewrite it:

I know I got many things wrong, and welcome the chance to write better, more lovingly

It was wrong. I don't really have an excuse, except that I am bereaved and it takes people in different ways.

I am not a good person. I do try to say that in my book. Not a pure person, not a patient person, no one's saviour. You are right to blame me, and I blame myself.

Authors who criticised Clanchy's response and questioned the award-winning merit of the book, including Chimene Suleyman, Monisha Rajesh and Professor Sunny Singh, also received criticism from social media users.

 

 

Comment: No one is safe from the woke mob

11th August 2021. See article from spiked-online.com by Joanna Williams

This, remember, is a book about Clanchy's experiences teaching a diverse array of comp kids and the lovely, life-affirming time she had doing so.

The people suggesting it is actually a thinly veiled retread of Mein Kampf are, not to put too fine a point on it, nuts. And yet Clanchy, her publisher and some of Clanchy's early defenders have bowed to these people and issued grovelling apologies for the emotional harm they have allegedly inflicted.

So much of the mob censorship we see today would disappear overnight if people and institutions grew about an inch of backbone and told the woke irritants to piss off. Why so few seem capable of doing this is a key question of this mad age of ours.

 

 

Tearful apologies...

Facebook shamed into reversing censorship of the poster for Pedro Almodóvar's Parallel Mothers


Link Here11th August 2021
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Madres paralelas is a 2022 Spanish drama by Pedro Almodóvar
Starring Penélope Cruz, Rossy de Palma and Aitana Sánchez-Gijón IMDb

Two women, Janis and Ana, coincide in a hospital room where they are going to give birth. Both are single and became pregnant by accident. Janis, middle-aged, doesn't regret it and she is exultant. The other, Ana, an adolescent, is scared, repentant and traumatized. Janis tries to encourage her while they move like sleepwalkers along the hospital corridors. The few words they exchange in these hours will create a very close link between the two, which by chance develops and complicates, and changes their lives in a decisive way.

Instagram's owner Facebook has reversed a ban on a poster for Spanish director Pedro Almodovar's new film, Madres Paralelas (Parallel Mothers), showing a nipple producing a drop of milk. The company was shamed by bad publicity after its naff 'AI' censorship algorithm proved a failure in distinguishing art from porn. Facebook said it had made an exception to its usual ban on nudity because of the clear artistic context.

The promotional image was made to look like an eyeball producing a teardrop. Javier Jaen, who designed the advert for Madres Paralelas (Parallel Mothers), had said the platform should be ashamed for its censorship.

 

 

Insulting comments...

Ofcom Censures Indian language channel for derogatory comments about Pakistani people


Link Here9th August 2021
Ofcom has censured the English and Indian language channel, Republic Bharat for breaching UK censorship rules by bad mouthing Pakistan.

TV censor Ofcom said it had received a complaint that a programme contained demeaning statements which amounted to abusive and derogatory treatment of Pakistani people.

The Debate with Arnab is a regular English-language current affairs debate and discussion programme presented by the journalist Arnab Goswami. The debate featured in this episode took place in the context of Pakistan's announcement that it was planning to hold parliamentary elections in Gilgit-Baltistan.

Ofcom noted:

This programme contained material which was abusive and derogatory towards Pakistani people, in particular, by ascribing negative characteristics to Pakistani people as a whole on the basis of their nationality. Ofcom therefore considered this programme clearly had the potential to cause significant offence.

Ofcom expressed concern that after the publication of the above decisions, the Licensee went on to broadcast two further programmes which we have found to be in breach of Rule 3.3 of the Code.

Republic Bharat ceased broadcasting in the UK in May.

 

 

Comments: Poisoned Apple...

The EFF comments: Apple's Plan to Think Different About Encryption Opens a Backdoor to Your Private Life


Link Here9th August 2021
Full story: Apple Privacy...Apple scans users' images for sexual content and child abuse

Apple has announced impending changes to its operating systems that include new 'protections for children' features in iCloud and iMessage. If you've spent any time following the Crypto Wars, you know what this means: Apple is planning to build a backdoor into its data storage system and its messaging system.

Child exploitation is a serious problem, and Apple isn't the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.

To say that we are disappointed by Apple's plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple's compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company's leadership in privacy and security.

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts (that is, accounts designated as owned by a minor) for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

When Apple releases these client-side scanning functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and possibly be unable to safely use what until this development is one of the preeminent encrypted messengers.

Apple Is Opening the Door to Broader Abuse

We've said it before, and we'll say it again now: it's impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger's encryption itself and open the door to broader abuses.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children's, but anyone's accounts. That's not a slippery slope; that's a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of misinformation in 24 hours may apply to messaging services. And many other countries, often those with authoritarian governments, have passed similar laws. Apple's changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

We've already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of terrorist content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it's therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as terrorism, including documentation of violence and repression, counterspeech, art, and satire.

Image Scanning on iCloud Photos: A Decrease in Privacy

Apple's plan for scanning photos that get uploaded into iCloud Photos is similar in some ways to Microsoft's PhotoDNA. The main product difference is that Apple's scanning will happen on-device. The (unauditable) database of processed CSAM images will be distributed in the operating system (OS), the processed images transformed so that users cannot see what the image is, and matching done on those transformed images using private set intersection where the device will not know whether a match has been found. This means that when the features are rolled out, a version of the NCMEC CSAM database will be uploaded onto every single iPhone. The result of the matching will be sent up to Apple, but Apple can only tell that matches were found once a sufficient number of photos have matched a preset threshold.
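The "threshold secret sharing" mentioned here can be illustrated with textbook Shamir secret sharing: a key is split so that any t shares reconstruct it, but fewer reveal nothing. This is a toy over a small prime field, not Apple's actual construction:

```python
# Toy Shamir secret sharing: split `secret` into n shares such that
# any t of them recover it via Lagrange interpolation at x = 0.
import random

P = 2_147_483_647   # a Mersenne prime, used as the field modulus

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Evaluate a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse, since P is prime.
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(secret=123456, t=3, n=5)
print(recover(shares[:3]))  # any 3 of the 5 shares suffice  → 123456
```

In the analogy, each matching photo's safety voucher carries a share; only once the account crosses the match threshold does the server hold enough shares to reconstruct the decryption capability.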

Once a certain number of photos are detected, the photos in question will be sent to human reviewers within Apple, who determine that the photos are in fact part of the CSAM database. If confirmed by the human reviewer, those photos will be sent to NCMEC, and the user's account disabled. Again, the bottom line here is that whatever privacy and security aspects are in the technical details, all photos uploaded to iCloud will be scanned.

Make no mistake: this is a decrease in privacy for all iCloud Photos users, not an improvement.

Currently, although Apple holds the keys to view Photos stored in iCloud Photos, it does not scan these images. Civil liberties organizations have asked the company to remove its ability to do so. But Apple is choosing the opposite approach and giving itself more knowledge of users' content.

Machine Learning and Parental Notifications in iMessage: A Shift Away From Strong Encryption

Apple's second main new feature is two kinds of notifications based on scanning photos sent or received by iMessage. To implement these notifications, Apple will be rolling out an on-device machine learning classifier designed to detect sexually explicit images. According to Apple, these features will be limited (at launch) to U.S. users under 18 who have been enrolled in a Family Account. In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the parent will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.

Similarly, if the under-13 child receives an image that iMessage deems to be sexually explicit, before being allowed to view the photo, a notification will pop up that tells the under-13 child that their parent will be notified that they are receiving a sexually explicit image. Again, if the under-13 user accepts the image, the parent is notified and the image is saved to the phone. Users between 13 and 17 years old will similarly receive a warning notification, but a notification about this action will not be sent to their parent's device.

This means that if, for instance, a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, they do not receive a notification that iMessage considers their image to be explicit or that the recipient's parent will be notified. The recipient's parents will be informed of the content without the sender consenting to their involvement. Additionally, once sent or received, the sexually explicit image cannot be deleted from the under-13 user's device.

Whether sending or receiving such content, the under-13 user has the option to decline without the parent being notified. Nevertheless, these notifications give the sense that Apple is watching over the user's shoulder, and in the case of under-13s, that's essentially what Apple has given parents the ability to do.
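The branching EFF describes over the preceding paragraphs can be sketched as plain decision logic. The age cutoffs follow the article; the function and field names are illustrative, not Apple's API:

```python
# Hypothetical sketch of the iMessage parental-notification flow
# for an image the on-device classifier flags as sexually explicit.

def handle_explicit_image(age: int, user_proceeds: bool) -> dict:
    """Decide what happens when a flagged image is sent or received."""
    if age >= 18:
        # Adult accounts are outside the feature entirely.
        return {"warned": False, "parent_notified": False, "saved": False}
    if not user_proceeds:
        # Declining always avoids the parental notification.
        return {"warned": True, "parent_notified": False, "saved": False}
    if age < 13:
        # Under-13s who proceed trigger a parental notification and an
        # irrevocable copy in the parental-controls section.
        return {"warned": True, "parent_notified": True, "saved": True}
    # Ages 13-17: warned, but no notification is sent to the parent.
    return {"warned": True, "parent_notified": False, "saved": False}

print(handle_explicit_image(age=12, user_proceeds=True))
```

Laid out this way, the asymmetry is visible: the sender's choices never enter the function, yet the recipient's settings decide whether a parent is told.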

It is also important to note that Apple has chosen to use the notoriously difficult-to-audit technology of machine learning classifiers to determine what constitutes a sexually explicit image. We know from years of documentation and research that machine-learning technologies, used without human oversight, have a habit of wrongfully classifying content, including supposedly sexually explicit content. When blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery in the net, including pictures of Pomeranian puppies, selfies of fully-clothed individuals, and more. Facebook's attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen's Little Mermaid. These filters have a history of chilling expression, and there's plenty of reason to believe that Apple's will do the same.

Since the detection of a sexually explicit image will be using on-device machine learning to scan the contents of messages, Apple will no longer be able to honestly call iMessage end-to-end encrypted. Apple and its proponents may argue that scanning before or after a message is encrypted or decrypted keeps the end-to-end promise intact, but that would be semantic maneuvering to cover up a tectonic shift in the company's stance toward strong encryption.

Whatever Apple Calls It, It's No Longer Secure Messaging

As a reminder, a secure messaging system is a system where no one but the user and their intended recipients can read the messages or otherwise analyze their contents to infer what they are talking about. Despite messages passing through a server, an end-to-end encrypted message will not allow the server to know the contents of a message. When that same server has a channel for revealing information about the contents of a significant portion of messages, that's not end-to-end encryption. In this case, while Apple will never see the images sent or received by the user, it has still created the classifier that scans the images that would provide the notifications to the parent. Therefore, it would now be possible for Apple to add new training data to the classifier sent to users' devices or send notifications to a wider audience, easily censoring and chilling speech.

But even without such expansions, this system will give parents who do not have the best interests of their children in mind one more way to monitor and control them, limiting the internet's potential for expanding the world of those whose lives would otherwise be restricted. And because family sharing plans may be organized by abusive partners, it's not a stretch to imagine using this feature as a form of stalkerware.

People have the right to communicate privately without backdoors or censorship, including when those people are minors. Apple should make the right decision: keep these backdoors off of users' devices.

 

 

Rajan Zed recommends...

Kali Yuga East India Porter brewed by Bang the Elephant Brewing Co


Link Here9th August 2021
Full story: Rajan Zed...Taking easy offence at hindu imagery
The perennial Hindu whinger Rajan Zed is complaining about a UK beer label.

Kali Yuga East India Porter brewed by Nottingham based Bang the Elephant Brewing Co uses the Hindu goddess Kali's image on its East India Porter beer can.

Zed said that inappropriate usage of sacred Hindu deities or concepts or symbols or icons for commercial or other agenda was not okay as it hurt the devotees. He added:

Breweries should not be in the business of religious appropriation, sacrilege, and ridiculing entire communities. It was deeply trivializing of immensely venerated Hindu goddess Kali to be portrayed on a beer label.

Hindus were for free artistic expression and speech as much as anybody else if not more ...BUT... faith was something sacred and attempts at trivializing it hurt the followers.

 

 

Offsite Article: Do we really need to be protected from Allo 'Allo ?...


Link Here9th August 2021
Yet another much-loved sitcom has been slapped with an absurd content warning. By Tim Dawson

See article from spiked-online.com

 

 

Offsite Article: Handing out extraordinary powers to Silicon Valley companies...


Link Here 9th August 2021
Full story: Online Safety Bill Draft...UK Government legislates to censor social media
The government's online safety bill is another unseen power-grab. By Patrick Maxwell

See article from politics.co.uk

 

 

Opaque rules...

The Hungarian Government orders book shops to sell gay and sexy books in opaque covers


Link Here7th August 2021
  The Hungarian Government has ordered shops to wrap children's books that depict homosexuality in a positive light in closed packaging as the government of Prime Minister Viktor Orban fights against gay rights.

Under a government decree, stores will also be banned from selling books seen as containing explicit depictions of sexuality or narratives around gender change within a 200 meter radius of schools or churches. The rules similarly outlaw displays of products that depict gender roles that are different from an individual's gender at birth.

The latest steps come a month after Hungary introduced a law banning the dissemination of LGBTQ+ content in schools.

The European Commission has filed legal proceedings against Hungary claiming the rules violate the right to freedom of expression and information.

 

 

Locked down and locked up...

Wearing a bikini gets an Indonesian woman arrested on pornography charges


Link Here7th August 2021
Indonesian artist Dinar Candy has held a protest action over the extension of Indonesia's covid lockdown laws (Enforcement of Restrictions on Public Activities, PPKM) by wearing a bikini on the side of a road in Jakarta.

During the action, Candy also displayed a banner with the message: I'm stressed out because the PPKM has been extended.

Candy was arrested by police who confiscated her mobile phone which is alleged to have been used to record the protest.

After being questioned by police, who sought advice from an 'expert witness on morality and culture', presumably a cleric, she was declared a suspect of an alleged act of pornography. Candy was charged under Article 36 of Law Number 44/2008 on Pornography.

 

 

Comments: Poisoned Apple...

Comments about Apple's plans to scan people's phones and tablets seeking sexual content and child abuse


Link Here7th August 2021
Full story: Apple Privacy...Apple scans users' images for sexual content and child abuse

Apple has announced 2 new snooping capabilities (initially for US users only) that will be added to its operating systems in the near future.

The first will be to analyse the content of pictures on users' devices sent in messages. Apple says that this system will only be used to inform parents when their under 12yo children attempt to send sexual content. No doubt Apple will come under pressure to scan images of all users for an ever expanding list of restrictions, eg terrorism, covid memes, copyrighted images etc.

The second scan matches any photos being uploaded to Apple's iCloud Photo Library against a curated list of child abuse image hashes. If images match, then Apple will decrypt the flagged images, judge for themselves whether they are illegal, and then inform the police. Apple claims that they will avoid the life-shattering possibility of a false positive by only investigating if several images match hashes.
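The threshold idea described above can be illustrated with a much-simplified sketch. This is not Apple's actual implementation (which uses a perceptual hash called NeuralHash together with cryptographic private set intersection and threshold secret sharing); the hash values, list and threshold below are purely hypothetical, and the sketch only shows why a single chance match would not, by itself, trigger human review.

```python
# Illustrative sketch of threshold-based hash matching.
# KNOWN_HASHES and REVIEW_THRESHOLD are hypothetical values,
# not anything from Apple's real system.

KNOWN_HASHES = {"a1b2", "c3d4", "e5f6"}  # stand-in for a curated hash list
REVIEW_THRESHOLD = 2  # several matches required before any human review

def flag_for_review(upload_hashes, known=KNOWN_HASHES, threshold=REVIEW_THRESHOLD):
    """Return True only if enough uploads match the known-hash list."""
    matches = sum(1 for h in upload_hashes if h in known)
    return matches >= threshold

# A single match (a possible false positive) does not trigger review:
print(flag_for_review(["a1b2", "zzzz"]))   # False
# Multiple matches do:
print(flag_for_review(["a1b2", "c3d4"]))   # True
```

The design choice being sketched is simply that the cost of a false positive is high, so the system waits for several independent matches before anyone looks at the images.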

Update: WhatsApp responds: A setback for people's privacy all over the world

7th August 2021. See article from dailymail.co.uk

The head of WhatsApp tweeted a barrage of criticism against Apple over plans to automatically scan iPhones and cloud storage for images of child abuse. It would see flagged owners reported to the police after a company employee has looked at their photos.

But WhatsApp head Will Cathcart said the popular messaging app would not follow Apple's strategy. His comments add to a stream of criticism of Apple's new system from privacy campaigners who say it is the start of an infrastructure for surveillance and censorship. Cathcart said:

I think this is the wrong approach and a setback for people's privacy all over the world.

Apple's system can scan all the private photos on your phone -- even photos you haven't shared with anyone. That's not privacy.

People have asked if we'll adopt this system for WhatsApp. The answer is no.

Instead of focusing on making it easy for people to report content that's shared with them, Apple has built software that can scan all the private photos on your phone -- even photos you haven't shared with anyone. That's not privacy.

We've had personal computers for decades and there has never been a mandate to scan the private content of all desktops, laptops or phones globally for unlawful content. It's not how technology built in free countries works.

Will this system be used in China? What content will they consider illegal there and how will we ever know? How will they manage requests from governments all around the world to add other types of content to the list for scanning? Can this scanning software running on your phone be error proof? Researchers have not been allowed to find out. Why not? How will we know how often mistakes are violating people's privacy? What will happen when spyware companies find a way to exploit this software? Recent reporting showed the cost of vulnerabilities in iOS software as is. What happens if someone figures out how to exploit this new system? Cathcart asked.

There are so many problems with this approach, and it's troubling to see them act without engaging experts that have long documented their technical and broader concerns with this.

 

 

Offsite Article: Vaguely repressive...


Link Here7th August 2021
New Zealand government's proposed hate speech law attacks free speech

See article from wsws.org

 

 

Censorship Crysis...

The video game Crysis 3 is cut in Australia for an MA15+ rating


Link Here5th August 2021
Crysis 3 is a 2013 first-person shooter video game developed by Crytek and published in 2013 by Electronic Arts.

The 2021 re-mastered trilogy release has been cut in Australia to obtain an MA15+ rating.

The re-mastered game was originally passed R18+ uncut in 2021 but the distributors preferred a cut MA15+ rated release.

The cuts were to remove scenes depicting drug use.

 

 

Now Voyager...

Classic Bette Davis film with a BBFC rating that is up and down like a whore's drawers


Link Here5th August 2021
Now Voyager is a 1942 US romance by Irving Rapper
Starring Bette Davis, Paul Henreid and Claude Rains
There are no censorship issues with this release beyond noting that the film was rated PG until 1986, but was downrated to U in 2008, and was restored to PG in 2021.

Summary Notes

A frumpy spinster blossoms under therapy and becomes an elegant, independent woman.

Versions

BBFC uncut
run: 117:13s
pal: 112:32s
UK: Passed PG uncut for mild sex references:
  • 2021 cinema release titled Now, Voyager
UK: Passed U uncut for very mild sex references:
  • 2008 cinema release titled Now, Voyager
UK: Passed PG uncut for mild sex references:
  • 1986 Warner Home Video Ltd VHS
UK: Passed A (PG) uncut:
  • 1942 cinema release

 

 

Better?...

Spanish advert in the series about strange characters getting back to normal after eating a Snickers winds up the easily offended


Link Here5th August 2021
Snickers in Spain has pulled a controversial TV advert after complaints from a few people who considered it 'homophobic'.

The advert is one of a long running series showing strange characters getting back to normal after eating a Snickers.

In this case the strange character was the rather effeminate Spanish influencer Aless Gibaja who transformed into a regular masculine guy with a beard and low voice.

The video went viral this week, with some calling for a boycott of Snickers over homophobia, presumably because the masculine guy was depicted as an improvement on the effeminate guy.

The State Federation of Lesbians, Gays and Bisexuals tweeted:

It is shameful and regrettable that at this point there are companies that continue to perpetuate stereotypes and promote homophobia.

Spain's equality minister, Irene Montero, also joined the criticism:

I wonder to whom it might seem like a good idea to use homophobia as a business strategy.

Our society is diverse and tolerant. Hopefully those who have the power to make decisions about what we see and hear in commercials and TV shows will learn to be too.

On Thursday, Snickers Spain said it was deleting the advert and apologised for any misunderstanding it may have caused. The company said:

In this specific campaign, the aim was to convey in a friendly and casual way that hunger can change your character. At no time has it been intended to stigmatize or offend any person or group.

 

 

A hung jury...

The BBFC 15 rated The Suicide Squad stirs a little press interest with some viewers suggesting an 18 rating would be preferable


Link Here3rd August 2021
The Suicide Squad is a 2021 USA action comedy adventure by James Gunn
Starring Margot Robbie, Idris Elba and John Cena

Supervillains Harley Quinn, Bloodsport, Peacemaker and a collection of nutty cons at Belle Reve prison join the super-secret, super-shady Task Force X as they are dropped off at the remote, enemy-infused island of Corto Maltese.

There are no cuts issues with this film.

In Ireland IFCO rated the film 16 for strong violence and bloody action. It was also 16 rated in New Zealand and Germany.

In the US it was rated R for strong violence and gore, language throughout, some sexual references, drug use and brief graphic nudity.

In Australia the film was originally rated R18+ by the censor board, but this was reduced to MA15+ on appeal to the review board. The appeal was paid for by the distributor.

In the UK the film was passed 15 uncut for strong bloody violence, gore, language, brief drug misuse. The BBFC described the violence in the film:

Frequent scenes of violence include shootings, stabbings, slashings, decapitations, limbs being cut off, people being crushed, melted, torn apart and exploding. Much of the violence has a darkly comic tone, and results in bloody detail and gory images.

The decision generated a little press coverage, with some viewers suggesting that the film should be 18 rated. In response to The Independent's request for comment, the BBFC said:

Whilst comparatively more violent than the last film, the violence is mitigated by the film's humour and the action-packed fantasy context. The violence and gore were sufficiently mitigated due to the focus on action within a comic, fantastic, superhero context. At 15, our Classification Guidelines state that 'violence may be strong but should not dwell on the infliction of pain or injury'.

 

 

Who remembers those comic book X-ray specs adverts?...

UK MP Maria Miller wants to ban an app that claims it can work out the nude body that hides behind clothed photos


Link Here3rd August 2021
MP Maria Miller wants a parliamentary debate on whether digitally generated but imaginary nude images need to be banned.

It comes as another service has emerged that claims to show what people in photos look like undressed.

The DeepSukebe nudifier website had more than five million visits in June, according to one analyst. Celebrities, including an Olympic athlete, are among those whom users claim to have nudified.

DeepSukebe's website claims it can reveal the truth hidden under clothes. According to its Twitter page, it is an AI-leveraged nudifier whose mission is to make all men's dreams come true. And in a blog post, the developers say that they are working on a more powerful version of the tool.

Miller told the BBC it was time to consider a ban of such tools:

Parliament needs to have the opportunity to debate whether nude and sexually explicit images generated digitally without consent should be outlawed, and I believe if this were to happen the law would change.

If software providers develop this technology, they are complicit in a very serious crime and should be required to design their products to stop this happening.

She said that it should be an offence to distribute sexual images online without consent to reflect the severity of the impact on people's lives. Miller wants the issue to be included in the forthcoming Online Safety Bill.

 

 

The Sky is falling...

YouTube censors Sky News Australia claiming covid misinformation


Link Here1st August 2021
Sky News Australia has been banned from uploading content to YouTube for seven days with Google deciding that the news channel violated its medical misinformation censorship policies.

The ban was imposed the day after the Daily Telegraph ended Alan Jones's regular column amid controversy about his Covid-19 commentary which included calling the New South Wales chief health officer Kerry Chant a village idiot on his Sky News program.

The Sky News Australia YouTube channel, which has 1.85m subscribers, has been issued a strike and is temporarily suspended from uploading new videos or livestreams for one week. A YouTube spokesperson told Guardian Australia:

We have clear and established Covid-19 medical misinformation policies based on local and global health authority guidance, to prevent the spread of Covid-19 misinformation that could cause real-world harm.

Specifically, we don't allow content that denies the existence of Covid-19 or that encourages people to use hydroxychloroquine or ivermectin to treat or prevent the virus. We do allow for videos that have sufficient countervailing context, which the violative videos did not provide.

 

 

Offsite Article: Just like in China...


Link Here1st August 2021
Social media users in the West are now inventing codewords to bypass online censorship

See article from reclaimthenet.org

