Is a US version of the UK Online Safety Bill forcing platforms to spy on young people?
27th March 2022
See CC article from eff.org
Putting children under surveillance and limiting their access to information doesn't make them safer--in fact, research suggests just the opposite. Unfortunately those tactics are the ones endorsed by the Kids Online Safety Act of 2022 (KOSA), introduced
by Sens. Blumenthal and Blackburn. The bill deserves credit for attempting to improve online data privacy for young people, and for attempting to update 1998's Children's Online Privacy Protection Rule (COPPA). But its plan to require surveillance and
censorship of anyone under sixteen would greatly endanger the rights, and safety, of young people online. KOSA would require the following:

- A new legal duty for platforms to prevent certain harms: KOSA outlines a wide collection of content that platforms can be sued for if young people encounter it, including "promotion of self-harm, suicide, eating disorders, substance abuse, and other matters that pose a risk to physical and mental health of a minor."
- A requirement that platforms provide data to researchers.
- An elaborate age-verification system, likely run by a third-party provider.
- Parental controls, turned on and set to their highest settings, to block or filter a wide array of content.
There are numerous concerns with this plan. The parental controls would in effect require a vast number of online platforms to create systems for parents to spy on--and control--the conversations young people are able to have online,
and require those systems be turned on by default. It would also likely result in further tracking of all users, including older teens, whom rights organizations agree have a greater need for privacy and independence than younger teens and kids. And in
contrast to COPPA's age self-verification scheme, KOSA would authorize a federal study of "the most technologically feasible options for developing systems to verify age at the device or operating system level." Age verification systems are
troubling--requiring such systems could hand over significant power, and private data, to third-party identity verification companies like Clear or ID.me. Additionally, such a system would likely lead platforms to set up elaborate age-verification
systems for everyone, meaning that all users would have to submit personal data. Lastly, KOSA's incredibly broad definition of a covered platform would include any "commercial software application or electronic service that
connects to the internet and that is used, or is reasonably likely to be used, by a minor." That would likely encompass everything from Apple's iMessage and Signal to web browsers, email applications and VPN software, as well as platforms like
Facebook and TikTok--platforms with wildly different user bases and uses. It's also unclear how deep into the 'tech stack' such a requirement would reach -- web hosts or domain registries likely aren't the intended platforms for KOSA, but depending on
interpretation, could be subject to its requirements. And, the bill raises concerns about how providers of end-to-end encrypted messaging platforms like iMessage, Signal, and WhatsApp would interpret their duty to monitor minors' communications, with the
potential that companies will simply compromise encryption to avoid litigation.

Censorship Isn't the Answer

KOSA would force sites to use filters to block content--filters that we've seen, time and time again, fail to properly distinguish "good" speech from "bad" speech. The types of content targeted by KOSA are complex, and often dangerous--but discussing them is not bad by default. It's very hard to differentiate between minors having discussions about these topics in a way that encourages them, as opposed to a way that discourages them. Under this bill, all discussion and viewing of these topics by minors would have to be blocked. Research already exists showing bans like these don't work: when Tumblr banned discussions of anorexia, it discovered that the keywords used in pro-anorexia content were the same ones used to discourage anorexia. Other research has shown that bans like these actually make the content easier to find by forcing people to create new keywords to discuss it (for example, "thinspiration" became "thynsperation").
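The Tumblr finding is easy to reproduce in miniature. Below is a minimal sketch, using an invented blocklist and invented posts, of why this kind of keyword filtering fails in both directions: recovery content that shares the banned vocabulary gets blocked, while a trivial respelling slips straight through.

```python
# Hypothetical keyword filter of the kind KOSA would encourage.
# The blocklist and example posts are invented for illustration.

BLOCKLIST = {"anorexia", "thinspiration"}

def is_blocked(post: str) -> bool:
    """Block a post if it contains any blocklisted keyword."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "thinspiration photo thread",     # intended target: blocked
    "how i recovered from anorexia",  # recovery advice: also blocked
    "thynsperation photo thread",     # respelled term: slips through
]

for post in posts:
    print(f"blocked={is_blocked(post)!s:5}  {post}")
```

Real moderation systems are far more elaborate, but the cat-and-mouse dynamic the researchers observed is the same.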
The law also requires platforms to ban the potentially infinite category of "other matters that pose a risk to physical and mental health of a minor." As we've seen in the past, whenever the legality of material is up
for interpretation, it is far more likely to be banned outright, leaving huge holes in what information is accessible online. The law would seriously endanger access to information for teenagers, who may want to explore ideas without their parents'
knowledge or approval. For example, they might have questions about sexual health that they do not feel safe asking their parents about, or they may want to help a friend with an eating disorder or a substance abuse problem. (Research has shown that a
large majority of young people have used the internet for health-related research.) KOSA would allow individual state attorneys general to bring actions against platforms when the state's residents are "threatened or
adversely affected by the engagement of any person in a practice that violates this Act." This leaves it up to individual state attorneys general to decide what topics pose a risk to the physical and mental health of a minor. A co-author of this
bill, Sen. Blackburn of Tennessee, has referred to education about race discrimination as "dangerous for kids." Many states have agreed, and recently moved to limit public education about the history of race, gender, and sexuality
discrimination. Recently, Texas' governor directed the state's Department of Family and Protective Services to investigate gender affirming care as child abuse. KOSA would empower the Texas attorney general to define material that
is harmful to children, and the current position of the state would include resources for trans youth. This would allow the state to force online services to remove and block access to that material everywhere--not only Texas. That's not to mention the
frequent conflation by tech platforms of LGBTQ content with dangerous "sexually explicit" material. KOSA could result in loss of access to information that a vast majority of people would agree is not dangerous, but is under political
attack.

Surveillance Isn't the Answer

Some legitimate concerns are driving KOSA. Data collection is a scourge for every internet user, regardless of age. Invasive tracking of young people by online
platforms is particularly pernicious--EFF has long pushed back against remote proctoring, for example. But the answer to our lack of privacy isn't more tracking. Despite the growing ubiquity of technology to make it easy, surveillance of young people is actually bad for them, even in the healthiest household, and is not a solution to helping young people navigate the internet. Parents have an interest in deciding what their children can view online, but no one could
argue that this interest is the same if a child is five or fifteen. KOSA would put all children under sixteen in the same group, and require that specific types of content be hidden from them, and that other content be tracked and logged by parental
tools. This would force platforms to more closely watch what all users do. KOSA's parental controls would give parents, by default, access to monitor and control a young person's online use. While a tool like Apple's Screen Time
allows parents to restrict access to certain apps, or limit their usage to certain times, platforms would need to do much more under KOSA. They would have to offer parents the ability to modify the results of any algorithmic recommendation system,
"including the right to opt-out or down-rank types or categories of recommendations," effectively deciding for young people what they see -- or don't see -- online. It would also give parents the ability to delete their child's account entirely
if they're unhappy with their use of the platform. The bill's provisions on "addictive" platform features would likely cover features as different as Netflix's auto-playing of episodes and iMessage's new message notifications. Putting these features together under the heading of "addictive" misunderstands which dark patterns actually harm users, including young people.

EFF has long supported comprehensive data privacy legislation for all users. But the Kids Online Safety Act would not protect
the privacy of children or adults. It is a heavy-handed plan to force technology companies to spy on young people and stop them from accessing content that is "not in their best interest," as defined by the government, and interpreted by tech
platforms.
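To make concrete what the parental control over "algorithmic recommendation systems" described above might mean in practice, here is a minimal sketch of a category down-ranking re-ranker. Everything in it (the category labels, scores, and penalty factor) is invented for illustration; the bill prescribes no particular mechanism.

```python
# Hypothetical sketch of a parental "down-rank categories" control.
# Categories, scores and the penalty factor are invented for illustration.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    category: str
    score: float  # relevance score assigned by the recommender

def rerank(items: list[Item], downranked: set[str],
           penalty: float = 0.1) -> list[Item]:
    """Re-sort a feed, multiplying parent-flagged categories by `penalty`."""
    def adjusted(item: Item) -> float:
        return item.score * (penalty if item.category in downranked else 1.0)
    return sorted(items, key=adjusted, reverse=True)

feed = [
    Item("Coping with exam stress", "mental-health", 0.9),
    Item("Skateboard fails compilation", "entertainment", 0.6),
]

# A parent flags the whole "mental-health" category, so the supportive
# item drops below the entertainment filler: category-level controls are
# a blunt instrument over what young people see.
for item in rerank(feed, downranked={"mental-health"}):
    print(item.title)
```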
Resuming a series detailing old BBFC cuts to the Carry On films
27th March 2022
Thanks to Vince
Carry on Girls is a 1973 UK comedy romance by Gerald Thomas. Starring Sidney James, Barbara Windsor and Joan Sims.

Category cuts were required for an 'A' rated cinema release in 1973. Video releases are PG rated and slightly less cut.

Summary Notes

Local councillor Sidney Fiddler persuades the Mayor to help improve the image of their rundown seaside town by holding a beauty contest. But formidable Councillor Prodworthy, head of the local women's liberation movement, has other ideas. It's open warfare as the women's libbers attempt to sabotage the contest.
Versions
27th March 2022
Marilyn Monroe biopic, Blonde, enjoys a little hype for its MPA NC-17 rating. Not that the reporter knows what an NC-17 rating actually means though. See
article from movieweb.com
Old BBFC category cuts revealed for Lewis Teague's 1980 horror
20th March 2022
Thanks to Vince
Alligator is a 1980 USA horror by Lewis Teague. Starring Robert Forster, Robin Riker and Michael V Gazzo.
BBFC category cuts were required for an A rated 1982 cinema release. Uncut and 15 rated for home video. Uncut and MPAA R rated in the US.

Summary Notes

A baby alligator is flushed down a Chicago toilet and survives by eating discarded lab rats injected with growth hormones. The small animal grows gigantic, escapes the city sewers, and goes on a rampage. Alligator is an over the top, ridiculous and fun man vs. beast horror movie. Think a B-movie version of Jaws, if he were on steroids and able to jump out of the water and run through the streets looking for food. For the viewer that likes his horror gory, Alligator has significant bright red blood splattered about as the beast chomps away and devours his victims throughout the city neighbourhoods and sewers.
Versions

uncut
UK: Passed 15 uncut for:
- 2010 Anchor Bay Double Bill R2 DVD at UK Amazon
- 2003 Anchor Bay R2 DVD
- 2000 Digital Entertainment video
- 1991 Braveworld VHS
US: Uncut and MPAA R rated.

category cuts (run time: 89:27s, PAL: 85:52s)
UK: Passed A (PG) after BBFC cuts for category for the 1982 cinema release.
Thanks to Vince: I remember this cinema film being advertised as an AA but the distributors got cold feet and released it as an A (with cuts). The quad poster had a white sticker over one of the A's on it!

These BBFC category cuts were:

Reel 1
- Opening sequence in which alligator attacks trainer was reduced, removing close-ups of alligator biting man.
- Sight of dismembered leg was reduced to flash shot only.

Reel 3
- Sequence of alligator tearing policeman's leg off as he hangs onto car door was reduced.

Reel 4
- Sequence in which small boy is thrown into swimming pool was reduced by removing his look of fear when he sees the alligator.
- Dialogue "What the fuck" was removed.

Reel 5
- Sight of man struggling as he is pulled into boat and alligator biting his legs off was reduced.
- In garden party attack, shot of maid in alligator's jaws was removed, as was sight of maid with bloody face and sight of man in white jacket being eaten.
- Killing of Mayor was reduced by removing sight of his bloody face against car window and shot of him in alligator's jaws.
- Smashing of car with Slade inside was considerably reduced, in particular shot of bloody leg hanging from car and shots of him holding bloody head and blood on window were removed.
Ofcom formally bans the Russian propaganda channel RT
20th March 2022
See article from ofcom.org.uk
Ofcom has revoked RT's licence to broadcast in the UK, with immediate effect. We have done so on the basis that we do not consider RT's licensee, ANO TV Novosti, fit and proper to hold a UK broadcast licence.
The decision comes amid 29 ongoing investigations by Ofcom into the due impartiality of RT's news and current affairs coverage of Russia's invasion of Ukraine. We consider the volume and potentially serious nature of the issues raised
within such a short period to be of great concern -- especially given RT's compliance history, which has seen the channel fined £200,000 for previous due impartiality breaches. In this context, we launched a separate investigation
to determine whether ANO TV Novosti is fit and proper to retain its licence to broadcast. This investigation has taken account of a number of factors, including RT's relationship with the Russian Federation. It has recognised that
RT is funded by the Russian state, which has recently invaded a neighbouring sovereign country. We also note new laws in Russia which effectively criminalise any independent journalism that departs from the Russian state's own news narrative, in
particular in relation to the invasion of Ukraine. We consider that given these constraints it appears impossible for RT to comply with the due impartiality rules of our Broadcasting Code in the circumstances. We recognise that RT
is currently off air in the UK, as a result of sanctions imposed by the EU since the invasion of Ukraine commenced. We take seriously the importance, in our democratic society, of a broadcaster's right to freedom of expression and the audience's right to
receive information and ideas without undue interference. We also take seriously the importance of maintaining audiences' trust and public confidence in the UK's broadcasting regulatory regime. Taking all of this into account, as
well as our immediate and repeated compliance concerns, we have concluded that we cannot be satisfied that RT can be a responsible broadcaster in the current circumstances. Ofcom is therefore revoking RT's licence to broadcast with immediate effect.
20th March 2022
A 15A rating should be introduced for The Batman so young fans aren't deprived of watching it. By Scott Bates. See article from metro.co.uk
20th March 2022
The UK's Online Safety Bill is an authoritarian nightmare. By Fraser Myers. See article from spiked-online.com
20th March 2022
Polish internet freedom fighters spam Russian email users to inform them about Putin's murderous invasion of Ukraine. See article from therecord.media
UK Government introduces its Online Censorship Bill which significantly diminishes British free speech whilst terrorising British businesses with a mountain of expense and red tape
17th March 2022
See press release from gov.uk
See bill progress from bills.parliament.uk
See bill text [pdf] from publications.parliament.uk
The UK government's new online censorship laws have been brought before parliament. The Government wrote in its press release: The Online Safety Bill marks a milestone in the fight for a new digital age which is safer for users and
holds tech giants to account. It will protect children from harmful content such as pornography and limit people's exposure to illegal content, while protecting freedom of speech. It will require social media platforms, search
engines and other apps and websites allowing people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions. The regulator Ofcom will have the power to fine companies
failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites. Today the government is announcing that executives whose companies fail to
cooperate with Ofcom's information requests could now face prosecution or jail time within two months of the Bill becoming law, instead of two years as it was previously drafted. A raft of other new offences have also been added
to the Bill to make in-scope companies' senior managers criminally liable for destroying evidence, failing to attend or providing false information in interviews with Ofcom, and for obstructing the regulator when it enters company offices.
In the UK, tech industries are blazing a trail in investment and innovation. The Bill is balanced and proportionate with exemptions for low-risk tech and non-tech businesses with an online presence. It aims to increase people's trust
in technology, which will in turn support our ambition for the UK to be the best place for tech firms to grow. The Bill will strengthen people's rights to express themselves freely online and ensure social media companies are not
removing legal free speech. For the first time, users will have the right to appeal if they feel their post has been taken down unfairly. It will also put requirements on social media firms to protect journalism and democratic
political debate on their platforms. News content will be completely exempt from any regulation under the Bill. And, in a further boost to freedom of expression online, another major improvement announced today will mean social
media platforms will only be required to tackle 'legal but harmful' content, such as exposure to self-harm, harassment and eating disorders, set by the government and approved by Parliament. Previously they would have had to
consider whether additional content on their sites met the definition of legal but harmful material. This change removes any incentives or pressure for platforms to over-remove legal content or controversial comments and will clear up the grey area
around what constitutes legal but harmful. Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.

Bill introduction and changes over the last year

The Bill will be introduced in the Commons today. This is the first step in its passage through Parliament to become law, beginning a new era of accountability online. It follows a period in which the government has significantly strengthened the Bill since it was first published in draft in May 2021. Changes since the draft Bill include:
- Bringing paid-for scam adverts on social media and search engines into scope in a major move to combat online fraud.
- Making sure all websites which publish or host pornography, including commercial sites, put robust checks in place to ensure users are 18 years old or over.
- Adding new measures to clamp down on anonymous trolls to give people more control over who can contact them and what they see online.
- Making companies proactively tackle the most harmful illegal content and criminal activity quicker.
- Criminalising cyberflashing through the Bill.
Criminal liability for senior managers

The Bill gives Ofcom powers to demand information and data from tech companies, including on the role of their algorithms in selecting and displaying content, so it
can assess how they are shielding users from harm. Ofcom will be able to enter companies' premises to access data and equipment, request interviews with company employees and require companies to undergo an external assessment of
how they're keeping users safe. The Bill was originally drafted with a power for senior managers of large online platforms to be held criminally liable for failing to ensure their company complies with Ofcom's information requests
in an accurate and timely manner. In the draft Bill, this power was deferred and so could not be used by Ofcom for at least two years after it became law. The Bill introduced today reduces the period to two months to strengthen
penalties for wrongdoing from the outset. Additional information-related offences have been added to the Bill to toughen the deterrent against companies and their senior managers providing false or incomplete information. They
will apply to every company in scope of the Online Safety Bill. They are:
- offences for companies in scope and/or employees who suppress, destroy or alter information requested by Ofcom;
- offences for failing to comply with, obstructing or delaying Ofcom when exercising its powers of entry, audit and inspection, or providing false information;
- offences for employees who fail to attend or provide false information at an interview.
Falling foul of these offences could lead to up to two years' imprisonment or a fine. Ofcom must treat the information gathered from companies sensitively. For example, it will not be able to share or publish
data without consent unless tightly defined exemptions apply, and it will have a responsibility to ensure its powers are used proportionately.

Changes to requirements on 'legal but harmful' content

Under the draft Bill, 'Category 1' companies - the largest online platforms with the widest reach, including the most popular social media platforms - must address content harmful to adults that falls below the threshold of a criminal offence.
Category 1 companies will have a duty to carry out risk assessments on the types of legal harms against adults which could arise on their services. They will have to set out clearly in their terms of service how they will deal with such
content and enforce these terms consistently. If companies intend to remove, limit or allow particular types of content they will have to say so. The agreed categories of legal but harmful content will be set out in secondary
legislation and subject to approval by both Houses of Parliament. Social media platforms will only be required to act on the priority legal harms set out in that secondary legislation, meaning decisions on what types of content are harmful are not
delegated to private companies or left to the whim of internet executives. It will also remove the threat of social media firms being overzealous and removing legal content because it upsets or offends someone even if it is not
prohibited by their terms and conditions. This will end situations such as the incident last year when TalkRadio was forced offline by YouTube for an "unspecified" violation, where it was not clear how it had breached the terms and conditions.
The move will help uphold freedom of expression and ensure people remain able to have challenging and controversial discussions online. The DCMS Secretary of State has the power to add more categories of
priority legal but harmful content via secondary legislation should they emerge in the future. Companies will be required to report emerging harms to Ofcom.

Proactive technology

Platforms may need to
use tools for content moderation, user profiling and behaviour identification to protect their users. Additional provisions have been added to the Bill to allow Ofcom to set expectations for the use of these proactive technologies
in codes of practice and force companies to use better and more effective tools, should this be necessary. Companies will need to demonstrate they are using the right tools to address harms, they are transparent, and any
technologies they develop meet standards of accuracy and effectiveness required by the regulator. Ofcom will not be able to recommend these tools are applied on private messaging or legal but harmful content.

Reporting child sexual abuse

A new requirement will mean companies must report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency. The CSEA reporting requirement
will replace the UK's existing voluntary reporting regime and reflects the Government's commitment to tackling this horrific crime. Reports to the National Crime Agency will need to meet a set of clear standards to ensure law
enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content. In-scope companies will need to
demonstrate existing reporting obligations outside of the UK to be exempt from this requirement, which will avoid duplication of companies' efforts.
French censors bang the table demanding age verification but there are no data protection laws in place that protect porn users from being tracked and scammed
9th March 2022
See article from lefigaro.fr
Pornhub, Tukif, XHamster, XNXX and XVideos do not comply with French rules contrived from a law against domestic violence. The French internet censor Arcom (previously CSA) took legal action on March 8 and requested the blocking of 5 pornographic sites: Pornhub, Tukif, XHamster, Xnxx and Xvideos. The censor had sent an injunction to the platforms giving them 15 days to comply with the law. The websites did not comply. Since the vote on the law against domestic violence in 2020, an amendment specifies that sites can no longer be satisfied with asking Internet users to declare that they are of legal age by clicking on a simple box. Depending on the judge's decision, ISPs will be forced or not to block access to the incriminated sites. In
case of blocking, visitors to the pornographic site will be redirected to a dedicated Arcom page. Distributors of pornographic content are therefore required, in theory, to check the age of their visitors. But how? There is currently no legally
defined method to achieve this. The censor itself has never given guidelines to the platforms. In fact data protection authorities have rather put a spanner in the works that has left the industry scratching its head. In an opinion issued on June 3,
2021, the National Commission for Computing and Freedoms (Cnil) decreed that a verification system which collects information on the identity of Internet users would, in this context, be illegal and risky. Such data collection would indeed present
significant risks for the persons concerned since their sexual orientation -- real or supposed -- could be deduced from the content viewed and directly linked to their identity. Faced with these legal contradictions, Senator Marie Mercier, rapporteur
for the amendment, has simply banged the table harder: I don't want to know how they do it, but they have to find a solution. The law is the law.
Porn tube websites have explained their reluctance to implement age verification. The option to use third-party verifiers may prove very expensive for a business model based on a high number of users making up for low advertising income per user. An estimate, denied by the Tukif site, says that the cost of a verification service goes from 0.0522c to 0.222c per user, a cost to be multiplied by their 650,000 unique daily visitors. It is presumed that many porn users will be very reluctant to hand over dangerous ID proof to porn websites, so blocking the entry of some audiences while discouraging others will lead to collapsing income. The websites also note that as the regulator hasn't attempted to block all porn tube sites, users will be more likely to swap to unrestricted websites rather than submit to ID verification on those websites mandated to do so.
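The economics here are simple arithmetic. The rough sketch below takes the article's figures at face value; since the per-user costs quoted above are garbled in the source, they are read here as an assumed 0.05 to 0.22 euros per verification.

```python
# Back-of-envelope age verification cost for a porn tube site.
# ASSUMPTION: the garbled "0.0522c to 0.222c" is read as 0.05 to 0.22
# euros per verified user; 650,000 is Tukif's claimed daily visitor count.

COSTS_PER_USER = {"low": 0.05, "high": 0.22}  # assumed euros per check
DAILY_VISITORS = 650_000

for label, cost in COSTS_PER_USER.items():
    print(f"{label} estimate: {cost * DAILY_VISITORS:,.0f} euros per day")

# Even the low estimate (32,500 euros/day) dwarfs the thin per-user
# advertising income the article says these sites depend on.
```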
Morality campaigners announce their annual Dirty Dozen List
9th March 2022
See article from endsexualexploitation.org
The US campaign group Morality in Media now refers to itself as the National Center on Sexual Exploitation (NCOSE). The group publishes an annual whinge list of the most immoral organisations on the planet, and then it asks its members to boycott them.
This year the group wrote: NCOSE has revealed that technology companies Meta, Google Search, Discord, and Twitter, and more, are among its 2022 Dirty Dozen List of mainstream contributors to sexual exploitation.
Big Tech holds incredible influence over society, so it's especially egregious when tech companies normalize, enable, and even profit from sexually exploitative practices, policies, and products. There is no other industry that has
the capacity to help billions of people by prioritizing user protection and safety like Big Tech. Other companies named to the Dirty Dozen List include Visa, which allows the exploitative commercial sex industry to prosper; Etsy,
which enables sex dolls and pornographic content to be sold; and Netflix, which normalizes the sexualization of children and whitewashes the violence and exploitation in prostitution. The 2022 Dirty Dozen List includes:
Discord -- Discord consistently fails to address the extensive sexually graphic, violent, and exploitative content on its thousands of private and public channels. Inadequate age verification and moderation mean millions of children and teen users are left with near-unmitigated exposure to child sexual abuse material, nonconsensual pornography trading, and the predatory grooming rampant on its platform.

Etsy -- Global marketplace Etsy is in the business of selling pornographic merchandise, misogynistic and dehumanizing apparel, and sex dolls -- including ones resembling children and young teens. Customers equate unwanted exposure to pornography and sexually explicit content to sexual harassment.

Google Search -- Google Search buttresses the pornography industry by facilitating access to graphic images and videos of sexual abuse -- depicted and real -- including sex trafficking, child sexual abuse, and rape.

Kanakuk Kamps -- Thousands of families have entrusted their children to Kanakuk Kamps -- one of the largest Christian sports camps. Tragically, that trust was broken as years of child sexual abuse at Kanakuk Kamps have been swept under the rug.

Kik -- Kik boasts that a third of American teens use the free messaging app to chat with friends and strangers alike. It's also among the most dangerous online spaces for children.

Meta (formerly known as Facebook) -- Meta owns Facebook, Instagram, and WhatsApp: all of which are consistently under fire as primary places for grooming, sextortion, child sexual abuse materials, sex trafficking, and a host of other crimes. The tech giant has the potential to lead the industry in online safety standards. Instead, Meta is prioritizing new projects like the metaverse and pursuing sweeping encryption despite international law enforcement warnings about the lack of sufficient provisions for child online safety.

Netflix -- Netflix is a staple of at-home entertainment, with over 200 million subscribers streaming their content worldwide. Yet mixed in with the fun and entertainment is rampant sexual objectification and glamorization of abuse. Sociologists have identified a marked increase in graphic sex scenes and gratuitous nudity permeating Netflix shows. Further, Netflix continues a trend of normalizing the sexualization of children while also whitewashing the violence and exploitation in prostitution.

OnlyFans -- OnlyFans exploded in notoriety and profit during COVID-19, as the subscription-based platform known for pornography preyed on widespread financial insecurities and capitalized on youth spending more time online. Sex buyers and pimps maximize buying and selling people behind the security of a paywall.

Reddit -- Referred to as the front page of the Internet, Reddit hosts more than two million user-created communities covering nearly as many topics. Among them are countless nonconsensually shared sexually explicit images and videos, child sexual abuse material, hardcore pornography, and prostitution advertisements.

Twitter -- Pedophiles and other predators go to Twitter to trade in criminal content such as child sexual abuse and nonconsensual pornography. The platform is rampant with accounts and posts functioning as advertisements for commercial sex.

Verisign -- Verisign provides Internet infrastructure and services and has exclusive management over the .com and .net generic top-level domains. 82% of all websites containing child sexual abuse material in 2020 were registered on .com and .net domains.

Visa -- Visa rightly cut ties with Pornhub in 2020 after public outcry regarding the rampant sex trafficking and child sexual abuse material hosted on the pornography site -- but has since re-initiated a relationship with Pornhub's parent company, MindGeek.
Australian censorship board bans video game
4th March 2022
See article from nichegamer.com
See also Refused Classification via twitter.com
RimWorld is a 2018 Canadian building simulation game by Ludeon Studios. The game was recently banned by the games censors of the Australian Classification Board. The ban applied to gaming consoles. The censors did not specify why the game was banned, just the usual worthless all-encompassing stock statement. At the time, everyone thought the PC version would remain unaffected, including developer Ludeon Studios, who wrote: We did not expect this to
affect the Steam version because in previous similar cases, as with Disco Elysium for example, a Refused Classification (RC) rating on a console version did not affect the availability of the PC version on Steam. I'm sorry this
news was so sudden and for anyone who is frustrated by this, Ludeon Studios said. We are working to resolve this situation and make RimWorld available to everyone again as soon as possible, but we don't yet know what that might require or how long it may
take.
Australian users that purchased RimWorld at any point from its debut way back in 2013 can still access the game. New buyers, however, will now have to buy the game directly from the official RimWorld website. On March 4th Refused Classification reported that the 'Refused Classification' rating had mysteriously been removed from the censors' database. No doubt an explanation will soon follow.
Google and Meta win a complaint against Germany's internet censorship law
4th March 2022
See article from reclaimthenet.org
A German court has ruled against the country's hate speech law, saying it is illegal to share innocent users' data with law enforcement. In 2018, Germany passed a controversial law requiring social media companies to remove criminal content and report
it to law enforcement. Last May, the German parliament amended the law, passing even stricter and wider regulations for social media companies. The expanded version of the law took effect in February. Google, Meta, and Twitter filed legal
complaints against the law in January 2022. In its complaint, Twitter argued that: We are concerned that the law provides for a significant encroachment on citizens' fundamental rights. In particular, we are concerned
that the obligation to proactively share user data with law enforcement forces private companies into the role of prosecutors by reporting users to law enforcement even when there is no illegal behavior.
On March 1, Cologne's
administrative court ruled on Meta's and Google's complaint. The court argued the online hate-speech law violated EU law because it allowed users' data to be shared with law enforcement even before it is determined if a crime has been committed. The decision can be appealed at a higher court.