ASA bans Temu adverts including images of a young girl in a bikini.
31st October 2023
See article from asa.org.uk
Four display ads and an in-app ad for Temu, an online marketplace:

a. The first display ad, seen on a regional online newspaper on 19 June 2023, featured six images in a row. The first image was of a young girl wearing a bikini; the girl was shown looking at the camera, one hand on her hip and the other pushing her hair behind her ear. The second image was of a woman wearing a white halterneck dress; the image was cut so only her torso and arms were shown. The third image was of a silver metallic facial roller. The fourth image was of three balloon tying tools in pink, red and blue colours. The fifth image was a woman wearing a white crop top; only the woman's chest, arms and midriff were shown. The sixth image was of a grey jock strap.

b. The second display ad, seen on a chess website on 18 June 2023, featured six images. The first image featured a woman wearing a burgundy one shoulder jumpsuit that was cut at one side showing part of the woman's midriff, the top of her chest and her left arm. The image was cut just below the woman's eyes, showing the bottom part of her face only. The second image was of padded cycling underwear. The third image was of three balloon tying tools in pink, red and blue. The fourth image was a woman wearing a grey tight fitting jumpsuit; the image was cut to show her face from the eyes down to the top part of her thighs only. The fifth image was of a grey jock strap. The sixth image was a pair of red boots.

c. The third display ad, seen on a chess website on 17 June 2023, featured three of the images seen in ad (b): the woman wearing a burgundy jumpsuit, the padded cycling underwear and the three balloon tying tools in pink, red and blue.

d. The fourth display ad, seen on a language translation website on 18 June 2023, featured eight images. Five were also in ad (b): the three balloon tying tools in pink, red and blue, the padded cycling underwear, the woman wearing a burgundy jumpsuit, the pair of red boots and the woman wearing a grey jumpsuit. The sixth image was featured in ad (a) and was the woman wearing a white halterneck dress. The seventh image was of a woman wearing a tight fitting pink cat suit; the woman's head was not shown. The eighth image was of a pink rubber foot massager.

e. The in-app ad, seen within a puzzle app on 18 June 2023, featured images of leopard print underwear with the back removed and a woman wearing a short black skirt and tights; only the woman's legs were shown.

Issue

The ASA received five complaints.

1. Three complainants, who considered that the content of ads (a), (b), (c) and (d) was sexually graphic, objected that the ads were likely to cause serious or widespread offence.

2. One complainant, who believed the pose and clothing of the model in a bikini sexualised someone who was a child, challenged whether ad (a) was irresponsible and offensive.

3. Two complainants, who believed ads (a), (b) and (c) sexually objectified women, challenged whether they were offensive and irresponsible.

4. Two complainants challenged whether ads (b), (c) and (e) were inappropriately targeted.

ASA Assessment: All complaints upheld.

We considered that ads (a), (b), (c) and (d), taken in their entirety with the accompanying images of
the models, and with no explanation or labelling, contained products that were likely to be seen as sexual in nature. The ads appeared in general media where adult themed or sexual products were unlikely to be anticipated. On that basis the ads were
likely to cause widespread offence. We considered that the young model in ad (a) appeared to be a girl of eight to eleven years of age. The girl wore a two piece bikini. The image was cut off just beneath
the bikini bottoms. The girl was posed with one hand on her hip and the other appearing to push her hair behind her ears. The pose was quite adult for a girl of her age and she appeared alongside two other images also in the ad that featured mature women
modelling clothing intended for adults. We concluded that the ad had the effect of portraying a child in a sexual way and was irresponsible.

Ad (a) showed a woman wearing a tight-fitting white dress; the image was cut so only her torso and arms were shown. A second image featured a woman wearing a white crop top, and only her chest, arms and midriff were shown. The images appeared alongside a jockstrap and items such as a facial massager and balloon ties, which were phallic in shape and appeared sexual in nature. Further to that, the jockstrap, with its accentuated crotch, gave the impression of being sexual rather than utilitarian. Focusing on a person's body while obscuring or
removing their face was likely to be seen as objectifying. As the disembodied images of the women wearing tight and revealing clothing appeared alongside items that were likely to be understood as sexual, we considered the women were presented as
stereotypical sexual objects. We considered that ads (b) and (c) featured content that sexually objectified women and ad (a) featured an image of a person under 18 years of age in a sexual way. Therefore they were unsuitable to be
seen by audiences of any age, regardless of whether the advertiser had taken steps to target them towards audiences over 18. The ads must not appear again in their current form. We told Whaleco UK Ltd t/a Temu to ensure that
future ads were prepared with a sense of responsibility to consumers and to society and that they did not cause serious or widespread offence by presenting products in a sexual way in general media or by presenting individuals as stereotypical sexual
objects. In addition, persons who were or appeared to be under 18 years of age in ads must not be portrayed in a sexual way and ads must be responsibly targeted.
The Online Unsafety Bill gets Royal Assent and so becomes law
29th October 2023
See article from ofcom.org.uk
The Online Safety Bill received Royal Assent on 26th October 2023, heralding a new era of internet censorship. The new UK internet censor, Ofcom, was quick off the mark to outline its timetable for implementing the new censorship regime. Ofcom has set out its plans for putting online safety laws into practice, and what it expects from tech firms, now that the Online Safety Act has passed. Ofcom writes:

The Act makes companies that operate a wide range of online services legally
responsible for keeping people, especially children, safe online. These companies have new duties to protect UK users by assessing risks of harm, and taking steps to address them. All in-scope services with a significant number of UK users, or targeting
the UK market, are covered by the new rules, regardless of where they are based. While the onus is on companies to decide what safety measures they need given the risks they face, we expect implementation of the Act to ensure
people in the UK are safer online by delivering four outcomes:
- stronger safety governance in online firms;
- online services designed and operated with safety in mind;
- choice for users so they can have meaningful control over their online experiences; and
- transparency regarding the safety measures services use, and the action Ofcom is taking to improve them, in order to build trust.
We are moving quickly to implement the new rules

Ofcom will give guidance and set out codes of practice on how in-scope companies can comply with their duties, in three phases, as set out in the Act.

Phase one: illegal harms duties

We will publish draft codes and guidance on these duties on 9 November 2023, including:
- analysis of the causes and impacts of online harm, to support services in carrying out their risk assessments;
- draft guidance on a recommended process for assessing risk;
- draft codes of practice, setting out what services can do to mitigate the risk of harm; and
- draft guidelines on Ofcom's approach to enforcement.
We will consult on these documents, and plan to publish a statement on our final decisions in Autumn 2024. The codes of practice will then be submitted to the Secretary of State for Science, Innovation and Technology, and subject to their approval, laid before Parliament.

Phase two: child safety, pornography and the protection of women and girls

Child protection duties will be set out in two parts. First, online pornography services and other interested stakeholders will be able to read and respond to our draft guidance on age assurance from December 2023. This will be relevant to all services in scope of Part 5 of the Online Safety Act. Secondly, regulated services and other interested stakeholders will be able to read and respond to draft codes of practice relating to protection of children, in Spring 2024. We expect to publish draft guidance on protecting women and girls by Spring 2025, when we will have finalised our codes of practice on protection of children.

Phase three: transparency, user empowerment, and other duties on categorised services

A small proportion of regulated services will be designated Category 1, 2A or 2B services if they meet certain thresholds set out in secondary legislation to be made by Government. Our
final stage of implementation focuses on additional requirements that fall only on these categorised services. Those requirements include duties to:
- produce transparency reports;
- provide user empowerment tools;
- operate in line with terms of service;
- protect certain types of journalistic content; and
- prevent fraudulent advertising.
We now plan to issue a call for evidence regarding our approach to these duties in early 2024 and a consultation on draft transparency guidance in mid 2024. Ofcom must produce a register of categorised
services. We will advise Government on the thresholds for these categories in early 2024, and Government will then make secondary legislation on categorisation, which we currently expect to happen by summer 2024. Assuming this is achieved, we will:
- publish the register of categorised services by the end of 2024;
- publish draft proposals regarding the additional duties on these services in early 2025; and
- issue transparency notices in mid 2025.
Ohio House Representative introduces a bill to criminalise the use of VPNs to circumvent age/ID verification
29th October 2023
See article from clevescene.com
Ohio House Representative Steve Demetriou has introduced an extraordinarily repressive House Bill (HB) 295. Dubbed the Innocence Act, it would implement an age verification requirement similar to what has already been implemented in other states. However this bill goes way beyond the others in that it introduces criminal penalties for websites that don't comply and misdemeanor penalties for any internet user who tries to circumvent age verification, eg by using VPNs. In its current form, companies and webmasters who don't implement reasonable age verification methods could be subject to criminal charges -- a third-degree felony. No other proposed or implemented age verification regulation in the country has such punitive criminal penalties. Corey
Silverstein, a First Amendment attorney, commented: VPNs are available on most mobile devices through the Apple App Store or Google Play Store. They are also free or relatively inexpensive. And, to think that a 17-year-old
high school student can't learn about and effectively deploy a VPN is short-sighted. I can't think of a worse idea than charging minors with criminal offenses for viewing adult content and potentially ruining their futures. Attempting to shame and
embarrass minors for viewing adult-themed content goes so far beyond common sense that it begs the question of whether the supporters of this bill gave it any thought at all.
It is not yet clear if the bill has a chance of becoming law.
The distributors don't care about the viewers as they opt to cut an Indian actioner by 28m
29th October 2023
Thanks to Scott
Bhagavanth Kesari is a 2023 India action thriller by Anil Ravipudi.
Starring Nandamuri Balakrishna, Arjun Rampal and Kajal Aggarwal.
Nelakonda Bhagavanth Kesari is adamant about getting even with a powerful businessman who cost him money. He is prepared to square off against the formidable foe and resolve their issues amicably.
The UK 2023 cinema release was extensively cut by 28m for a BBFC 12A rating.

Versions

Distributor and BBFC category cuts [cut: ~28:26s | run: 135:24s | PAL: 129:59s]

UK: A massively pre-cut version was BBFC 12A rated for moderate violence, bloody images, sexual violence references after further BBFC category cuts:
- 2023 BTS Entertainments cinema release (rated 23/10/2023)
The BBFC commented: The BBFC initially classified this film 15 uncut. The distributor requested a cuts list for 12A, but the BBFC refused to offer cuts because the changes would be extensive and could damage the
integrity of the film. However, the distributor chose to make their own cuts, removing a number of scenes of strong violence, and submitted a new version of the film for classification. This edit also received a 15 classification. The distributor chose
to make cuts to this version, removing further scenes of violence in order to achieve their preferred category of 12A.
Thanks to Scott who commented: It's already been released as an uncut 15, so clearly now the
distributor is trying to cash in and get the lucrative family crowd to see it. Cinema going is a family activity in Indian culture, hence distros regularly being happy to inflict gargantuan cuts on their films to get the 12A. I should add though that
it's incredibly unusual for one to be cut by more than 10 or so mins - half an hour is unprecedented. I can't even begin to imagine the mess they've made of this. Apparently fellow Indian actioners Beast and Jailer were incomprehensible in their heavily
cut forms, and they were only missing 6 or 7 mins.

Uncut [run: 163:50s | PAL: 157:17s]

UK: Uncut and BBFC 15 rated for strong bloody violence:
- 2023 2G Entertainments cinema release (rated 13/10/2023)
Ireland: Uncut and IFCO 15A rated with a trigger warning for strong bloody violence, scenes of sexual threat and references to child abuse:
- 2023 2G Entertainment cinema release (rated 12/10/2023)
Australia: Uncut and ACB MA 15+ rated for strong violence:
- 2023 Tollymovies cinema release (rated 18/10/2023) titled Bhagavanth Kesari: I Don't Care
29th October 2023
Debatable claims about Ulysses and I'll Never Forget What's'isname: See article from faroutmagazine.co.uk
Ofcom criticises GB News for deviating from the 'right think' line
24th October 2023
See report [pdf] from ofcom.org.uk

Martin Daubney (standing in for Laurence Fox), GB News, 16 June 2023, 19:00

The above current affairs programme dealt with the topic of immigration and asylum policy, in particular in the context of controversy over small
boats crossing the English Channel. The presenter, Martin Daubney, gave his own views on this topic and interviewed the leader of the Reform Party, Richard Tice. Ofcom received a complaint about the programme.
We considered that immigration and asylum policy constituted a matter of major political controversy and a major matter relating to current public policy. When dealing with major matters, all Ofcom licensees must comply with the
heightened special impartiality requirements in the Code. These rules require broadcasters to include and give due weight to an appropriately wide range of significant views. We found that Mr Tice presented his position on a
matter of major political controversy and a major matter of current public policy with insufficient challenge, and the limited alternative views presented were dismissed. The programme therefore did not include and give due weight to an appropriately
wide range of significant views, as required by the Code. The Licensee accepted that the content was not compliant with the heightened special impartiality requirements in the Code. GB News failed to
preserve due impartiality, in breach of Rules 5.11 and 5.12 of the Code. Ofcom recognises that, in accordance with the right to freedom of expression, broadcasters have editorial freedom and can offer audiences innovative forms of
discussion and debate ... However... in light of the likely similarity of the views of the participants in this programme on the major matter being discussed, the Licensee should have taken additional steps to ensure that due
impartiality was preserved. We expect GB News to take careful account of this Decision in its compliance of future programming.
2023 France action comedy by Hugo Benamozig cut by the BBFC for a cockfight
24th October 2023
Sentinelle is a 2023 France action comedy by Hugo Benamozig and David Caviglioli.
Starring Jonathan Cohen, Emmanuelle Bercot and Raphaël Quenard.
Cut for animal cruelty for 2023 15 rated VoD release.
Summary Notes

François Sentinelle has two lives. By day he is the most famous police officer on Reunion Island, known for his harsh methods and flowery shirts, who chases criminals in his 4x4. But the rest of the time, he is also a charismatic singer.

UK: BBFC 15 rated for strong violence, bloody images, language, sex references, drug misuse after BBFC cuts:
- 2023 Amazon Media EU S.à r.l. VoD (rated 26/09/2023)
The BBFC commented: The company removed a sequence in which cocks are caused to fight in a cruel manner.
Belfast Council is again set to overrule the BBFC's 15 rating for a US PG-13 rated horror and replace it with a 15A rating
20th October 2023
See article from belfasttelegraph.co.uk
Five Nights At Freddy's is a 2023 US horror by Emma Tammi.
Starring Josh Hutcherson, Mary Stuart Masterson and Lucas Grant.
A troubled security guard begins working at Freddy Fazbear's Pizza. During his first night on the job, he realizes that the night shift won't be so easy to get through. Pretty soon he will unveil what actually happened at Freddy's.
A few years ago the BBFC received a few complaints about The Woman in Black being too scary for its BBFC 12/12A rating. Ever since, the BBFC have been out of step with the rest of the world by overrating any slightly scary US PG-13 rated film with a rather harsh 15 rating in the UK. Now it seems that this overrating has overstepped the mark in Northern Ireland. Belfast Council looks set to overrule the BBFC 15 rating for Five Nights At Freddy's and replace it with a 15A rating to match the Irish film censor's assessment of the film.

Michael McAdam, managing director of the Movie House chain in Northern Ireland, made the request to the council for the 15A classification. Last year, he set a precedent after making
a similar request for The Batman. After initially refusing the request for The Batman, the council permitted the use of the 15A in Northern Ireland for the first time. The current decision has to be ratified at the full council, which
will not convene until Wednesday, November 1. The decision is also subject to a call-in, where the council reviews decisions, and could be delayed further. This means there is some legal peril in presenting the film to the public before a final decision.
The Belfast City Council decision means that not only Movie House cinemas but any cinema in the Belfast council area that notifies City Hall in advance will be able to show the film with a 15A classification. Michael McAdam argued for a 15A rating in
a letter to the council: The BBFC states on its website that statutory powers over film remain with the local councils, which can overrule any of the BBFC's decisions on appeal, including altering the age ratings for
films shown in their area. Over recent years there has been a significant shift in the way families consume film and parents prefer to be the decision makers. Parents find it hard to understand why they have the power to choose a
12A film for their children but not a 15 rated film. This can cause frustration and embarrassment for those who arrive at the cinema and are then prevented from seeing the film. We anticipate that this will be the case for Five
Nights at Freddy's, which has been awarded a 15 certificate in the UK. Speaking after the council meeting, SDLP councillor and chair of the licensing committee Gary McKeown, said the reclassification by the council highlights the
need for BBFC to be more flexible in its approach to classification. This is the second time that Belfast City Council has changed a 15 rating to a 15A, but we need to be clear that the council is not a film classification body --
that responsibility lies with BBFC, so they need to act now to remove the need for councils to have to step in by introducing a more appropriate model that gives parents their place, and also recognises the role of streaming which doesn't impose any such
restrictions in reality.
Versions

Uncut [run: 109:15s | PAL: 104:53s]

UK: Uncut and BBFC 15 rated for strong threat, violence:
- 2023 Universal Pictures cinema release (rated 04/10/2023)

Belfast: Uncut and 15A rated by Belfast City Council:
- 2023 Universal Pictures cinema release

Ireland: Uncut and IFCO 15A rated for supernatural horror with jump scares and some scenes of bloody violence, injury detail depicted, theme of child abduction:
- 2023 Universal cinema release (rated 11/10/2023) titled Five Nights at Freddy's

US: Uncut and MPA PG-13 rated for strong violent content, bloody images and language:
- 2023 release (rated 26/07/2023) titled Five Nights at Freddy's
Update: 15A rating confirmed, 2nd November 2023

The 15A rating was confirmed by Belfast City Council, and a few days after release the BBFC 15 rating was duly replaced by a local 15A rating.
1980 US slasher by Sean S Cunningham slashed for UK cinema re-release
18th October 2023
Thanks to Andy, DB and Anthony
Friday the 13th is a 1980 US slasher by Sean S Cunningham. With Betsy Palmer, Adrienne King and Jeannine Taylor.
The film exists as an uncut Unrated version and a cut R rated version. The uncut version was passed X for UK 1980 cinema release and was initially released on pre-cert VHS. However this was seized by police and the
distributors reverted to the cut version from 1983 until 1987. The video has been uncut and 18 rated since 2003 on DVD and Blu-ray. However a 2023 cinema re-release featured the cut R rated version, but now with a BBFC 15 rating for strong violence,
bloody images, threat, sex.
Thanks to Andy, DB and Anthony who note that the 2023 cinema re-release is cut and is therefore most likely the cut R rated version. Andy notes: Annie's death goes to
whiteout like in the old censored version, and then the snake killing was definitely missing the second slicing. Kevin Bacon's death is SEVERELY TRUNCATED! DB notes: The cinema screening was missing all
the extra gore footage. E.g. the front shot of Jack getting the arrow twisted through his neck. Instead it cut straight from the side shot to the outside of the cabin. I can confirm for sure the version screened was the US R version
Anthony adds:
The film has been downrated to a BBFC15 certificate for strong violence, bloody images, threat, sex. The BBFC actually has an entry that reflects this decision and says nothing about any cuts made to that particular
version, which was distributed by Park Circus Limited. However, I noticed that one previous cut has reappeared: ie after the four-second bird's-eye view of Jack (Kevin Bacon) getting skewered through the neck, you see the side
shot and the arrowhead passing through his throat and the blood pumping out in that shot but the very next shot, in which we would have seen a shot looking down on him with blood going over his throat and partially in his mouth was not there at all.
The running time of this cinema re-release matched the uncut Unrated version, but the difference between the two versions is only about 10s and this difference can easily get obscured in longer modern distributor and production company video logos.
The BBFC opens up its periodic public consultation with a disappointing public survey which allows for little public input
12th October 2023
See survey from wearefamily-epk.co.uk
The BBFC writes:

Public Consultation: BBFC Classification Guidelines

Every four to five years we ask you what you think. These consultations provide the basis for our age ratings and content advice. Take
part in our short survey to help shape the future of classification. Take part in the survey from wearefamily-epk.co.uk
Internet company Cloudflare enables a feature preventing ISP website blocking at least for websites that use Cloudflare
9th October 2023
See Creative Commons article from torrentfreak.com
A few days ago, Internet infrastructure company Cloudflare implemented widespread support for Encrypted Client Hello (ECH), a privacy technology that aims to render web traffic surveillance futile. This means that site blocking implemented by ISPs will
be rendered useless in most, if not all, cases. ECH is a newly proposed privacy standard that's been in the making for a few years. The goal is to increase privacy for Internet users and it has already gained support from Chrome, Firefox, Edge, and other browsers. Users can enable it in the settings, though it may still be experimental in some cases. The main barrier to widespread adoption is that this privacy technology is a two-way street. This means that websites have to support it as
well. Cloudflare has made a huge leap forward on that front by enabling it by default on all free plans, which currently serve millions of sites. Other subscribers can apply to have it enabled. Cloudflare writes in an announcement:
Cloudflare is a big proponent of privacy for everyone and is excited about the prospects of bringing this technology to life. Encrypted Client Hello (ECH) is a successor to ESNI and masks the Server Name Indication (SNI) that is used
to negotiate a TLS handshake. This means that whenever a user visits a website on Cloudflare that has ECH enabled, no one except for the user, Cloudflare, and the website owner will be able to determine which website was visited. If you're a website, and you care about users visiting your website in a fashion that doesn't allow any intermediary to see what users are doing, enable ECH today on Cloudflare
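For readers curious about the mechanics, the snippet below is a minimal sketch (not anything published by Cloudflare or TorrentFreak) of one way to check whether a domain advertises an ECH configuration in its HTTPS DNS record, which is how browsers typically discover that an ECH handshake can be attempted. It assumes the third-party dnspython library (version 2.2 or later for HTTPS/SVCB record support), and the domain names queried are arbitrary examples.

```python
# Minimal sketch: check whether a domain advertises an ECH configuration in its
# HTTPS (type 65) DNS record. Assumes dnspython >= 2.2 (pip install dnspython);
# the domains queried below are arbitrary examples.
import dns.resolver

ECH_SVCPARAM_KEY = 5  # SvcParamKey number registered for "ech" in the SVCB/HTTPS spec


def advertises_ech(domain: str) -> bool:
    """Return True if any HTTPS record for the domain carries an ech parameter."""
    try:
        answers = dns.resolver.resolve(domain, "HTTPS")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    for rdata in answers:
        # rdata.params maps SvcParamKey numbers to their encoded values
        if any(int(key) == ECH_SVCPARAM_KEY for key in rdata.params):
            return True
    return False


if __name__ == "__main__":
    for site in ["cloudflare-ech.com", "example.com"]:
        print(f"{site}: advertises ECH = {advertises_ech(site)}")
```

A domain that returns True here still needs a browser with ECH enabled on the client side before the site name is actually hidden from intermediaries such as ISPs.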
Tests conducted by TorrentFreak show that ISP blocking measures in the UK, the Netherlands, and Spain were rendered ineffective.
9th October 2023
A US movie news website retells the story of the old BBFC ban of the original Exorcist movie. See article from collider.com
ICO data censor harangues Snap with a nonsensically abstract accusation, whilst noting that rules haven't actually been broken yet
8th October 2023
See press release from ico.org.uk
UK Information Commissioner issues preliminary enforcement notice against Snap
- Snap issued with preliminary enforcement notice over potential failure to properly assess the privacy risks posed by its generative AI chatbot 'My AI'
- Investigation provisionally finds Snap failed to adequately identify and assess the risks to several million 'My AI' users in the UK including children aged 13 to 17.
The Information Commissioner's Office (ICO) has issued Snap Inc with a preliminary enforcement notice over potential failure to properly assess the privacy risks posed by Snap's generative AI chatbot 'My AI'. The preliminary notice sets out the steps which the Commissioner may require, subject to Snap's representations on the preliminary notice. If a final enforcement notice were to be adopted, Snap may be required to stop processing data in connection with 'My AI'. This means not offering the 'My AI' product to UK users pending Snap carrying out an adequate risk assessment.
Snap launched the 'My AI' feature for UK Snapchat+ subscribers in February 2023, with a roll out to its wider Snapchat user base in the UK in April 2023. The chatbot feature, powered by OpenAI's GPT technology, marked the first
example of generative AI embedded into a major messaging platform in the UK. As at May 2023 Snapchat had 21 million monthly active users in the UK. The ICO's investigation provisionally found the risk assessment Snap conducted
before it launched 'My AI' did not adequately assess the data protection risks posed by the generative AI technology, particularly to children. The assessment of data protection risk is particularly important in this context which involves the use of
innovative technology and the processing of personal data of 13 to 17 year old children. The Commissioner's findings in the notice are provisional. No conclusion should be drawn at this stage that there has, in fact, been any
breach of data protection law or that an enforcement notice will ultimately be issued. The ICO will carefully consider any representations from Snap before taking a final decision. John Edwards, Information Commissioner, said:
The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching 'My AI'. We have
been clear that organisations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.
Drinks packaging censor bans IPA beer label referencing SlimFast slimming aid
8th October 2023
See article from portmangroup.org.uk
Tiny Rebel is a brewery specialising in fruit flavoured IPA beer. A limited edition IPA called TinyFast was the target of 2 complaints.

Complainant one:
- ... Tinyfast is questionable as well in its use of SlimFast's branding to associate itself, even subconsciously, with health benefits, violating rule 3.2(j).

Complainant two:
- The branding of these products is designed to mislead, they are clear facsimiles of other popular products that are very obviously not alcoholic, the risk to the public is high (underage alcohol sales (energy drinks), identifying with health products (... slimfast)) due to the intentional duplication of branding, colours & typeface.
The Portman Group is a trade organisation tasked with censoring drinks packaging. It upheld complaints about TinyFast under the following rules:
- Code paragraph 3.2(f): A drink, its packaging and any promotional material or activity should not in any direct or indirect way encourage illegal, irresponsible or immoderate consumption, such as drink driving, binge drinking or drunkenness.
- Code paragraph 3.2(g): A drink, its packaging or promotion should not urge the consumer to drink rapidly or to down a product in one.
- Code paragraph 3.2(h): A drink, its packaging or promotion should not have a particular appeal to under-18s
(in the case of sponsorship, those under 18 years of age should not comprise more than 25% of the participants, audience or spectators).
- Code paragraph 3.2(j): A drink, its packaging and any promotional material or activity should not in any
direct or indirect way suggest that the product has therapeutic qualities, can enhance mental or physical capabilities, or change mood or behaviour.
The company did not submit a response to the complaint.

The Portman Group Panel's Assessment
The Panel discussed whether the packaging suggested, directly or indirectly, that the drink had therapeutic qualities, could enhance mental or physical capabilities, or change mood or behaviour. The Panel considered that the packaging of TinyFast was a
facsimile of a SlimFast Strawberry Meal Shake with the label intentionally mimicking the branding and flavour. The Panel noted that SlimFast was a well-known adult meal replacement drink that was used by consumers to aid in weight loss, or to help
maintain a healthy weight. The Panel considered that as the SlimFast brand was synonymous with a health drink, the average consumer could consider that TinyFast with its near identical branding, and name, indirectly implied the drink shared the same
weight loss properties. The Panel also noted that this association was particularly problematic in the context that some beers were considered fattening. While the Panel acknowledged that the packaging of TinyFast did not include
any direct health claims, the product descriptor text on the company's web page included the line: Our 'Cos Jan's Bad Enough Beers are all made with love and fun to help chase away the January blues. The Panel also noted that the product had been
launched as part of a wider range in January, a month that in recent years had become linked to health goals and giving up certain products or types of food and drink. The Panel considered that the product's marketing reinforced the perception that
consumption of the drink could help change a consumer's mood to chase away the January blues, and that the drink indirectly suggested that it had a therapeutic quality. In addition to this, the Panel also considered that the product's mimicry of SlimFast
meant that the product's packaging indirectly implied it could change mood and behaviour by enabling consumers to make better dietary choices and lose weight more successfully than they would otherwise be able to. Accordingly, the Panel upheld the
complaint under Code rule 3.2(j). In the context that the packaging suggested the product had weight loss properties, and was mimicking a meal replacement drink, the Panel then considered whether there was anything on the
packaging of TinyFast which could encourage a consumer to drink irresponsibly as raised by the complainant. The Panel considered the complainants' wording carefully and noted that complainant two was concerned that the product was designed to mislead and
had raised Code rule 3.2(f) which included the wording irresponsible consumption. The Panel assessed the packaging in its entirety and considered that it was socially irresponsible for an alcoholic drink to create an association with a health product
known for being a meal replacement. After careful consideration of the Code rule 3.2(f) wording, and in the context of its decision under Code rule 3.2(j), the Panel considered that if a consumer believed that the product was a meal replacement, or had
weight loss properties, this could indirectly encourage a consumer to drink it to excess in order to gain the inferred health benefits of the product. The Panel considered that a consumer may then base their alcohol consumption on the purported health
benefits of the product, as opposed to making an informed consumption choice based on the amount of alcohol in the product and this, the Panel concluded, could reasonably lead to irresponsible consumption as a consumer might consume more than they
otherwise would have done. Accordingly, the Panel upheld the complaint under Code rule 3.2(f). As part of the discussion about whether the product indirectly encouraged irresponsible consumption, the name TinyFast was discussed
from the perspective that it may encourage fast consumption. The Panel considered that the word fast in the product name created a link to a style of consumption and encouraged a consumer to drink the product rapidly. Accordingly, the Panel also upheld
the complaint under Code rule 3.2(g). Finally, the Panel considered whether the can had a particular appeal to under-18s. The Panel discussed that while there were similarities between the packaging of TinyFast and a SlimFast
Strawberry Meal Shake, which was targeted at an adult audience, young children were unlikely to be aware of the similarities between the two brands. In that context, the Panel considered the overall impression conveyed by TinyFast and noted that it
included the image of a strawberry milkshake, strawberry fruit, the text strawberry flavour and the descriptor Strawberry Milkshake which was positioned above IPA and the drink's ABV. The Panel discussed the positioning of a beer as a milkshake and noted
that a milkshake IPA was known in the industry as referencing an IPA brewed with lactose and was likely to be understood by beer consumers. Alongside this, the Panel also noted that a strawberry milkshake, as a non-alcoholic sweet beverage, was likely to
have appeal to under-18s and that there were multiple strawberry milkshake products designed specifically for children. The Panel noted that in this instance the front label included an image of a milkshake with dripping sides and strawberry fruit
depicted in an illustrated style, which it considered, in combination with the above points, was likely to appeal to children. When considering the bear logo on the front of the can, in the context of a strawberry milkshake
flavoured drink, with product artwork that made the strawberry milkshake a dominant theme of the packaging, the Panel concluded that the overall impression conveyed was likely to have a particular appeal to under-18s. Accordingly, the complaint was
upheld under Code rule 3.2(h).
8th October 2023
The Last House On Dead End Street: See article from reprobatepress.com
Ofcom investigates BitChute as an early test case of internet censorship
7th October 2023
See press release from ofcom.org.uk
See also article from en.wikipedia.org
See bitchute.com
BitChute is a British-based video sharing platform that is particularly well known for hosting content that has been banned from more censorial websites, notably YouTube. The name was conceived from a portmanteau of the words bit, a unit of information in computing, and parachute. At the time of the site's launch, founder Ray Vahey described BitChute as an alternative to mainstream platforms; he believed these platforms had demonstrated increased levels of censorship
over the previous few years by banning and demonetising users (barring them from receiving advertising revenue), and tweaking algorithms to send certain content into obscurity. In 2018, the creators of BitChute described themselves as a small team making
a stand against Internet censorship because we believe it is the right thing to do. Of course right leaning opinion does not sit well with the British establishment, so it isn't a surprise to see it as an early example of interest from the UK
internet censor, Ofcom. Note that censorship powers have already been granted to Ofcom for video sharing platforms stupid enough to be based in Britain. So these platforms are an interesting forerunner to how Ofcom will censor the wider internet when
powers from the Online Censorship Bill are enacted. Ofcom writes:

Investigation into BitChute Limited

Case considered: 3 October 2023

Summary

Compliance assurances from BitChute regarding its obligations under Part 4B of the Communications Act 2003: improvements to the measures BitChute has in place to protect users from videos containing harmful material.

Ofcom's role is to ensure video-sharing platforms (VSPs) based in the UK have appropriate systems and processes in place as required under Part 4B of the Act to effectively protect their users from harmful video content in scope of the VSP regime. In May 2022, a far-right extremist carried out a racially motivated attack in Buffalo, New York. Ofcom conducted analysis of the measures in place to protect users from harmful material on several VSPs, including BitChute, in light of this incident.
Our analysis of BitChute's platform raised concerns that some of its measures were not effectively protecting users from encountering videos related to terrorism and other harmful material prohibited under the VSP regime.
Following a period of close engagement with BitChute to discuss its compliance with its obligations under Part 4B of the Communications Act 2003, it has made some important changes and also committed to further improvements to protect
users from harmful material.

Background

On 14 May 2022, a far-right extremist carried out a racially motivated attack in Buffalo, New York, killing ten people and wounding three others. The attacker livestreamed the shooting online, and versions of the footage were distributed on multiple online services, including BitChute and other UK-based VSPs that we currently regulate. This resulted in UK users being potentially exposed to harmful material
related to terrorism and material likely to incite violence and hatred. Ofcom's role is to ensure VSPs have appropriate systems and processes in place as required under Part 4B of the Act to effectively protect their users from
harmful video content in scope of the VSP regime. Our approach to securing compliance focuses on oversight, accountability, and transparency, working with the industry where possible to drive improvements, as well as taking formal enforcement action
where appropriate.

Our concerns

In the weeks following the Buffalo attack, we engaged with relevant VSPs and the wider industry to learn more about how platforms can set up internal systems and
processes to prevent the livestreaming of such attacks and protect users from the sharing of associated video content. In October 2022, Ofcom published a report on the Buffalo attack that explored how footage of the attack, and related material, came to
be disseminated online and to understand the implications of this for platforms' efforts to keep people safe online. Our analysis raised concerns that BitChute's reporting and flagging measures were not effectively protecting
users from encountering videos related to terrorism and other harmful material prohibited under the VSP regime. In particular, Ofcom was concerned that BitChute's reporting function was not open to non-registered users, and that the capacity and coverage
of BitChute's content moderation team was insufficient to enable it to respond promptly to reports of harmful content.

BitChute's commitments

In response to our concerns, BitChute has made some important changes to its reporting and flagging mechanisms and to the content moderation processes which underpin these, as well as committing to further changes.

1. Coverage and capacity of content moderation

In our 2022 VSP Report, published in October, we found that all VSPs, including BitChute, have adequate terms and conditions that prohibit material that would come within the scope of laws relating to terrorism, racism, and xenophobia, as well as material likely to incite violence or hatred. However, the Buffalo attack exposed key deficiencies in BitChute's ability to effectively enforce its terms and conditions relating to hate and terror content:
footage of the shooting was easily accessible on the platform in the days after the attack, and we learnt that the platform's content moderation team was modest in size and limited to certain working hours. This restricted BitChute's ability to respond
quickly to reports that footage was on the platform following the attack. BitChute has committed to triple the capacity of its moderation team by taking on more human moderators. It is also extending the coverage of its moderation
team by increasing the number of hours that moderators are available to review reports and has committed to having a safety team operational 24/7 in autumn 2023.

2. User reporting and flagging mechanisms
Prior to the Buffalo attack, BitChute had reporting and flagging mechanisms in place to allow users to report potentially harmful content. However, on-platform flagging was only available to users who had a registered BitChute
account. While all users (registered or unregistered) were able to report content by sending an email to BitChute, we were concerned that requiring non-registered users to email the platform, rather than click a reporting button next to the video,
introduces a layer of friction to the reporting process that could disincentivise the user from making a report and increase the time taken to respond to reports. As a result of our remediation work, BitChute has changed the
design of its platform to allow non-registered users to directly report potentially harmful content. It has also updated its user-facing guidelines to set out more clearly what registered and non-registered users can expect from the flagging and
reporting process.

3. Measuring effectiveness

BitChute has also committed to collecting additional metrics to measure the impact of changes made to its systems and processes, including the volume of
content review reports raised each day and average response time in minutes for content reports. These metrics will help BitChute and Ofcom to evaluate the effectiveness of the platform's measures more easily. We have also
encouraged BitChute to implement additional reports on risk metrics, that measure the risk of harmful material being encountered on the platform and process metrics, that measure the effectiveness of BitChute's moderation systems.
Our response

Taking into account BitChute's willingness to make timely improvements to its systems and processes to directly address the concerns we identified following the Buffalo incident, and our desire to work with
industry to secure changes that protect users, we have decided not to open an investigation against BitChute into its compliance with its duties under Part 4B of the Communications Act 2003 at this time. We will, however, closely monitor the
implementation of the proposed changes and the impact these changes have on user safety. We also note that, on 14 June 2023, BitChute became a member of the Global Internet Forum to Counter Terrorism (GIFCT). GIFCT is a
cross-industry initiative designed to prevent terrorists and violent extremists from exploiting digital platforms. Whilst we do not consider this an indicator of compliance, it is an encouraging step -- GIFCT has rigorous standards for membership,
including demonstrating "a desire to explore new technical solutions to counter terrorist and violent extremist activity online" and "support for expanding the capacity of civil society organisations to challenge terrorism and violent
extremism". While we welcome BitChute's commitments to further improvements and measuring their effectiveness, we are aware of reports -- some of which have been communicated to us directly -- alleging that content likely to
incite violence and hatred continues to be uploaded to BitChute, can be accessed easily, and may pose significant risks to users. It is important to note that the VSP regime is a systems and processes regime, meaning the presence
of harmful video content on a service is not in itself a breach of the rules. Accordingly, Ofcom's focus is to drive improvements to platforms' systems and processes to minimise the risks of users encountering harmful videos online in the first place.
However, such content can be indicative of an underlying issue with the user protections in place, and we will therefore continue to monitor BitChute closely to assess whether the changes it has made to its user reporting and content
moderation systems result in tangible improvements to user safety. If we find that, despite BitChute's assurances and improvements, users are not being adequately protected from the categories of harmful material covered by the VSP regime, we will not
hesitate to take further action, including formal enforcement action if necessary.
7th October 2023
Canada Plots to Increase Online Censorship, Targeting AI, Search and Social Media Algorithms: See article from reclaimthenet.org