Melon Farmers Original Version

Gooogle Privacy


Google's many run-ins with privacy


 

Taking the moral high road...

Google limits the authorities' access to people's location histories


Link Here 16th December 2023
Full story: Gooogle Privacy...Google's many run-ins with privacy

Google announced this week that it will be making several important changes to the way it handles users' "Location History" data. These changes would appear to make it much more difficult--if not impossible--for Google to provide mass location data in response to a geofence warrant, a change we've been asking Google to implement for years.

Geofence warrants require a provider--almost always Google--to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital communications. The only "evidence" supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation--and could turn each of those people into a suspect.

Geofence warrants have been possible because Google collects and stores specific user location data (which Google calls "Location History" data) altogether in a massive database called "Sensorvault." Google reported several years ago that geofence warrants make up 25% of all warrants it receives each year.

Google's announcement outlined three changes to how it will treat Location History data. First, going forward, this data will be stored, by default, on a user's device, instead of with Google in the cloud. Second, it will be set by default to delete after three months; currently Google stores the data for at least 18 months. Finally, if users choose to back up their data to the cloud, Google will "automatically encrypt your backed-up data so no one can read it, including Google."

All of this is fantastic news for users, and we are cautiously optimistic that this will effectively mean the end of geofence warrants. These warrants are dangerous. They threaten privacy and liberty because they not only provide police with sensitive data on individuals, they could turn innocent people into suspects. Further, they have been used during political protests and threaten free speech and our ability to speak anonymously, without fear of government repercussions. For these reasons, EFF has repeatedly challenged geofence warrants in criminal cases and worked with other groups (including tech companies) to push for legislative bans on their use.

However, we are not yet prepared to declare total victory. Google's collection of users' location data isn't limited to just the "Location History" data searched in response to geofence warrants; Google collects additional location information as well. It remains to be seen whether law enforcement will find a way to access these other stores of location data on a mass basis in the future. Also, none of Google's changes will prevent law enforcement from issuing targeted warrants for individual users' location data--outside of Location History--if police have probable cause to support such a search.

But for now, at least, we'll take this as a win. It's very welcome news for technology users as we usher in the end of 2023.

 

 

Offsite Article: Inside Google's Plan to Kill Snooping Cookies in Chrome and Android...


Link Here 12th April 2023
Full story: Gooogle Privacy...Google's many run-ins with privacy
Google has an offering for the world called Privacy Sandbox. Here's an exclusive peek into how it will work.

See article from gizmodo.com

 

 

Offsite Article: FLoC off Google...


Link Here 24th January 2022
Full story: Gooogle Privacy...Google's many run-ins with privacy
Google announces an improved, more minimal summary of Chrome website browsing history to be made available for the targeting of advertising

See article from theverge.com

 

 

Jumping from the privacy frying pan into the monopoly-abusing fire...

Google has delayed blocking third-party snooping cookies in its Chrome browser until 2023


Link Here 26th June 2021
Full story: Gooogle Privacy...Google's many run-ins with privacy
Google has delayed its plan to block third-party cookies from its Chrome internet browser. These are cookies that track and analyse users' internet activity and allow digital publishers to target advertising.

They are already blocked by a number of Google's rivals, including Apple, Microsoft and Mozilla.

Google was intending to replace third-party cookies, which allow subscribing companies to analyse people's browsing, with a system whereby only Google does the analysis and passes on the resulting summary of users' interests to advertisers in a supposedly anonymised format. Google called this scheme Federated Learning of Cohorts, or FLoC.

But critics say that Google's ban, by forcing ad sellers to go direct to the tech giant for this information, gives it an unfair market advantage. Google's proposals are already under investigation by the UK Competition and Markets Authority (CMA), which investigates monopolies.

Google's cookie ban had been planned for 2022 but has now been put back until 2023. In a blog post, Vinay Goel, privacy engineering director for Google's Chrome browser, said:

It's become clear that time is needed across the ecosystem in order to get this right.

Farhad Divecha, founder of digital marketing agency AccuraCast, said the delay was good news for his industry. He said:

We welcome this delay and only hope that Google uses this time to consult with the CMA as well as different parties that will be affected by the changes, including advertisers, agencies, publishers, and ad-tech and tracking solutions providers.

 

 

In an evil place...

US court documents reveal that Google has deliberately made it difficult for Android users to opt out of location snooping


Link Here 1st June 2021
Full story: Gooogle Privacy...Google's many run-ins with privacy
Court documents show Google admits privacy is almost impossible on Android

Last year, the Arizona Attorney General's office filed a lawsuit against Google, accusing the tech giant of unlawfully collecting Android users' location data, even for users who had opted out. Last week, a judge ordered Google to unredact some sections of documents submitted in court.

The documents revealed not only Google's objectionable data collection policies, but also its employees admitting the policies are confusing and should be changed. Documented employee comments include:

So there's no way to give a third party app your location and not Google? This doesn't sound like something we would want on the front page of the [New York Times].

Even after a user turned off location in the settings, Google still collected location data, the unredacted documents revealed.

In fact, Google tested versions of its OS that made privacy settings easy to find. It saw the popularity of those settings as a problem and solved it by burying the settings deeper in Android's settings menu, even pressuring phone manufacturers, such as LG, to make those settings harder to find.

 

 

Offsite Article: No FLoC...


Link Here 22nd April 2021
Full story: Gooogle Privacy...Google's many run-ins with privacy
Google's replacement for snooping on people's browsing history does not impress

See article from arstechnica.com

 

 

Google in a bad place...

Australian court finds that Google's Android settings sneakery left location tracking turned on by default even when careful users thought they had turned it off


Link Here 20th April 2021
Full story: Gooogle Privacy...Google's many run-ins with privacy
An Australian Federal Court has found that Google misled consumers about personal location data collected through Android mobile devices between January 2017 and December 2018.

The complaint was brought by the Australian Competition and Consumer Commission.

The Court ruled that when consumers created a new Google Account during the initial set-up process of their Android device, Google misrepresented that the Location History setting was the only Google Account setting that affected whether Google collected, kept or used personally identifiable data about their location. In fact, another Google Account setting titled Web & App Activity also enabled Google to collect, store and use personally identifiable location data when it was turned on, and that setting was turned on by default.

The Court also found that when consumers later accessed the Location History setting on their Android device during the same time period to turn that setting off, they were also misled because Google did not inform them that by leaving the Web & App Activity setting switched on, Google would continue to collect, store and use their personally identifiable location data.

The Court also found that Google's conduct was liable to mislead the public.

The ACCC is now seeking declarations, pecuniary penalties, publication orders, and compliance orders, to be determined at a later date. In addition to penalties, the ACCC is seeking an order for Google to publish a notice to Australian consumers to better explain Google's location data settings in the future.

 

 

Offsite Article: After Cookies...


Link Here 18th April 2021
Full story: Gooogle Privacy...Google's many run-ins with privacy
The EFF explains how Ad Tech Wants to Use Your Email to Track You Everywhere. By Bennett Cyphers

See article from eff.org

 

 

Google's FLoC Is a Terrible Idea...

Explaining Google's idea to match individuals to groups for targeting advertising. By Bennett Cyphers


Link Here 10th March 2021
Full story: Gooogle Privacy...Google's many run-ins with privacy

The third-party cookie is dying, and Google is trying to create its replacement.

No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet.

Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn't learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious--and potentially the most harmful.

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting.

Google's pitch to privacy advocates is that a world with FLoC (and other elements of the "privacy sandbox") will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between "old tracking" and "new tracking." It's not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.

We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web's biggest mistake. Ahead of us are two possible futures.

In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them--or leveraged to manipulate them--when they next open a tab.

In the other, each user's behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is "democratized" and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here's what I've been up to this week, please treat me accordingly.

Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.

What is FLoC?

In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT... the list goes on. Seriously. Each of the "bird" proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.

FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user's browsing habits, then use that information to assign its user to a "cohort" or group. Users with similar browsing habits--for some definition of "similar"--would be grouped into the same cohort. Each user's browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that's not a guarantee).

If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.

Google's proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user's machine, so there's no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
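
To make the clustering step concrete, here is a minimal sketch of SimHash-style cohort assignment running entirely on the client. The hash function, bit width and domain lists are illustrative assumptions; Google's actual implementation is not public in this level of detail.

```python
import hashlib

def simhash_cohort(domains: list[str], bits: int = 8) -> int:
    """Toy SimHash over visited domains: similar histories tend to collide
    into the same cohort ID. Illustrative only, not Google's production code."""
    votes = [0] * bits
    for domain in domains:
        h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            # Each domain votes +1 or -1 on each output bit of the cohort ID.
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, v in enumerate(votes) if v > 0)

# Runs locally in the browser: no raw history leaves the device, only the label.
print(simhash_cohort(["news.example", "shoes.example", "recipes.example"]))
print(simhash_cohort(["news.example", "shoes.example", "travel.example"]))
```

Under such a scheme, a central server would only need the resulting labels to count cohort sizes and merge any that fall below the minimum, as the proposal describes.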

According to the proposal, most of the specifics are still up in the air. The draft specification states that a user's cohort ID will be available via Javascript, but it's unclear whether there will be any restrictions on who can access it, or whether the ID will be shared in any other ways. FLoC could perform clustering based on URLs or page content instead of domains; it could also use a federated learning-based system (as the name FLoC implies) to generate the groups instead of SimHash. It's also unclear exactly how many possible cohorts there will be. Google's experiment used 8-bit cohort identifiers, meaning that there were only 256 possible cohorts. In practice that number could be much higher; the documentation suggests a 16-bit cohort ID comprising 4 hexadecimal characters. The more cohorts there are, the more specific they will be; longer cohort IDs will mean that advertisers learn more about each user's interests and have an easier time fingerprinting them.

One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week's browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.

New privacy problems

FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks.

Fingerprinting

The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user's browser to create a unique, stable identifier for that browser. EFF's Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others', the easier it is to fingerprint.

Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn't distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy--up to 8 bits, in Google's proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
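
A back-of-envelope calculation shows why a few bits matter so much to fingerprinters. The population figure below is an assumption chosen only for illustration.

```python
import math

population = 300_000_000   # assumed pool of users a tracker must tell apart
cohort_bits = 8            # entropy of a cohort ID in Google's proof of concept

# Each bit of cohort entropy roughly halves the crowd a user can hide in.
candidates_left = population / 2 ** cohort_bits
print(f"~{candidates_left:,.0f} candidates remain")   # ~1.2 million

# Bits a fingerprinter still needs, with and without the FLoC head start.
print(f"{math.log2(candidates_left):.1f} vs {math.log2(population):.1f} bits")
```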

Google has acknowledged this as a challenge, but has pledged to solve it as part of the broader "Privacy Budget" plan it has to deal with fingerprinting long-term. Solving fingerprinting is an admirable goal, and its proposal is a promising avenue to pursue. But according to the FAQ, that plan is "an early stage proposal and does not yet have a browser implementation." Meanwhile, Google is set to begin testing FLoC as early as this month.

Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy--which is what FLoC is. Google should not create new fingerprinting risks until it's figured out how to deal with existing ones.

Cross-context exposure

The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user's cohort will necessarily reveal information about their behavior.

The project's Github page addresses this up front:

This API democratizes access to some information about an individual's general browsing history (and thus, general interests) to any site that opts into it. ... Sites that know a person's PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual's interests may eventually become public.

As described above, FLoC cohorts shouldn't work as identifiers by themselves. However, any company able to identify a user in other ways--say, by offering "log in with Google" services to sites around the Internet--will be able to tie the information it learns from FLoC to the user's profile.

Two categories of information may be exposed in this way:

  • Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites.

  • General information about demographics or interests. Observers may learn that, in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.

This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.

You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there's no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn't need to know whether you've recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.

Beyond privacy

FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we've shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC's core objective is at odds with other civil liberties.

The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes.

Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history--or characteristics systematically associated with it--enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.

Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers' ability to target people in "sensitive interest categories." However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.

Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics--demographics like gender, ethnicity, age, and income; "big 5" personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.

Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users' browsers to group themselves again.

This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users' race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other "sensitive categories" are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.

In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won't be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings "mean"--what kinds of people they contain--through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability--after all, they aren't directly targeting protected categories, they're just reaching people based on behavior. And the whole system will be more opaque to users and regulators.

Google, please don't do this

We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC "the opposite of privacy-preserving technology." We hoped that the standards process would shed light on FLoC's fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a "95% effective" replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it's deploying the technology for a trial run. A small portion of Chrome users--still likely millions of people--will be (or have been) assigned to test the new technology.

Make no mistake, if Google does follow through on its plan to implement FLoC in Chrome, it will likely give everyone involved "options." The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for "transparency and user control," knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie--the technology that Google helped extend well past its shelf life, making billions of dollars in the process.

It doesn't have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.

We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.

 

 

Ethical snooping...

Google promises not to replace cookie-based web browsing snooping with another privacy-invasive method of snooping


Link Here 3rd March 2021
Full story: Gooogle Privacy...Google's many run-ins with privacy
David Temkin, Google's Director of Product Management, Ads Privacy and Trust, has been commenting on Google's progress in reducing personalised advertising based on snooping on people's browsing history. Temkin commented:

72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or other companies, and 81% say that the potential risks they face because of data collection outweigh the benefits, according to a study by Pew Research Center. If digital advertising doesn't evolve to address the growing concerns people have about their privacy and how their personal identity is being used, we risk the future of the free and open web.

That's why last year Chrome announced its intent to remove support for third-party cookies, and why we've been working with the broader industry on the Privacy Sandbox to build innovations that protect anonymity while still delivering results for advertisers and publishers. Even so, we continue to get questions about whether Google will join others in the ad tech industry who plan to replace third-party cookies with alternative user-level identifiers. Today, we're making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products.

We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not -- like PII [Personally Identifying Information] graphs based on people's email addresses. We don't believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions, and therefore aren't a sustainable long term investment. Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers.

People shouldn't have to accept being tracked across the web in order to get the benefits of relevant advertising. And advertisers don't need to track individual consumers across the web to get the performance benefits of digital advertising.

Advances in aggregation, anonymization, on-device processing and other privacy-preserving technologies offer a clear path to replacing individual identifiers. In fact, our latest tests of FLoC [Federated Learning of Cohorts] show one way to effectively take third-party cookies out of the advertising equation and instead hide individuals within large crowds of people with common interests. Chrome intends to make FLoC-based cohorts available for public testing through origin trials with its next release this month, and we expect to begin testing FLoC-based cohorts with advertisers in Google Ads in Q2. Chrome also will offer the first iteration of new user controls in April and will expand on these controls in future releases, as more proposals reach the origin trial stage, and they receive more feedback from end users and the industry.

This points to a future where there is no need to sacrifice relevant advertising and monetization in order to deliver a private and secure experience.

 

 

Class action...

Privacy campaigner takes Google to court claiming illegal use of children's data


Link Here 13th September 2020
Full story: Gooogle Privacy...Google's many run-ins with privacy
Privacy campaigner Duncan McCann has filed a legal case accusing YouTube of selling the data of children using its service to advertisers, in contravention of EU and UK law. The case was lodged with the UK High Court in July and is the first of its kind in Europe.

It is understood that Google will strongly dispute the claim. One of its arguments is that the main YouTube platform is not intended for those under 13, who should be using the YouTube Kids app, which incorporates more safeguards.

Google is also expected to point to a series of changes that it introduced last year to improve notification to parents, limit data collection and restrict personalised adverts.

The case seeks compensation of £500 payable to those whose data was breached. But crucially it would set a precedent, potentially making YouTube liable for payouts to the estimated five million children in Britain who use the site as well as their parents or guardians.

McCann said:

It cannot be right that Google can take children's private data without explicit permission and then sell it to advertisers to target children. I believe it is only through legal action and damages that these companies will change their behaviour, and it is only through a class action that we can fight these companies on an equal basis.

The case, which focuses on children who have watched YouTube since May 2018 when the Data Protection Act became law, is backed by digital privacy campaigners Foxglove, and the global law firm Hausfeld. The case is not expected to come to court before next autumn and has been underwritten by Vannin Capital, a company which will take a cut of any compensation that remains unclaimed. The action will also depend on the outcome of another data and privacy case against Google which does not cover children.

 

 

Australian data censor calls out Google...

Google found to be exploiting users' personal data without consent


Link Here 27th July 2020
Full story: Gooogle Privacy...Google's many run-ins with privacy
Australia's competition regulator has launched court proceedings against Alphabet's Google for allegedly misleading consumers about the expanded use of personal data for targeted advertising.

The case by the Australian Competition and Consumer Commission (ACCC) in Federal Court said Google did not explicitly get consent nor properly inform consumers about a 2016 move to combine personal information in Google accounts with activities on non-Google websites that use its technology.

The regulator said this practice allowed the Alphabet Inc unit to link consumers' names and other identifying details with their behaviour elsewhere on the internet.

 

 

Offsite Article: Free websites, advertising revenues and privacy...


Link Here 29th January 2020
Full story: Gooogle Privacy...Google's many run-ins with privacy
If Chrome fixes privacy too fast it could break the web, Google exec debates advertising revenue vs privacy

See article from cnet.com

 

 

Offsite Article: Searching for better privacy...


Link Here 15th January 2020
Full story: Gooogle Privacy...Google's many run-ins with privacy
Google to strangle user agent strings in its Chrome browser to hamper advertisers from profiling users via fingerprinting

See article from zdnet.com

 

 

Sensitive changes...

Google to withhold details from advertisers about where people are browsing on the internet


Link Here 17th November 2019
Full story: Gooogle Privacy...Google's many run-ins with privacy
In what sounds like a profound change to the commercial profiling of people's website browsing history, Google has announced that it will withhold data from advertisers that categorises web pages.

In response to the misuse of medical-related browsing data, Google has announced that from February 2020 it will cease to inform advertisers about the content of webpages where advertising space is up for auction. Presumably this is something along the lines of Google having an available advert slot on worldwidepharmacy.com but not telling the advertiser that John Doe is browsing an STD diagnosis page, though the advertiser will still be informed of the URL.

Chetna Bindra, senior product manager of trust and privacy at Google, wrote:

While we already prohibit advertisers from using our services to build user profiles around sensitive categories, this change will help avoid the risk that any participant in our auctions is able to associate individual ad identifiers with Google's contextual content categories.

Google also plans to update its EU User Consent Policy audit program for publishers and advertisers, as well as its audits for the Authorized Buyers program, and to continue to engage with data protection authorities, including the Irish Data Protection Commission as it continues its investigation into data protection practices in the context of Authorized Buyers.

Although this sounds like very good news for people wishing to keep their sensitive data private, it may not be so good for advertisers, who will see costs rise, or publishers, who will see incomes fall.

And of course Google itself will still know that John Doe has been browsing STD diagnosis pages. There could be other consequences too, such as advertisers sending out their own bots to categorise likely advertising slots.

 

 

Offsite Article: Subtly identifying de-anonymised internet users...


Link Here 6th September 2019
Full story: Gooogle Privacy...Google's many run-ins with privacy
Brave presents new technical evidence about personalised advertising, having uncovered a mechanism by which Google appears to be circumventing its purported GDPR privacy protections

See article from brave.com

 

 

Don't Play in Google's Privacy Sandbox...

A detailed technical investigation of Google's advanced tools designed to profile internet users for advertising


Link Here 31st August 2019
Full story: Gooogle Privacy...Google's many run-ins with privacy

Last week, Google announced a plan to build a more private web. The announcement post was, frankly, a mess. The company that tracks user behavior on over 2/3 of the web said that "Privacy is paramount to us, in everything we do."

Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies -- by far the most common tracking technology on the Web, and Google's tracking method of choice -- will hurt user privacy. By taking away the tools that make tracking easy, it contended, developers like Apple and Mozilla will force trackers to resort to opaque techniques like fingerprinting. Of course, lost in that argument is the fact that the makers of Safari and Firefox have shown serious commitments to shutting down fingerprinting, and both browsers have made real progress in that direction. Furthermore, a key part of the Privacy Sandbox proposals is Chrome's own (belated) plan to stop fingerprinting.

But hidden behind the false equivalencies and privacy gaslighting are a set of real technical proposals. Some are genuinely good ideas. Others could be unmitigated privacy disasters. This post will look at the specific proposals under Google's new Privacy Sandbox umbrella and talk about what they would mean for the future of the web.

The good: fewer CAPTCHAs, fighting fingerprints

Let's start with the proposals that might actually help users.

First up is the Trust API. This proposal is based on Privacy Pass, a privacy-preserving and frustration-reducing alternative to CAPTCHAs. Instead of having to fill out CAPTCHAs all over the web, with the Trust API, users will be able to fill out a CAPTCHA once and then use trust tokens to prove that they are human in the future. The tokens are anonymous and not linkable to one another, so they won't help Google (or anyone else) track users. Since Google is the single largest CAPTCHA provider in the world, its adoption of the Trust API could be a big win for users with disabilities, users of Tor, and anyone else who hates clicking on grainy pictures of storefronts.

Google's proposed privacy budget for fingerprinting is also exciting. Browser fingerprinting is the practice of gathering enough information about a specific browser instance to try to uniquely identify a user. Usually, this is accomplished by combining easily accessible information like the user agent string with data from powerful APIs like the HTML canvas. Since fingerprinting extracts identifying data from otherwise-useful APIs, it can be hard to stop without hamstringing legitimate web apps. As a workaround, Google proposes limiting the amount of data that websites can access through potentially sensitive APIs. Each website will have a budget, and if it goes over budget, the browser will cut off its access. Most websites won't have any use for things like the HTML canvas, so they should be unaffected. Sites that need access to powerful APIs, like video chat services and online games, will be able to ask the user for permission to go over budget. The devil will be in the details, but the privacy budget is a promising framework for combating browser fingerprinting.
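
As a thought experiment, the budget mechanic could look something like the sketch below. The API names, per-API entropy costs and the 12-bit cap are all invented for illustration; Chrome's real proposal had not fixed any of these numbers.

```python
# Hypothetical per-origin privacy budget; all costs and the cap are assumptions.
ENTROPY_COST_BITS = {
    "userAgent": 2.0,
    "screenResolution": 3.5,
    "canvasReadback": 10.0,
    "webglRendererInfo": 6.0,
}
BUDGET_BITS = 12.0

class PrivacyBudget:
    def __init__(self) -> None:
        self.spent: dict[str, float] = {}  # entropy already revealed, per origin

    def allow(self, origin: str, api: str) -> bool:
        cost = ENTROPY_COST_BITS.get(api, 0.0)
        if self.spent.get(origin, 0.0) + cost > BUDGET_BITS:
            return False  # over budget: deny, or hand back a generic value
        self.spent[origin] = self.spent.get(origin, 0.0) + cost
        return True

budget = PrivacyBudget()
print(budget.allow("https://game.example", "canvasReadback"))     # True  (10 <= 12)
print(budget.allow("https://game.example", "webglRendererInfo"))  # False (16 > 12)
```

A site that legitimately needs to exceed the cap, such as a video-chat service, would under the proposal prompt the user for permission rather than being silently cut off.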

Unfortunately, that's where the good stuff ends. The rest of Google's proposals range from mediocre to downright dangerous.

The bad: Conversion measurement

Perhaps the most fleshed-out proposal in the Sandbox is the conversion measurement API. This is trying to tackle a problem as old as online ads: how can you know whether the people clicking on an ad ultimately buy the product it advertised? Currently, third-party cookies do most of the heavy lifting. A third-party advertiser serves an ad on behalf of a marketer and sets a cookie. On its own site, the marketer includes a snippet of code which causes the user's browser to send the cookie set earlier back to the advertiser. The advertiser knows when the user sees an ad, and it knows when the same user later visits the marketer's site and makes a purchase. In this way, advertisers can attribute ad impressions to page views and purchases that occur days or weeks later.

Without third-party cookies, that attribution gets a little more complicated. Even if an advertiser can observe traffic around the web, without a way to link ad impressions to page views, it won't know how effective its campaigns are. After Apple started cracking down on advertisers' use of cookies with Intelligent Tracking Prevention (ITP), it also proposed a privacy-preserving ad attribution solution. Now, Google is proposing something similar. Basically, advertisers will be able to mark up their ads with metadata, including a destination URL, a reporting URL, and a field for extra impression data -- likely a unique ID. Whenever a user sees an ad, the browser will store its metadata in a global ad table. Then, if the user visits the destination URL in the future, the browser will fire off a request to the reporting URL to report that the ad was converted.
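
A rough sketch of that browser-side flow follows. The field names, the report format and the attribution logic are assumptions made for illustration, not the draft API's exact surface.

```python
from urllib.parse import urlencode

ad_table = []  # the browser's global table of ad impressions

def record_impression(destination: str, reporting_url: str, impression_data: int) -> None:
    """Store the metadata attached to an ad the user has just seen."""
    ad_table.append({
        "destination": destination,
        "reporting_url": reporting_url,
        "impression_data": impression_data,  # up to 64 bits in Google's draft
    })

def on_navigation(url: str) -> None:
    """If the user later lands on a stored destination, report a conversion."""
    for entry in ad_table:
        if url.startswith(entry["destination"]):
            query = urlencode({"impression": entry["impression_data"]})
            print("browser would fetch:", f"{entry['reporting_url']}?{query}")

record_impression("https://shop.example/boots", "https://ads.example/convert", 7)
on_navigation("https://shop.example/boots/checkout")
```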

In theory, this might not be so bad. The API should allow an advertiser to learn that someone saw its ad and then eventually landed on the page it was advertising; this can give raw numbers about the campaign's effectiveness without individually-identifying information.

The problem is the impression data. Apple's proposal allows marketers to store just 6 bits of information in a campaign ID, that is, a number between 1 and 64. This is enough to differentiate between ads for different products, or between campaigns using different media.

On the other hand, Google's ID field can contain 64 bits of information -- a number between 1 and 18 quintillion. This will allow advertisers to attach a unique ID to each and every ad impression they serve, and, potentially, to connect ad conversions with individual users. If a user interacts with multiple ads from the same advertiser around the web, these IDs can help the advertiser build a profile of the user's browsing habits.
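
The scale of that difference is easy to check; the figures below simply expand the 6-bit and 64-bit widths quoted above.

```python
campaign_ids_apple = 2 ** 6       # 64 values: enough to label campaigns, not people
impression_ids_google = 2 ** 64   # enough to tag every single impression uniquely

print(campaign_ids_apple)              # 64
print(f"{impression_ids_google:.1e}")  # 1.8e+19, i.e. roughly 18 quintillion
```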

The ugly: FLoC

Even worse is Google's proposal for Federated Learning of Cohorts (or FLoC). Behind the scenes, FLoC is based on Google's pretty neat federated learning technology. Basically, federated learning allows users to build their own, local machine learning models by sharing little bits of information at a time. This allows users to reap the benefits of machine learning without sharing all of their data at once. Federated learning systems can be configured to use secure multi-party computation and differential privacy in order to keep raw data verifiably private.

The problem with FLoC isn't the process, it's the product. FLoC would use Chrome users' browsing history to do clustering. At a high level, it will study browsing patterns and generate groups of similar users, then assign each user to a group (called a flock). At the end of the process, each browser will receive a flock name which identifies it as a certain kind of web user. In Google's proposal, users would then share their flock name, as an HTTP header, with everyone they interact with on the web.
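
Illustratively, that per-request disclosure could look like the snippet below; the header name and label format are hypothetical, since the proposal had not settled on a wire format.

```python
# Hypothetical request headers: the flock label rides along to every site visited.
flock_request_headers = {
    "Host": "news.example",
    "User-Agent": "ExampleBrowser/1.0",
    "Sec-CH-Flock": "43A7",  # the user's weekly behavioural label (name and value assumed)
}

for name, value in flock_request_headers.items():
    print(f"{name}: {value}")
```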

This is, in a word, bad for privacy. A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate. The flock names will likely be inscrutable to users, but could reveal incredibly sensitive information to third parties. Trackers will be able to use that information however they want, including to augment their own behind-the-scenes profiles of users.

Google says that the browser can choose to leave sensitive data from browsing history out of the learning process. But, as the company itself acknowledges, different data is sensitive to different people; a one-size-fits-all approach to privacy will leave many users at risk. Additionally, many sites currently choose to respect their users' privacy by refraining from working with third-party trackers. FLoC would rob these websites of such a choice.

Furthermore, flock names will be more meaningful to those who are already capable of observing activity around the web. Companies with access to large tracking networks will be able to draw their own conclusions about the ways that users from a certain flock tend to behave. Discriminatory advertisers will be able to identify and filter out flocks which represent vulnerable populations. Predatory lenders will learn which flocks are most prone to financial hardship.

FLoC is the opposite of privacy-preserving technology. Today, trackers follow you around the web, skulking in the digital shadows in order to guess at what kind of person you might be. In Google's future, they will sit back, relax, and let your browser do the work for them.

The ugh: PIGIN

That brings us to PIGIN. While FLoC promises to match each user with a single, opaque group identifier, PIGIN would have each browser track a set of interest groups that it believes its user belongs to. Then, whenever the browser makes a request to an advertiser, it can send along a list of the user's interests to enable better targeting.

Google's proposal devotes a lot of space to discussing the privacy risks of PIGIN. However, the protections it discusses fall woefully short. The authors propose using cryptography to ensure that there are at least 1,000 people in an interest group before disclosing a user's membership in it, as well as limiting the maximum number of interests disclosed at a time to 5. This limitation doesn't hold up to much scrutiny: membership in 5 distinct groups, each of which contains just a few thousand people, will be more than enough to uniquely identify a huge portion of users on the web. Furthermore, malicious actors will be able to game the system in a number of ways, including to learn about users' membership in sensitive categories. While the proposal gives a passing mention to using differential privacy, it doesn't begin to describe how, specifically, that might alleviate the myriad privacy risks PIGIN raises.
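
A quick back-of-envelope check supports that claim. The population and group sizes below are assumptions, and treating memberships as independent overstates the precision somewhat, but the direction is clear.

```python
population = 1_000_000_000   # assumed web population
group_size = 5_000           # a "few thousand" members per interest group
disclosed = 5                # PIGIN's cap on interests revealed at once

p = group_size / population
# Expected number of *other* people sharing all five disclosed groups,
# under an independence approximation.
expected_others = population * p ** disclosed
print(f"{expected_others:.1e}")   # ~3.1e-18: the combination is effectively unique
```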

Google touts PIGIN as a win for transparency and user control. This may be true to a limited extent. It would be nice to know what information advertisers use to target particular ads, and it would be useful to be able to opt-out of specific interest groups one by one. But like FLoC, PIGIN does nothing to address the bad ways that online tracking currently works. Instead, it would provide trackers with a massive new stream of information they could use to build or augment their own user profiles. The ability to remove specific interests from your browser might be nice, but it won't do anything to prevent every company that's already collected it from storing, sharing, or selling that data. Furthermore, these features of PIGIN would likely become another option that most users don't touch. Defaults matter. While Apple and Mozilla work to make their browsers private out of the box, Google continues to invent new privacy-invasive practices for users to opt-out of.

It's never about privacy

If the Privacy Sandbox won't actually help users, why is Google proposing all these changes?

Google can probably see which way the wind is blowing. Safari's Intelligent Tracking Prevention and Firefox's Enhanced Tracking Protection have severely curtailed third-party trackers' access to data. Meanwhile, users and lawmakers continue to demand stronger privacy protections from Big Tech. While Chrome still dominates the browser market, Google might suspect that the days of unlimited access to third-party cookies are numbered.

As a result, Google has apparently decided to defend its business model on two fronts. First, it's continuing to argue that third-party cookies are actually fine, and companies like Apple and Mozilla who would restrict trackers' access to user data will end up harming user privacy. This argument is absurd. But unfortunately, as long as Chrome remains the most popular browser in the world, Google will be able to single-handedly dictate whether cookies remain a viable option for tracking most users.

At the same time, Google seems to be hedging its bets. The Privacy Sandbox proposals for conversion measurement, FLoC, and PIGIN are each aimed at replacing one of the existing ways that third-party cookies are used for targeted ads. Google is brainstorming ways to continue serving targeted ads in a post-third-party-cookie world. If cookies go the way of the pop-up ad, Google's targeting business will continue as usual.

The Sandbox isn't about your privacy. It's about Google's bottom line. At the end of the day, Google is an advertising company that happens to make a browser.

 

 

Offsite Article: Google defends tracking cookies...


Link Here 27th August 2019
Full story: Gooogle Privacy...Google's many run-ins with privacy
Banning tracking cookies jeopardizes the future of the vibrant Web. By Timothy B. Lee

See article from arstechnica.com

 

 

Offsite Article: Google's new reCAPTCHA has a dark side...


Link Here 28th June 2019
Full story: Gooogle Privacy...Google's many run-ins with privacy
Analysing the way you navigate around websites and hassling those it decides aren't doing it right

See article from fastcompany.com

 

 

Having to ask Google to find the way to opt out of personalised advertising...

Google fined 50 million euros for not obtaining clear consent when snooping on browsing history so as to personalise adverts


Link Here 22nd January 2019
Full story: Gooogle Privacy...Google's many run-ins with privacy

Google has been fined 50 million euros by the French data censor CNIL, for a breach of the EU's data protection rules.

CNIL said it had levied the record fine for lack of transparency, inadequate information and lack of valid consent regarding ads personalisation. It judged that people were not sufficiently informed about how Google collected data to personalise advertising and that Google had not obtained clear consent to process data because essential information was disseminated across several documents. The relevant information is accessible after several steps only, implying sometimes up to five or six actions, CNIL said.

In a statement, Google said it was studying the decision to determine its next steps.

The first complaint under the EU's new General Data Protection Regulation (GDPR) was filed on 25 May 2018, the day the legislation took effect. The groups behind the filing claimed Google did not have a valid legal basis to process user data for ad personalisation, as mandated by the GDPR.

Many internet companies rely on vague wording such as 'improving user experience' to gain consent for a wide range of data uses, but the GDPR provides that consent is 'specific' only if it is given distinctly for each purpose.

Perhaps this fine may help protect data gathered on UK porn users under the upcoming age verification requirements. Requiring consent for narrowly defined data usages may mean action could be taken to prevent user identities and browsing histories from being sold on.

 

 

General Data Protection Rights abuse...

Google may continue to use facial recognition to tag pictures obtained from Google Photos without obtaining consent


Link Here 2nd January 2019
Full story: Gooogle Privacy...Google's many run-ins with privacy
A US federal judge has thrown out a lawsuit alleging that Google's non-consensual use of facial recognition technology violated users' privacy rights, allowing the tech giant to continue to scan and store their biometric data.

The lawsuit, filed in 2016, alleged that Google violated Illinois state law by collecting users' biometric data without their consent. The data was harvested from pictures stored on Google Photos.

The plaintiffs wanted more than $5 million in damages for hundreds of thousands of users affected, arguing that the unauthorized scanning of their faces was a violation of the Illinois Biometric Information Privacy Act, which completely outlaws the gathering of biometric information without consent.

Google countered, claiming that the plaintiffs were not entitled to any compensation, as they had not been harmed by the data collection. On Saturday, US District Judge Edmond E. Chang sided with the tech giant, ruling that the plaintiffs had not suffered any concrete harm, and dismissing the suit.

As well as allowing Google to continue the practice, the ruling could have implications for other cases pending against Facebook and Snapchat. Both companies are currently being sued for violating the Illinois act.

 

 

Offsite Article: Google sued for secretly tracking millions of UK iPhone users...


Link Here 23rd May 2018
Full story: Gooogle Privacy...Google's many run-ins with privacy
Google accused of bypassing default browser Safari's privacy settings to collect a broad range of data and deliver targeted advertising.

See article from alphr.com


