- https://desfontain.es/privacy/almost-differential-privacy.ht... (describes a core intuition behind the system described in our paper)
I also think Section 2 of the paper should be readable by most folks with a basic understanding of SQL and differential privacy.
When I studied cryptographic voting systems, my "aha" moment was realizing the magic sauce is creating hash collisions so that a secure one-way hash can be used to protect voter privacy.
Re-re-reading the differential privacy stuff, this para jumped out:
"The intuition for the 2006 definition of ε-differential privacy is that a person's privacy cannot be compromised by a statistical release if their data are not in the database. Therefore with differential privacy, the goal is to give each individual roughly the same privacy that would result from having their data removed."
Oh. The "differential" part means modeling the difference between data captured and not captured.
I think (hope) this means figuring out how much to fuzz the captured data so that hash collisions will match real-world fuzzing.
Again, I'll continue to try to grok this stuff. Real world (story book style) examples will be very helpful.
Until I do understand, I think it's crucial for crypto and privacy minded people to quantify the assumptions and context involved. When I was working on election integrity (and medical records & guarding patient privacy), all the discussions were just make-believe. I did help author a govt report meant to help quantify the attack surface area for election administration. But I don't think it did much good, nor was it replicable (to new contexts).
So bravo. Please keep going.
1) How could aggregated data (means, averages, min/max) be used by attackers? Isn't aggregated data already private? For example, the Google Postgres extension returns aggregated data, so why is DP required here?
2) In the case of sharing entire databases, if all the PII are removed, why does it matter that we can match two records from two databases? Yes we can do correlation between 2 databases, but if PII were not gathered and stored at all in any database, there would be no privacy issue in the first place.
1) Note that the "min/max" example trivially leaks individual information: for example, releasing the max salary of employees of a company leaks the salary of the CEO. More generally, there have been numerous attacks on privacy notions purely based on aggregate data. One of my favorite is this one: https://blog.acolyer.org/2017/05/15/trajectory-recovery-from...
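To make the "aggregates still leak" point concrete, here's a toy differencing attack (all names and salaries are made up for illustration): two individually innocuous sum queries combine to reveal one person's exact value, which is exactly the kind of inference DP's noise is designed to blunt.

```python
# Differencing attack: two innocent-looking aggregate queries
# combine to reveal one individual's exact salary.
salaries = {"alice": 82_000, "bob": 95_000, "carol": 310_000}  # hypothetical data

# Query 1: "total payroll" -- looks like a harmless aggregate.
total_all = sum(salaries.values())

# Query 2: "total payroll for everyone except carol" -- also an aggregate.
total_without_carol = sum(v for k, v in salaries.items() if k != "carol")

# Subtracting the two aggregates recovers carol's exact salary.
leaked = total_all - total_without_carol
print(leaked)  # 310000
```

Under DP, each query's answer would carry enough noise that this subtraction no longer pins down any single row.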
2) Typically, PII is not the only thing that can be used to reidentify someone, and matching records from different databases can sometimes infer sensitive information about people. One example: https://www.cs.cornell.edu/~shmat/netflix-faq.html
Applying differential privacy to that Netflix case study would be a terrific exercise.
1) The CEO example isn't really a good one to me; given the wealth inequalities in the world, leaking a CEO's salary is almost desirable... I tried to read the blog post and paper about mobile location data. At one point they talk about aggregated data, but then the paper says: "This dataset is collected by a major mobile network operator in China. It is a large-scale dataset including 100,000 mobile users with the duration of one week, between April 1st and 7th, 2016. It records the spatiotemporal information of mobile subscribers when they access cellular network (i.e., making phone calls, sending texts, or consuming data plan). It also contains anonymous user identification, accessed base stations and timestamp of each access." So... the data is not really "aggregated"? The dataset literally lists some user IDs.
2) If I'm fired because my boss didn't like my movie history, then it can probably be defended in court, depending on the country. I could also find another boss who has a natural sense of ethics and who doesn't judge me for what I watch.
Thank you for the links anyway. I will look at them again in a few days to see if I missed something.
Your value judgement about a potential attack vector doesn't disqualify that it is an attack vector.
I'm wondering if you have any thoughts on Frank McSherry's old blog post expressing his distrust of approximate DP. He seems to have different intuitions than your "almost DP" post expresses, and makes criticisms that aren't quite addressed in your post.
This whole area of research seems like it exists as a way to rationalize wide-scale data collection. Rather than focusing on the collective rights of all people being tracked, it focuses on the risk that any individual person faces from an attacker.
Let's say your mapping app asks, 'Do you want to contribute traffic information during your drive to help provide a better navigation experience for everyone?' If you click "yes" and opt in, do you, or do you not, want this to use Differential Privacy?
Differentially-private data analysis is a principled approach that enables organizations to learn from the majority of their data while simultaneously ensuring that those results do not allow any individual's data to be distinguished or re-identified.
> The 2006 paper presents both a mathematical definition of differential privacy and a mechanism based on the addition of Laplace noise that satisfies the definition.
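That Laplace mechanism is short enough to sketch. A counting query has sensitivity 1 (adding or removing one person changes the count by at most 1), so adding noise drawn from Laplace(1/ε) satisfies ε-differential privacy. This is my own toy code, not Google's library; all names and numbers are made up for illustration.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    # Inverse-transform sampling from Laplace(0, scale):
    # draw u ~ Uniform(-0.5, 0.5), map through the inverse CDF.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1, so Laplace(1/epsilon)
    # noise gives epsilon-differential privacy for the count.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

ages = [34, 29, 61, 45, 52, 38]                  # hypothetical records
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5)
print(round(noisy, 1))                            # true count is 3, plus noise
```

Smaller ε means more noise and stronger privacy; averaged over many hypothetical releases the noisy counts center on the true answer, but no single release pins down whether any one record is present.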
So instead, you ask the person taking the survey to secretly flip a coin and privately look at the result. Then, you word the question as "If the coin came up heads, OR if you've had an affair, check this box".
When you look at overall survey data, and you see 51% of people checked the box, then it's reasonable to infer that 2% of them had an affair. But even if I de-anonymized the survey and saw you checked the box, I don't really have good evidence that you had an affair.
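That coin-flip scheme is classic randomized response, and it's easy to simulate. The 51% → 2% inversion falls out of observed = 0.5 + 0.5 × true_rate. A toy sketch, with a made-up true rate:

```python
import random

def randomized_response(truth: bool) -> bool:
    # "If the coin came up heads, OR if you've had an affair, check this box."
    # Heads (probability 0.5) forces a "yes"; tails answers honestly.
    return True if random.random() < 0.5 else truth

# Simulate a population where the true affair rate is 2%.
n = 200_000
answers = [randomized_response(random.random() < 0.02) for _ in range(n)]
observed = sum(answers) / n        # expected around 0.51

# Invert the expectation: observed = 0.5 + 0.5 * rate  =>  rate = 2*observed - 1
estimated_rate = 2 * observed - 1
print(round(observed, 3), round(estimated_rate, 3))
```

Any individual "yes" is plausibly just a heads, which is exactly the deniability the comment describes; only the aggregate estimate is informative.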
I definitely share mixed feelings about these adtech companies, and tend to think these personalized, targeted surveillance ads should die. But what about just plain old contextual, relevant ads, e.g. ads for car parts on a hot rod site?
There's a place for web advertising in general - it supports "free" sites better than any other model invented thus far. I don't think it's gonna be possible to burn the whole place down and go back to 1993.
Subscriptions work for large high-value offerings like Netflix or NY Times, but average people with small sites are unlikely to be able to provide that level of value. Other alternatives like micropayments always seem to fail, because even if viewing an article costs 1-cent or 1/100 cent, it causes users to hand-wring and watch their usage carefully, deliberate about their browsing choices and eventually stop. It's a psychological thing - people loved it when AOL switched from by-the-minute billing to unlimited, even though they generally paid less under the old scheme. Suddenly they were free to just browse without thinking about it, which is also what ad-sponsored content enables on the web.
There are certainly people who just create for attention / altruism alone and get no material reward - more power to them - but they'll have to keep their day jobs.
In a way I'm gratified to see adblocking taking off, because the ad industry constantly misbehaves and does every sleazy desperate thing it can - from seizure-flashes and automatically-opening popup windows in the 90s, to performance-killing HTML5 garbage ads, distracting animations and creepy privacy-invading remarketing stuff today. Desperate, annoying junk that is indeed killing the web (and people's trust in advertising).
I personally think a move back to classy, non-personalized ads could be the way to go, ideally static images that don't track you and don't fry your CPU & battery with 700 JavaScript libraries. That may be wishful thinking too. But perhaps the ad industry can try a little harder to stem the tide of garbage and win people's trust back a bit.
Those existed before Google/Facebook/et al were a thing. You don’t need “adtech” for them.
Adtech by and large refers to the insanely complex tech infrastructure behind tracking people across the Internet and using a variety of tricks and dark (or semi-dark) patterns to try to get them to click ads, at the definite but not immediately obvious expense of privacy.
They could make a ton of money selling ad space whether it's privacy-invading or not (indeed, search ads did just that for many years by targeting the search term, not the personal details of the searcher).
If the infrastructure to select and distribute context-relevant ads is not "adtech", what is it?
Around the announcement of Chrome's "Privacy Sandbox" effort, Google lied about the effectiveness of targeting advertising (claiming it was 52% more effective, despite independent studies settling for somewhere around a 4% difference), because it can't survive in a world where we realize it isn't necessary for ad revenue.
Curious to look into the true effectiveness of this personalized stuff too, even my mom seems creeped out by it. Fingers crossed there's enough backlash and adblock-boycotting that they drop it.
This was a recent study on the effectiveness of personalized ads: https://www.zdnet.com/article/new-research-shows-personalize... (direct link is: https://weis2019.econinfosec.org/wp-content/uploads/sites/6/... )
Although I have to add that my computer's installation of Firefox (with all tracking protections enabled) does not make this blog unreadable. My settings in ublock do though (third party requests are all blocked).
In a world where Google is now hurting content creators and site owners more than it is helping them, I see no reason to help Google via differential privacy when outright blocking tracking data is a viable solution.
I have a hard time believing that because so few sites opt out of Google indexing.
Google has been providing results right in search, meaning that I no longer need to click a link. I would say 80% of my searches are answered by the knowledge graph and smart answers.
In the past, Google created value for website owners by both directing traffic towards them, and sharing ad revenue with them. Now it's doing neither.
While a business can control the information on its website, it can't control the information on Google's, and often that is not to consumers' benefit.
They generally do compensate web sites. For example, when I ask for today's weather, it comes from weather.com and if I ask for the exchange rate for Canadian dollars, it comes from Morningstar. Both of those companies have a deal with Google to provide that information.
The FTC found that Google illegally took content from competitors, but Google was able to lobby away any punitive action: https://www.theguardian.com/technology/2015/mar/20/google-il...
If they do compensate any web sites it's only because the sites are too big to steal from or because they were sued and lost.
> Nobody's answer is going to be to refuse the crumbs.
Except those that think about opportunity cost.
Google's instant answers is a case of them putting the user first and that's hardly ever the wrong decision.
Instant answers is a direct threat to organized knowledge on the Internet, and that can't, by definition, be putting the user first. When we look at my link in my original comment, we discover that instant answers gets Google a larger percentage of the ad money (all of it), and suddenly, we figure out who Google is putting first: Google.
I think your worry about this is overblown. If Google doesn't do a good job with these, then that's an opportunity for somebody to do a better job.
If you are stuck on an ad-supported model from the days of the dot-com boom, then I'm not sure you have a place in the current ecosystem where a lot of searches are started with a voice query and the user expects a short, definitive answer. The ever increasing number of searches that result in no clicks is surely influenced by the rise of Google Assistant, Amazon Echo, Siri, etc...
I don't think it holds up for commercial sites, but for news, hobbies, and blogs (which have been hit hardest by Google's vampire advertisement business) your statement is true for me!
1. User opens the browser, which defaults to the correct website the user is looking for.
2. User goes to Google instead, and searches for the company name. Clicks on the link for the company.
3. User navigates through the company website for the exact same page that the browser was set to open to by default.
Most people won't, perhaps, but I run a handful of reasonably popular websites, and I block Google's crawlers. I have no use for Google.
The rest, an implication of a desire for Google to fail even if the entire business model were based around privacy preserving techniques, makes the privacy saber rattling seem disingenuous and a means to a different end.
You could run ML locally.
If you want to know how long patients are waiting, you can just record wait time without recording details of the patient.
This doesn't prevent fingerprinting. If you send me enough data, even if it is not associated with an ID, I can still use that data to deanonymize or track you. People should read the papers on DP instead of just randomly commenting "you could just do X". The security researchers studying the threat models aren't just doing things in an overly complicated way for no reason.
> ML locally
ML training depends heavy on the training data set. You may not have all of the training data, and the training data may involve other people's private data.
Also, your phone is not the best place to run training for a model that takes 8 hours of training time and gobs of memory. However, you can slice up the problem, and people's phones can work together -- Federated Learning, which goes hand in hand with differential privacy.
Again, differential privacy is about finding excuses to justify the existence and protection of a company that is actively harmful to society. And we need to stop pretending your employer cares about it for any reason but protecting ad revenue.
Moreover, there are public safety reasons to use crowdsourcing. To give one very important example: early-warning earthquake detection, where seconds of warning are crucial. Millions of people are walking around with mobile seismometers which, in aggregate, could detect an earthquake with a high degree of accuracy as soon as it starts, and notify people within 1-5 seconds. The system can only filter out false positives by sampling a vast number of motion sensors over a large area, and such a notification system can only work if it is virtually free of false positives (https://myshake.berkeley.edu/). This technology could save many lives, but needs differential privacy to be secure.
Differential privacy does not exist because of Google, was not invented by Google, and is part of the academic research of the crypto/security community.
I can't speak about my company in aggregate, but I can speak about myself, and I and many of my peers do care about privacy, with cryptographically strong guarantees. I've been doing this for more than 20 years: shipped one of the first anonymizing web proxy servers in 1996, and worked on early forms of crypto-currency, remailers, and onion networks, long before they hit the public consciousness. The people interested in these things work on them because they believe in them and are passionate. Not everything people work on is done because of ads. Chrome didn't need to ship RAPPOR in 2014; there's no business model for it. Likely, it was driven by people who felt deeply about the need to do it.
The other hyperbolic claims are not worth addressing.
Seriously though it was an insightful read.
It's throwing the baby out with the bath water and too limited of a view for my taste.
Also, only the government can generally turn a plate number into a person's identity reliably (I doubt Google knows my license plate, though I'd be interested to know if they did), and even then, a license plate is far less likely to always identify a given person's movements than their phone's GPS.
Speed cameras in the US aren't just radar; they photograph you, and even mail you a photo of yourself when you're caught speeding. This has led to famous cases, like the guy who was caught speeding and whose home was mailed a ticket showing him in the car with his mistress, which his wife opened.
Seems to me you're now hand waving away the long recognized danger of government surveillance, as in, actual danger, wherein government sanctions you based on surveillance, as opposed to theoretical dangers of someone showing you a Nike ad.
So while you're worrying about GDPR, this is happening: https://www.politico.eu/article/berlin-big-brother-state-sur...
The government has problems but trends towards the people's benefit in the long run. Corporate greed trends towards enriching the lives of sick and disturbing rich men like Larry and Sergey.
Corporations are a far worse and far less regulated threat than government, with far fewer checks and balances in place to protect us. Throwing out the government bogeyman hasn't worked on me before and isn't going to work on me now.
This is exactly what I would say about someone cozily working at Google in Silicon Valley. I literally deal with the costs imposed on seniors and less technically literate users by Google's ad nightmare on a daily basis.
And before you get high and mighty about how bad the government is compared to Google, remember that Google is built upon and protected by that very same government, and that Google's NetPAC is actively contributing chunks of many Googler paychecks (perhaps even yours?) to fund the very worst members of that government: https://twitter.com/Pinboard/status/1164945275066056704
I've rate-limited your account again and we will ban you again if you keep using HN as a platform like this. I know your views are sincere, but the way you pursue them has obviously left intellectual curiosity behind a long time ago. Worse, inundating HN threads with whatever agenda you're prosecuting has a destructive effect on the discussions.
This is not cool, and has happened more than once before (e.g. https://news.ycombinator.com/item?id=17216664). This is not a site for platforms, agendas, megaphones, campaigns, or harangues. It's for thoughtful conversation about curious things: https://news.ycombinator.com/newsguidelines.html.
Since the mechanics of the internet will now lead others to accuse me of pro-Google bias, I'll add that we've repeatedly banned accounts for pro-Google agenda-ness just the same way. Our concern is not Google or any other $Bigcorp, it's protecting HN.
Hand waving away US government's vastly more destructive tendencies, and the demonstrated abuses of government surveillance doesn't strike me as helpful. On the long laundry list of bad things Trump is guilty of, misogyny sits below the current proxy war in Yemen which started under Obama but has been boosted by Trump and Kushner.
It's now up to almost 100k dead, 45k wounded, and an incredible 84,000 children dead from starvation.
But hey, we have checks and balances on this right, thanks to FOIA? Can we get Trump's tax returns yet?
How do you know the cameras aren't recording? How do you know the resolution isn't lowered before being made public? How do you know there aren't other cameras that aren't made public?
If you're going to use a threat model that assumes malice on a "trusted" organization, you should do so for all such organizations: whether they be google or a government. It's only if you assume google isn't trustworthy, but governments are, that your threat model makes sense.
If you don't make that particular set of assumptions (and even if you do in many cases!) differential privacy is a net win.
FOIA. That falls into the whole part where government has checks and balances that Google does not.
How are those FOIA requests for stingray usage and tracking information going? Still blocked, last I checked. Like I keep saying: these arguments only work if you assume that the usg is all sunshine and rainbows, which is a naive assumption to make.
If you're willing to call NetPAC bad, which you do, you've got to agree that it's only bad if the people taking money do so and support the $badthings you claim google does, by not outlawing them. But if they're corrupt enough to do that, why aren't they corrupt enough to torpedo a few FOIA requests?
To put it another way if google is so bad, and you can't even get the USG to police google, how can you expect them to police themselves?
This was a recent story I read: https://cdt.org/blog/digital-is-different-pole-camera-ruling...
Meanwhile, Google hid Dragonfly from the internal team that was supposed to review its projects for ethics issues. And it was pressure from journalists, and then the US Congress, that ultimately forced Google to shut it down.
Thankfully, despite NetPAC, it looks like enforcement for Google is finally coming. The hand of justice moves slow, unfortunately, but it eventually gets there.
No. It was pressure from employees, employees who had ethical concerns. Employees who, I'll note, you consistently badmouth for continuing to work at Google. It's not like a journalist just magically inferred that Dragonfly was a thing one day and wrote an article one day. And Congress had almost nothing to do with it, lmao.
> The government doesn't "police themselves", the system of checks and balances ensures one branch polices another
That's true for things that aren't FOIA. But you mentioned FOIA. Which exists as a way for the executive to police themselves. The courts do exist as a check on the executive, (and local governments). So again, how are those stingray FOIA requests going? Note also that "law enforcement" is an FOIA exemption.
> This was a recent story I read: https://cdt.org/blog/digital-is-different-pole-camera-ruling....
This was a targeted piece of tracking that requires a warrant. Dragnet monitoring often does not require a warrant. In other words, setting up a single camera to monitor a single house needs a warrant. Setting up a hundred cameras to monitor the public doesn't, although getting data from them to target a specific person might, at least if it was being used to imprison them. When was the last time Google imprisoned anyone again?
It's also really, really amusing to see you claim that Google is subject to no public accountability. There are absolutely ways you could try to hold Google accountable, if you actually think that they've done something wrong. You just need to be able to convince a Judge. Same as for the executive branch of the government.
But apart from that leak, it was Congress, not employees, that shut down Dragonfly. Google is happy to dismiss employee concerns, as they've done over and over again. But after Sundar was dragged into Congress and repeatedly questioned about helping China, shockingly, Dragonfly disappeared. It's very revisionist to suggest Google shut it down to placate their own employees, rather than avoid action from Congress.
Remember that after some 20% of Google's employees walked out with a list of demands, Google dismissed 6/7ths of those demands without any further comment on the matter... and the employees didn't do anything further to retaliate. Google is well aware that ignoring employee complaints works, but ignoring Congress does not.
People can barely get their stuff back from civil asset forfeiture. Local and state governments are often far more corrupt and less transparent.
Meanwhile, I'd like to request Google's search algorithm, and any emails concerning Google's internal discussion of searches for "mapquest". Any takers?
Which is to say, suggesting FOIA isn't all-inclusive is a pretty bad logical fallacy, when Google is subject to no public accountability whatsoever.
"For people whose property has been seized through civil asset forfeiture, legally regaining such property is notoriously difficult and expensive, with costs sometimes exceeding the value of the property. "
Anyone with a long history on the internet, and in civil libertarian, privacy, and security circles, would be aware of the long history of abuses, including targeting of internet activists. Local law firms in the 80s and 90s often pushed local governments to use CAF to seize computers from people suspected of piracy or hacking, often without evidence or conviction.
In a country with the highest per capita incarceration rate in the modern world, with a trail of dead and damaged bodies around the globe from military adventures, with systemic racism still in force today, and kids separated from parents and held in internment camps, some unable to even be reunited with parents and turned into orphans, I don't want to hear how easy it is to rein in our government, because the evidence suggests restraint has been largely a failure.
The fact of the matter is, our government is not being checked and balanced, not by the legislatures, not by the fifth estate (news media), and not by the people. So on my hierarchy of needs: fighting climate change, war, poverty, gun violence, racism, disease, malnourishment, lack of access to healthcare and education, all of which are being held up by forces that have captured the government and by the apathy of the people and even the news media these days, your obsession that ad targeting is the top evil, and that the other things will be taken care of by oversight, rings hollow.
We are still living with the Patriot Act, the AUMF, National Security Letters, and the Five Eyes agreements that enable programs like MUSCULAR, and that's been going on for 18 years now. Do I think FOIA requests will rein this in? Even Snowden didn't really rein it in much. And things like widespread automatic license plate readers and face detection being used by local governments, like the voting machine changes, gerrymandering, and voter suppression, will largely be installed with only a whimper.
FWIW, that's how I read this too: If Google can get people on the "Differential Privacy"† bandwagon it diffuses their own culpability.
(I think privacy is effectively dead, so from my POV this DP BS is just propping up its corpse.)
†In practice, "Differential Privacy" is a euphemism for "no privacy".
We don't usually call something infeasible "a solution", especially when it means a fundamental structural change without a viable alternative to power a $300B market. So tell me, what is your proposal to give the market an approximately similar amount of efficiency? If you don't have one, that's fine, but your criticism doesn't really resonate, since at least Google tries to improve the status quo.
I don't see Google trying to do that at all. I see them trying to ensure that the status quo will never change.
Which market are you referring to?
Advertising? Search engines? Other?
Why would that happen? DP is a compromise. For people who are concerned about data collection, surely "none" is preferable to that compromise.
I'd love a tool like differential privacy to gather statistics in a provably anonymous way. Without a tool like that, only companies with shedloads of money (like Google, Microsoft, Apple) can afford the market research (or the amount of spaghetti to throw at the wall) to compete.
Why? They're paying money through conversions, which is a pretty significant commitment and proof of its usefulness. You can argue that there's no real incrementality here, but I can tell you that, from my observation, at least half of ad revenue in general very likely has a causal relationship with the ads.
We can't instrument websites and applications. Data, once collected, never dies and nobody can be sure of where it will end up. We have to stop. The model itself is broken. Now it's up to the public policy folks to figure a way to get us out of this mess. I am not optimistic in the short run.
Yes, the names of the engineers on the team are present in the acknowledgement section: but, this is a single line at the bottom of the post, whereas the name of the PM and the fact that he is the author is featured prominently below the title. This pattern is common across many product/OSS library announcements.
Sure, one could argue that the PM has a holistic view of the product or library being announced, and that developing this perspective is in fact their job. But surely a sufficiently senior engineer can (and often does) have an equally holistic, or perhaps even more insightful overview. At least sometimes. Even if this were not the case, why not acknowledge everyone's contributions at the same place in the article?
I think this is symptomatic of the ubiquitous class divide between the "suits" and "nerds" in the corporate world.
The benefit of this is that you can get an exact computation, whereas with differential privacy the output is rougher.
The benefit of differential privacy is that it does not rely on the trust of a majority of other users; you can theoretically verify that a certain percent of the time your device sends out a wrong answer.
The goal of MPC is to hide the inputs of the program. But it is okay for an adversary to make all sorts of inferences by looking at the outputs.
The goal of differential privacy is to limit the kind of inferences that an adversary can make about a particular user/input from the output itself.
I mean, I consider myself moderately knowledgeable about statistics, but even I have problems understanding DP. Worse, scientists who are supposed to use it will also have a harder time understanding DP over their usual methods.
And if a situation arises where a manager at Google has to make the decision to 'slightly' reduce the effectiveness of differential privacy because they need a certain metric for a report do you really think they're going to make the principled choice?
I trust the math and method just fine, but I'm in the "won't trust the implementation" group. Ad companies like Google have demonstrated they can't be trusted too many times for me to think that they'll do DP in a way that goes against their business interest.
If you are interested in learning more, my company (LeapYear) is hiring differential privacy researchers, as well as software engineers interested in developing an enterprise machine learning platform.
Some background on our team: We recently raised our Series B, and hired VMware's first VP of Engineering, who scaled VMware from 15 to 750+ engineers. Almost all of our backend code is written in Haskell.
On the commercial side, we’ve signed several multi-million dollar contracts with Fortune 100 customers in financial services, healthcare, & tech, and deployed on sensitive data at petabyte scale.
Happy to answer questions and review applications submitted here: https://leapyear.ai/careers
Having worked at LeapYear for just over 3 years now, I can confirm that it's a good place to work and is solving interesting technical problems.
I can also answer questions, if anyone is interested.
I think it's good that someone is putting in the work and open sourcing tools to make differential privacy easier. But at the same time I'm wondering if this is just a smokescreen put up by Google.
You still have to trust the company hosting the dataset so distributed solutions lend themselves more naturally to trust.
First of all, there's a lot of recent (and not so recent) work in Local Differential Privacy , which uses the "untrusted curator" model. Although this software doesn't use it, the article mentions RAPPOR, which is a good example.
Second of all, encryption protects your _data_, but not your _privacy_; that is, assuming your data gets used in any way, you have no guarantees about whether the result reveals anything you'd rather keep secret. Of course, if you're talking about normal encryption, your data _can't_ be used, but then you're not really sharing it at all, as much as storing it there (like Dropbox). But once you start talking about things like homomorphic encryption or secure multiparty computation, it's important to keep in mind that they are complements to differential privacy, not replacements.
This isn't a market that needs to be powered. This is a market that needs to be shut down. Targeted advertising is inherently harmful to society.
We detached this subthread from https://news.ycombinator.com/item?id=20888273.
You're essentially arguing that something is beneficial to society because it makes money (mostly for Google).
Surveillance capitalism is harming privacy and the idea of a free society. It gives the people holding the collected information massive power over others.
That's not the case, my view is based on facts:
3. Google's data collection practices have been investigated by several organizations. Here are some quotes:
"As demonstrated throughout this chapter, the ways that Google has designed its Location History and Web & App Activity settings are problematic in light of European data protection requirements. In this report we have questioned the legal basis Google has for collecting and processing this location data. It is questionable whether users have given free, specific, informed, and unambiguous consent to the collection and use of location data through
Location History. It can also be discussed whether the user can withdraw his or her consent, since there is no real option to turn off Location History, only to pause it." - Norwegian consumer council.
"The combination of privacy intrusive defaults and the use of dark patterns nudge users of Facebook and Google [...] toward the least privacy options to a degree that we consider unethical". - Norwegian consumer council
They were already fined by CNIL for GDPR violations and are under investigation in other countries.
The FTC has been corrupted by tech lobbying, but a long time ago they did take notice of Google: "FTC Charges Deceptive Privacy Practices in Google's Rollout of Its Buzz Social Network".
That's an opinion. That's the whole comment that the person you're replying to said was an opinion. And it is.
> ...in an attempt to discredit my statements.
Nope. He wasn't even talking to you or about your statements.
You tried to do the same thing to me that you're trying to do here, in another thread. You said that people "repeatedly" told me something when I had actually made just one comment in the thread at that point.
What's your deal?
In this thread eclipxe seems to be talking about my statements, since they replied to me saying "This is your opinion". Or maybe they also got confused and thought I was the person they were originally talking to.
In any case, I do think eclipxe is incorrect, because the statement which he said is an opinion is that "targeted advertising is harmful to society" and then they counter by saying that "advertising has its place in society". Targeted advertising is a totally different thing from mere advertising, and I've shown that one of the biggest companies doing targeted advertising has been repeatedly investigated and fined for their practices.
These fines are given for things which are harmful to society, QED.
Good luck on your mission!
I think he was joking, referring to a recent trend where Google gets questionably broad patents on security solutions.
IBM and Microsoft were notorious for this, as was AT&T. Microsoft has reformed recently.
If they cared about actual privacy, they would go after no-name ad networks and data mining companies.
As far as the companies they go after, I think going after large brand-name ad networks and data mining companies like Google is at least as important as going after the no-name ones.
(not selling per se, more like carelessly giving away)
> Today, we’re rolling out the open-source version of the differential privacy library that helps power some of Google’s core products.